Dynamically Expandable Graph Convolution for Streaming Recommendation
2.1 Streaming Recommendation
Due to real-world dynamics such as continuously shifting user preferences and ever-increasing numbers of users and items, conventional recommender systems trained on static, fixed datasets usually suffer from predicting outdated interactions and preferences, disregarding trends and preference shifts, and ignoring practical industrial constraints such as limited time and resources. To tackle these challenges, streaming recommendation has been proposed, in which both the data and the recommendation model are updated dynamically along the timeline [8, 10, 15, 16, 47, 48, 53]. Early works recommend items to users based on popularity, recency, and trend analysis [7, 30, 49], but pay little attention to distilling collaborative signals. To extract such information, other works [8, 16, 18, 44] introduce classical recommendation algorithms such as collaborative filtering and matrix factorization into the streaming setting. In addition, some recent works perform streaming recommendation from the perspectives of online clustering of bandits and collaborative filtering bandits [2, 19, 20, 27, 28]. Thanks to the great success of graph neural networks in modeling complex relationships, how to apply GCN-based recommendation models to streaming recommendation has recently attracted increasing attention [1, 51, 52, 56, 61]. Moreover, streaming recommendation algorithms have been successfully deployed on industrial online service platforms such as Google, Huawei, and Tencent [5, 15, 46]. However, streaming recommendation has long lacked a standardized definition, especially in the deep-model-based setting. In this paper, we draw intuitions from previous research and the most recent progress, and summarize a definition of streaming recommendation.
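The streaming protocol described above, in which data arrives as time segments and the model is updated along the timeline, can be sketched as a generic test-then-train loop. This is a minimal illustration only; the function names `update` and `evaluate` are placeholders, not APIs from any of the surveyed works.

```python
def streaming_train(segments, model, update, evaluate):
    """Streaming recommendation protocol sketch: data arrives as time
    segments; the model is first evaluated on each incoming segment
    (prequential, "test-then-train"), then updated on it, so both the
    data and the model evolve dynamically along the timeline."""
    scores = []
    for segment in segments:
        scores.append(evaluate(model, segment))  # test on unseen segment
        model = update(model, segment)           # then update the model
    return model, scores
```

Any model family (popularity-based, matrix factorization, or GCN-based) can be plugged in through the `update` and `evaluate` callbacks.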
2.2 Continual Learning
Continual learning originally received great attention in the computer vision and natural language processing areas, where different tasks arrive in sequence. Various methods have been proposed to prevent catastrophic forgetting and to transfer knowledge effectively. Mainstream continual learning algorithms can be classified into three categories: experience replay [9, 25, 34, 38, 41], knowledge distillation/model regularization [17, 23, 24, 40], and model isolation [21, 35, 39, 45, 59, 60, 65]. Continual learning is often regarded as a trade-off between knowledge retention (stability) and knowledge expansion (plasticity) [35], and model isolation-based methods provide more explicit control over this trade-off. Considering that graph-based models have been widely studied.