GenRec: Large Language Model for Generative Recommendation
Traditionally, recommendation systems have been built around methods such as collaborative filtering [5, 6, 14], content-based filtering [16, 18], and hybrid approaches [1, 11]. Collaborative filtering leverages user-item interactions, making suggestions based on patterns found in the behavior of similar users or items. Content-based filtering, on the other hand, uses item features to recommend items similar to those a user has previously interacted with. Hybrid methods attempt to combine the strengths of these two approaches to overcome their respective limitations. Despite the progress made with these traditional techniques, significant challenges remain. For instance, collaborative filtering struggles with the cold-start problem, failing to provide accurate recommendations for new users or items due to a lack of historical interaction data. Both approaches also have difficulty handling data sparsity, given that most users interact with only a small fraction of the total items available. Additionally, because of the computational complexity of processing large interaction matrices, these models often struggle to scale as the numbers of users and items grow.
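To make the collaborative-filtering baseline concrete, the following is a minimal item-based sketch: items are scored for a user by their cosine similarity to the items that user has already interacted with. The toy interaction matrix and function names are illustrative, not from any of the cited systems; production implementations use sparse matrices at far larger scale.

```python
import numpy as np

# Toy user-item interaction matrix (rows: users, columns: items).
# A 1 means the user interacted with the item.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

def item_cosine_similarity(R):
    """Cosine similarity between the item columns of R."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # guard against items with no interactions
    Rn = R / norms
    return Rn.T @ Rn

def recommend(R, user, k=1):
    """Rank unseen items by total similarity to the user's history."""
    sim = item_cosine_similarity(R)
    scores = sim @ R[user]
    scores[R[user] > 0] = -np.inf  # mask items the user has already seen
    return np.argsort(scores)[::-1][:k]

print(recommend(R, user=0))  # user 0 liked items 0 and 1 -> item 2 scores highest
```

Note that both weaknesses discussed above are visible even here: a brand-new user has an all-zero row (cold start), and the score quality degrades as the matrix becomes sparser.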
The integration of text-based LLMs into recommendation systems presents an exciting opportunity to address these challenges [3]. These models can learn and understand complex patterns in human language, which allows for a more nuanced interpretation of user preferences and a more sophisticated generation of recommendations. However, many prevailing recommendation models are trained on user and item indexes. This approach discards the text-based information in the dataset, including details such as item titles and category information.
In this paper, we propose a novel large language model for generative recommendation (GenRec). A primary benefit of GenRec is that it capitalizes on the rich, descriptive information inherently contained within item names, whose features can be semantically analyzed to enable a better understanding of an item's potential relevance to the user. This can yield more accurate and personalized recommendations, thereby enhancing the overall user experience.
The architecture of the proposed framework is illustrated in Figure 1. Given a user's item interaction sequence, GenRec formats the item names with a prompt. This reformatted sequence is then used to fine-tune a Large Language Model (LLM). The fine-tuned LLM can then predict subsequent items the user is likely to interact with. In this paper, we select the LLaMA [17] language model as the backbone. However, our framework retains flexibility, allowing seamless integration with any other LLM, thus broadening its potential usability and adaptability.
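The prompt-formatting step can be sketched as follows. A user's sequence of item names is wrapped in an instruction template to form an input/output training pair for supervised fine-tuning, with the last item held out as the prediction target. The template wording, function name, and item names below are illustrative assumptions, not the paper's exact prompts.

```python
def format_prompt(history, target):
    """Turn a sequence of item names into an instruction-style
    fine-tuning example: the history becomes the input prompt and
    the held-out next item becomes the expected output."""
    items = ", ".join(history)
    prompt = (
        "Given the user's interaction history, predict the next item.\n"
        f"History: {items}\n"
        "Next item:"
    )
    return {"input": prompt, "output": target}

# Hypothetical interaction sequence; the final item is the target.
sequence = ["The Matrix", "Inception", "Interstellar", "Blade Runner 2049"]
example = format_prompt(history=sequence[:-1], target=sequence[-1])
print(example["input"])
print(example["output"])
```

Pairs produced this way can be fed to any standard causal-LM fine-tuning loop, which is what keeps the framework backbone-agnostic: swapping LLaMA for another LLM changes the tokenizer and weights but not the data format.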