LLM-Rec: Personalized Recommendation via Prompting Large Language Models
The use of large language models in recommender systems has garnered significant attention in recent research. Numerous studies have explored the direct use of LLMs as recommender models. The underlying principle of these approaches involves constructing prompts that encompass the recommendation task, user profiles, item attributes, and user-item interactions. These task-specific prompts are then presented as input to the LLM, which is instructed to predict the likelihood of interaction between a given user and item.
We approach the problem from a different perspective. Rather than using LLMs as recommender models, this study explores prompting strategies that augment input text with LLMs for personalized content recommendation. By leveraging LLMs, which are trained on extensive text corpora, we seek to unlock their potential to generate high-quality, context-aware input text for enhanced recommendations.
We consider three basic prompting strategies, referred to as p1, p2, and p3 in the following experiments.
• p1: This prompt instructs the LLM to paraphrase the original content description, emphasizing the objective of preserving the same information without introducing any additional details.
• p2: This prompt instructs the LLM to summarize the content description using tags, aiming to generate a more concise overview that captures the key information.
• p3: This prompt instructs the LLM to deduce the characteristics of the original content description and respond with categories at a broader, less detailed level of granularity.
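The three basic prompting strategies above can be sketched as simple prompt templates. Note that the exact wordings below are illustrative assumptions, not the paper's verbatim prompts:

```python
# Illustrative templates for the three basic prompting strategies (p1-p3).
# The phrasing is a sketch inferred from the strategy descriptions, not the
# original prompt text used in the experiments.
BASIC_PROMPTS = {
    # p1: paraphrase while preserving the original information
    "p1": ("Paraphrase the following content description. Keep the same "
           "information and do not introduce any additional details:\n{description}"),
    # p2: summarize the description with tags
    "p2": ("Summarize the following content description using tags that "
           "capture its key information:\n{description}"),
    # p3: deduce broad, coarse-grained categories
    "p3": ("Deduce the characteristics of the following content description "
           "and answer with broad categories:\n{description}"),
}

def build_prompt(strategy: str, description: str) -> str:
    """Fill the chosen basic prompt template with an item's description."""
    return BASIC_PROMPTS[strategy].format(description=description)
```

Each filled prompt would then be sent to the LLM, and the response kept as an augmented version of the item description.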
We investigate various prompting strategies for enhancing personalized content recommendation performance with large language models (LLMs) through input augmentation. Our proposed approach, termed LLM-Rec, encompasses four distinct prompting strategies: (1) basic prompting, (2) recommendation-driven prompting, (3) engagement-guided prompting, and (4) recommendation-driven engagement-guided prompting. Our empirical experiments show that combining the original content description with the augmented input text generated by the LLM using these prompting strategies leads to improved recommendation performance. This finding highlights the importance of incorporating diverse prompts and input-augmentation techniques to enhance the recommendation capabilities of large language models for personalized content recommendation.
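The combination step described above can be sketched minimally: the original description is concatenated with the LLM-generated augmentations before being fed to the recommender's text encoder. The function name and ordering are assumptions for illustration:

```python
def augment_input(description: str, llm_outputs: dict[str, str]) -> str:
    """Combine the original content description with LLM-generated
    augmentations (e.g., the p1-p3 responses) into a single input text.
    The simple space-joined concatenation here is an illustrative choice,
    not necessarily the exact fusion used in the experiments."""
    # Sort by strategy name so the combined text is deterministic.
    parts = [description] + [llm_outputs[k] for k in sorted(llm_outputs)]
    return " ".join(parts)
```

The resulting string would replace the raw description as input to whatever downstream recommendation model consumes the item text.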