On Generative Agents in Recommendation
We envision a recommendation simulator, capitalizing on recent breakthroughs in human-level intelligence exhibited by Large Language Models (LLMs). We propose Agent4Rec, a user simulator for recommendation, leveraging LLM-empowered generative agents equipped with user profile, memory, and action modules specifically tailored for the recommender system. In particular, these agents’ profile modules are initialized using real-world datasets (e.g., MovieLens, Steam, Amazon-Book), capturing users’ unique tastes and social traits; memory modules log both factual and emotional memories and are integrated with an emotion-driven reflection mechanism; action modules support a wide variety of behaviors, spanning both taste-driven and emotion-driven actions. Each agent interacts with personalized recommender models in a page-by-page manner, relying on a pre-implemented collaborative filtering-based recommendation algorithm. We delve into both the capabilities and limitations of Agent4Rec, aiming to explore an essential research question: “To what extent can LLM-empowered generative agents faithfully simulate the behavior of real, autonomous humans in recommender systems?” Extensive and multi-faceted evaluations of Agent4Rec highlight both the alignment and deviation between agents’ behaviors and real users’ personalized preferences. Beyond mere performance comparison, we explore insightful experiments, such as emulating the filter bubble effect and discovering the underlying causal relationships in recommendation tasks.
We introduce Agent4Rec — a general user simulator for recommendation scenarios, which consists of two core facets: LLM-empowered generative agents and the recommendation environment (cf. Figure 2). From the user’s perspective, we simulate 1,000 LLM-empowered generative agents per recommendation scenario, where each agent is initialized from real-world datasets and composed of three essential modules: the user profile, memory, and action modules. The profile module functions as a repository for personalized social traits and historical preferences [38], facilitating the alignment of user portraits with genuine human characteristics. The memory module records past viewing behaviors, system interactions, and emotional memories (i.e., user feelings and fatigue levels) in natural language, enabling information retrieval, preference accumulation, and emotion-driven reflection in a coherent manner. The action module empowers these agents to interact directly with the recommendation environment, through taste-driven actions (i.e., viewing or ignoring recommended movies, rating, and generating post-viewing feelings) and emotion-driven actions (i.e., exiting the system, evaluating recommendation lists, and expressing human-understandable comments). From the perspective of the recommender system simulation, items are recommended by a predetermined recommendation algorithm,
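To make the three-module agent design concrete, the following is a minimal, self-contained sketch of how a profile, memory, and action module might fit together and drive page-by-page interaction. All class and attribute names here (`Profile`, `Memory`, `Agent`, the fatigue threshold of 3 pages, genre-based viewing) are illustrative assumptions, not the paper's actual implementation; in Agent4Rec itself, decisions and reflections are produced by an LLM rather than the simple rules shown.

```python
# Illustrative sketch of an Agent4Rec-style generative agent.
# All names and heuristics are hypothetical; the real system delegates
# decision-making and reflection to an LLM.
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Personalized social traits and historical preferences,
    initialized from a real-world dataset (e.g., MovieLens)."""
    traits: dict
    liked_genres: list

@dataclass
class Memory:
    """Factual and emotional memories recorded as natural language."""
    factual: list = field(default_factory=list)    # viewing behaviors, interactions
    emotional: list = field(default_factory=list)  # feelings, fatigue notes

    def reflect(self) -> str:
        # Emotion-driven reflection: summarize recent emotional entries.
        return " ".join(self.emotional[-3:])

class Agent:
    """A generative agent combining profile, memory, and action modules."""
    def __init__(self, profile: Profile):
        self.profile = profile
        self.memory = Memory()
        self.pages_seen = 0  # crude stand-in for a fatigue level

    def act(self, page: list) -> dict:
        """One page of interaction: taste-driven viewing plus an
        emotion-driven exit decision."""
        # Taste-driven action: view items matching historical preferences.
        viewed = [m for m in page if m["genre"] in self.profile.liked_genres]
        self.memory.factual.append(f"viewed {len(viewed)} of {len(page)} items")
        self.memory.emotional.append("satisfied" if viewed else "bored")
        # Emotion-driven action: exit after fatigue builds up (assumed threshold).
        self.pages_seen += 1
        return {"viewed": viewed, "exit": self.pages_seen >= 3}
```

A driver loop would repeatedly call `act` with each page produced by the recommendation algorithm and stop once the agent signals `exit`, mirroring the page-by-page protocol described above.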