Improving Conversational Recommender Systems via Transformer-based Sequential Modelling

Paper · Source
Recommenders Conversational

In Conversational Recommender Systems (CRSs), conversations usually involve a set of related items and entities, e.g., attributes of items. These items and entities are mentioned in an order that follows the development of the dialogue; in other words, potential sequential dependencies exist in conversations. However, most existing CRSs neglect these potential sequential dependencies. In this paper, we propose a Transformer-based sequential conversational recommendation method, named TSCR, which models the sequential dependencies in conversations to improve CRSs. We represent conversations by items and entities, and construct user sequences to discover user preferences by considering both the mentioned items and entities.

it is common to mention entities that are closely related to the items to be recommended, e.g., the director and genre of a movie in our example. Second, the mentioned items (and entities) naturally form a sequence that follows the development of the conversation. That is, the order of items in a conversation matters, and potential sequential dependencies exist within the conversation. For instance, when people talk about a movie (e.g., Fast & Furious 1), it is natural and reasonable to recommend a sequel or prequel of that movie (e.g., Fast & Furious 4). We argue that modelling such sequential dependencies can effectively capture the context
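The sequential-dependency argument above can be illustrated with a minimal sketch: embed the sequence of items and entities mentioned so far, run one self-attention step (the core Transformer building block) with positional embeddings so that order matters, and rank catalogue items against the last contextualised position. All names, sizes, and random embeddings below are illustrative assumptions, not TSCR's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical catalogue: shared ids for items and entities mentioned in dialogues.
num_entities, d = 100, 16
emb = rng.normal(scale=0.1, size=(num_entities, d))  # embedding table (assumed)
pos = rng.normal(scale=0.1, size=(50, d))            # positional embeddings (assumed)

def attend(seq_ids):
    """Single-head self-attention over the sequence of mentioned items/entities."""
    x = emb[seq_ids] + pos[: len(seq_ids)]           # order-aware inputs
    scores = x @ x.T / np.sqrt(d)                    # scaled dot-product attention
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over positions
    return w @ x                                     # contextualised sequence

def next_item_scores(seq_ids):
    """Score every catalogue item against the latest contextualised mention."""
    h = attend(seq_ids)[-1]                          # representation of the last mention
    return emb @ h                                   # dot-product ranking over all items

dialogue = [3, 17, 42]                               # e.g. a genre, a director, a movie
ranking = np.argsort(-next_item_scores(dialogue))    # candidate items, best first
print(ranking[:5])
```

Because the positional embeddings enter the representation, mentioning the same items in a different order yields a different ranking, which is exactly the order effect the paper argues a CRS should exploit.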

CRS has become a rapidly growing research topic in recent years, and a number of solutions have been proposed [5, 12, 13, 32]. One line of work is based on the “system asks – user responds” mode and simulates conversations by using some “anchor” text, e.g., item aspects [26], entities [31–35], or facets and attributes [7, 8, 17, 29]. These methods usually utilize a belief tracker to infer anchor-based preferences and thereby improve the CRS. Another mainstream approach is based on human-generated dialogues [1, 2, 10, 11, 14, 20, 21, 24, 25, 28, 30]. For instance, at an early stage, Li et al. [9] proposed a CRS dataset, ReDial, and presented a benchmark model for item recommendation; ReDial soon became the most widely used dataset for CRS. Given that entities within ReDial utterances are linked to a knowledge base, most subsequent work uses knowledge graphs to improve CRS [2, 11, 14, 27, 28, 30]. Although the aforementioned work demonstrates success to some extent, it neglects the order impact among items and entities discussed earlier. Hence, these existing solutions do not model the potential sequential dependencies within conversations.