Learning to Rank for Recommender Systems

Paper · Source
Recommenders Architectures

“Various models for implicit feedback data use learning-to-rank [4] techniques to optimize ranking metrics defined on binary relevance data. For example, several CF models [7, 9, 10] compute near-optimal ranked lists with respect to the Area Under the Curve (AUC), Average Precision (AP) [5], and Reciprocal Rank [11] metrics. However, metrics defined for binary relevance data are not directly suitable for graded relevance data. Binary metrics, and the CF methods that optimize them, can be applied to graded relevance data only after it is converted to binary relevance, e.g., by imposing a threshold (such as setting rating 4 as the threshold on a 1-5 scale, so that items rated 4 and 5 are treated as relevant). This process has two major drawbacks: 1) we lose grading information within the rated items, e.g., items rated 5 are more relevant than items rated 4, and this information is crucial for building precise models; 2) the choice of the relevance threshold is arbitrary and will have an impact on the performance of different recommendation approaches.”
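To make the thresholding step concrete, here is a minimal sketch (the item names, ratings, and `binarize` helper are hypothetical, not from the paper) showing how graded 1-5 ratings collapse to binary relevance, and where the two drawbacks appear:

```python
# Hypothetical graded relevance data on a 1-5 scale.
ratings = {"item_a": 5, "item_b": 4, "item_c": 3, "item_d": 1}

def binarize(graded, threshold=4):
    """Mark an item relevant (1) iff its rating meets the threshold."""
    return {item: int(r >= threshold) for item, r in graded.items()}

binary = binarize(ratings)
# Drawback 1: item_a (rated 5) and item_b (rated 4) both map to 1,
# so the 5-vs-4 grading information is lost.
# Drawback 2: the threshold is arbitrary; with threshold=3,
# item_c would flip from irrelevant to relevant.
print(binary)  # {'item_a': 1, 'item_b': 1, 'item_c': 0, 'item_d': 0}
```

Lowering the threshold to 3 changes which items a binary-metric optimizer treats as positives, which is exactly why the quote calls the choice arbitrary yet consequential.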