This paper tackles the problem of providing users with ranked lists of relevant search results by incorporating contextual features of users and search results, and by learning how a user values multiple objectives. For example, to recommend a ranked list of hotels, an algorithm must learn which hotels are appropriately priced for each user, as well as how users vary in weighting price against location. We formulate this context-aware, multi-objective ranking problem as a Multi-Objective Contextual Ranked Bandit (MOCR-B). To solve the MOCR-B problem, we present a novel algorithm named Multi-Objective Utility-Upper Confidence Bound (MOU-UCB). The goal of MOU-UCB is to learn how to generate a ranked list of resources that maximizes rewards across multiple objectives, yielding relevant search results. The algorithm learns to predict rewards in multiple objectives from contextual information, combining the Upper Confidence Bound algorithm for contextual multi-armed bandits with neural network embeddings, and it simultaneously learns how a user weights the multiple objectives. Our empirical results show that the ranked lists generated by MOU-UCB achieve better click-through rates than approaches that do not learn a utility function over the multiple reward objectives.
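The pipeline the abstract describes (per-objective contextual reward estimates with UCB-style exploration, combined through a learned utility weighting to rank candidates) can be sketched roughly as follows. This is an illustrative toy, not the paper's MOU-UCB: linear LinUCB-style ridge estimators stand in for the neural-network embeddings, and the utility-weight update is a hypothetical gradient nudge chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4        # context feature dimension (illustrative)
n_obj = 2    # e.g. price fit and location fit
alpha = 1.0  # UCB exploration strength

# One ridge-regression (LinUCB-style) estimator per objective.
A = [np.eye(d) for _ in range(n_obj)]    # Gram matrices
b = [np.zeros(d) for _ in range(n_obj)]  # accumulated reward vectors
w = np.ones(n_obj) / n_obj               # learned utility weights

def ucb_scores(X):
    """Per-objective UCB reward estimates for each candidate context row in X."""
    scores = np.zeros((X.shape[0], n_obj))
    for j in range(n_obj):
        A_inv = np.linalg.inv(A[j])
        theta = A_inv @ b[j]  # ridge estimate of objective j's reward model
        for i, x in enumerate(X):
            scores[i, j] = theta @ x + alpha * np.sqrt(x @ A_inv @ x)
    return scores

def rank(X, k=3):
    """Rank candidates by the utility-weighted combination of per-objective UCBs."""
    combined = ucb_scores(X) @ w
    return np.argsort(-combined)[:k]

def update(x, rewards, clicked):
    """Update per-objective estimators and nudge the utility weights
    toward objectives whose rewards explain the observed click."""
    global w
    for j in range(n_obj):
        A[j] += np.outer(x, x)
        b[j] += rewards[j] * x
    grad = (clicked - w @ rewards) * rewards  # hypothetical squared-error gradient
    w = np.maximum(w + 0.1 * grad, 1e-6)
    w /= w.sum()  # keep weights a normalized preference profile
```

A usage round would call `rank` on the candidate contexts, show the top-k list, and feed each shown item's per-objective rewards and click outcome back through `update`.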
Citation
Wanigasekara, N., Liang, Y., Goh, S. T., Liu, Y., Williams, J. J., & Rosenblum, D. S. (2019). Learning multi-objective rewards and user utility function in contextual bandits for personalized ranking. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 3835–3841). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/532