Building User Trust in Recommendations via Fairness and Explanations


Abstract

Modern Artificial Intelligence (AI) techniques, based on the statistical analysis of large volumes of data, are quickly gaining traction across various domains. Recommender Systems are a class of AI techniques that extract preference patterns from large traces of human behavior. Recommenders assist people in making decisions that range from harmless everyday dilemmas, e.g., which shoes to buy, to seemingly innocuous choices with long-term, hidden consequences, e.g., which news article to read, up to more critical decisions, e.g., which person to hire. As more and more aspects of our everyday lives are influenced by automated decisions made by recommender systems, it is natural to question whether these systems are trustworthy, particularly given the opaqueness and complexity of their internal workings. These questions arise in the broader context of concerns about the societal and ethical implications of applying AI techniques, concerns that have also brought about new regulations, such as the EU's "Right to Explanation". In this talk, we discuss techniques for increasing the user's trust in the decisions of a recommender system, focusing on fairness aspects and explanation approaches. On the one hand, fairness means that the system exhibits certain desirable ethical traits, such as being non-discriminatory, diversity-aware, and bias-free. On the other hand, explanations provide human-understandable interpretations of the inner workings of the system. Both mechanisms can be used in tandem to promote trust in the system. In addition, we examine user trust from the standpoint of different stakeholders, who potentially have varying levels of technical background and diverse needs.
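To make the fairness notion mentioned above concrete, the sketch below (a hypothetical illustration, not a method from the talk) quantifies one common fairness concern in rankings: whether a top-k recommendation list gives disproportionate exposure to items from one provider group. The `group_exposure` helper and the toy `item_group`/`ranking` data are assumptions introduced here for illustration; the position-bias weight 1/log2(rank+1) is a standard discounting choice borrowed from ranking metrics.

```python
import math

def group_exposure(ranking, item_group, k=10):
    """Share of position-discounted exposure each item group gets in the top-k.

    Items placed higher receive more exposure, weighted by 1/log2(rank+1),
    the same discount used in NDCG-style ranking metrics.
    """
    exposure = {}
    for rank, item in enumerate(ranking[:k], start=1):
        g = item_group[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

# Toy example: five items from two provider groups, A and B.
item_group = {"i1": "A", "i2": "A", "i3": "B", "i4": "B", "i5": "B"}
ranking = ["i1", "i2", "i3", "i4", "i5"]
shares = group_exposure(ranking, item_group, k=5)
```

A system auditor could compare these exposure shares against each group's share of the catalog (here A holds 40% of items but, ranked first and second, collects the majority of the discounted exposure), flagging rankings where the gap exceeds a tolerance.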

Citation (APA)

Sacharidis, D. (2020). Building User Trust in Recommendations via Fairness and Explanations. In UMAP 2020 Adjunct - Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization (pp. 313–314). Association for Computing Machinery, Inc. https://doi.org/10.1145/3386392.3399995
