Selective Fairness in Recommendation via Prompts


Abstract

Recommendation fairness has attracted great attention recently. In real-world systems, users usually have multiple sensitive attributes (e.g., age, gender, and occupation), and they may not want their recommendation results to be influenced by those attributes. Moreover, which of these attributes should be considered in fairness-aware modeling, and when, should depend on each user's specific demands. In this work, we define the selective fairness task, in which users can flexibly choose the sensitive attributes with respect to which the recommendation model should be bias-free. We propose a novel parameter-efficient prompt-based fairness-aware recommendation (PFRec) framework, which relies on attribute-specific, prompt-based bias eliminators with adversarial training, enabling selective fairness under different attribute combinations in sequential recommendation. Both task-specific and user-specific prompts are considered. We conduct extensive evaluations to verify PFRec's superiority in selective fairness. The source code is released at https://github.com/wyqing20/PFRec.
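The abstract names the core mechanism: attribute-specific prompts are attached to the input of a sequential recommender, and adversarial training pushes the resulting user representation to carry no information about the selected sensitive attributes. As a rough illustration only, here is a minimal PyTorch sketch of that idea; all class names (PromptedSeqRec, GradReverse), sizes, and the use of gradient reversal for the adversarial objective are assumptions made for this sketch, not details taken from the paper.

    import torch
    import torch.nn as nn

    # Hypothetical sketch of a prompt-based bias eliminator with adversarial
    # training, in the spirit of the PFRec abstract. Module names, sizes, and
    # the gradient-reversal trick are assumptions, not the authors' code.

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; reverses and scales gradients."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class PromptedSeqRec(nn.Module):
        """Sequential recommender whose input is prefixed with learnable
        attribute-specific prompts selecting which attributes to debias."""
        def __init__(self, n_items, n_attrs, dim=64, prompt_len=2):
            super().__init__()
            self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
            # One learnable prompt (prompt_len vectors) per sensitive attribute.
            self.attr_prompts = nn.Parameter(
                torch.randn(n_attrs, prompt_len, dim) * 0.02)
            enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            # One adversarial discriminator per attribute (binary here).
            self.discriminators = nn.ModuleList(
                [nn.Linear(dim, 2) for _ in range(n_attrs)])

        def forward(self, item_seq, active_attrs, lambd=1.0):
            # item_seq: (B, L) item ids; active_attrs: indices of the
            # attributes the user wants the model to be bias-free toward.
            x = self.item_emb(item_seq)                      # (B, L, D)
            for a in active_attrs:                           # prepend prompts
                p = self.attr_prompts[a].unsqueeze(0).expand(x.size(0), -1, -1)
                x = torch.cat([p, x], dim=1)
            h = self.encoder(x)
            user_repr = h[:, -1]                             # user vector
            scores = user_repr @ self.item_emb.weight.T      # next-item scores
            # Each discriminator tries to recover its attribute from the user
            # representation; gradient reversal makes the encoder hide it.
            adv_logits = {
                a: self.discriminators[a](GradReverse.apply(user_repr, lambd))
                for a in active_attrs
            }
            return scores, adv_logits

    # Example: debias with respect to hypothetical attributes 0 and 2.
    model = PromptedSeqRec(n_items=10000, n_attrs=3)
    seq = torch.randint(1, 10000, (8, 20))
    scores, adv_logits = model(seq, active_attrs=[0, 2])

In this sketch, training would add each discriminator's classification loss to the recommendation loss; the gradient-reversal layer makes minimizing that sum adversarial, so the encoder learns to suppress the chosen attributes while the discriminators try to recover them.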


Citation (APA)
Wu, Y., Xie, R., Zhu, Y., Zhuang, F., Xiang, A., Zhang, X., … He, Q. (2022). Selective Fairness in Recommendation via Prompts. In SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 2657–2662). Association for Computing Machinery, Inc. https://doi.org/10.1145/3477495.3531913
