Abstract
Modern recommender systems are increasingly expected to provide informative explanations that enable users to understand why particular items are recommended. However, previous methods struggle to interpret the raw user and item IDs in real-world datasets and thus fail to extract characteristics adequate for controllable generation. To address this issue, we propose disentangled conditional variational autoencoders (CVAEs) for explainable recommendation, which leverage disentangled latent preference factors and guide explanation generation with a refined CVAE condition via a self-regularization contrastive learning loss. Extensive experiments demonstrate that our method generates high-quality explanations and achieves new state-of-the-art results across diverse domains.
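As a rough illustration only (not the authors' implementation, whose details are not given in this abstract), a contrastive objective of the kind alluded to above can be sketched as an InfoNCE-style loss that pulls a condition embedding toward a matching "positive" view and pushes it away from mismatched "negative" views; all embedding shapes and the temperature value here are assumptions:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over 1-D embedding vectors:
    the anchor should be most similar to its positive view."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Similarities scaled by temperature; the positive sits at index 0.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Numerically stable softmax cross-entropy against index 0.
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.01 * rng.normal(size=8)   # near-identical view
negatives = [rng.normal(size=8) for _ in range(4)]
loss = info_nce_loss(anchor, positive, negatives)
```

Minimizing such a loss encourages the conditioning representation to stay close to its own refined view while remaining distinguishable from other user-item pairs, which is the general role a self-regularizing contrastive term plays in conditional generation.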
Citation
Wang, L., Cai, Z., de Melo, G., Cao, Z., & He, L. (2023). Disentangled CVAEs with Contrastive Learning for Explainable Recommendation. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 13691–13699). AAAI Press. https://doi.org/10.1609/aaai.v37i11.26604