A Look Inside the Black-Box: Towards the Interpretability of Conditioned Variational Autoencoder for Collaborative Filtering


Abstract

Deep learning-based recommender systems currently define the state of the art. Unfortunately, their limited interpretability hinders their adoption in scenarios where explainability is required or desirable. Many efforts have been devoted to injecting explainable information into deep models, but much work remains to fill this gap. In this paper, we take a step in this direction by providing an intuitive interpretation of the inner representation of a conditioned variational autoencoder (C-VAE) for collaborative filtering. The interpretation is performed visually by plotting the principal components of the latent space the model learns on MovieLens. We show that, in the latent space, conditions on correlated genres map users into nearby clusters. This characteristic enables the model to be used for profiling purposes.
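The visual analysis sketched in the abstract is straightforward to reproduce. The snippet below is a minimal illustration, not the authors' code: it assumes the latent mean vectors of a trained C-VAE are available as a NumPy array `z` (one row per encoded user) along with the genre condition of each row in `labels`, and uses scikit-learn's PCA to project them onto their first two principal components. Synthetic blobs stand in for the real latent codes here.

```python
# Minimal sketch of the latent-space visualization (hypothetical names;
# `z` would normally come from the encoder of a trained C-VAE).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA


def plot_latent_pca(z: np.ndarray, labels: np.ndarray) -> None:
    """Project latent vectors onto their first two principal components
    and scatter-plot them, one color per genre condition."""
    z2 = PCA(n_components=2).fit_transform(z)
    for genre in np.unique(labels):
        mask = labels == genre
        plt.scatter(z2[mask, 0], z2[mask, 1], s=8, alpha=0.6, label=str(genre))
    plt.xlabel("PC 1")
    plt.ylabel("PC 2")
    plt.legend(title="condition (genre)")
    plt.title("C-VAE latent space, first two principal components")
    plt.show()


if __name__ == "__main__":
    # Stand-in data: three Gaussian blobs mimicking users encoded under
    # three genre conditions; in practice `z` comes from the C-VAE encoder.
    rng = np.random.default_rng(0)
    z = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 16))
                   for c in (-2.0, 0.0, 2.0)])
    labels = np.repeat(["Action", "Comedy", "Horror"], 200)
    plot_latent_pca(z, labels)
```

If conditions on correlated genres do map users close together, their clusters in the PC1/PC2 plane should visibly overlap or neighbor each other, which is the effect the paper reports on MovieLens.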

Citation (APA)

Carraro, T., Polato, M., & Aiolli, F. (2020). A Look Inside the Black-Box: Towards the Interpretability of Conditioned Variational Autoencoder for Collaborative Filtering. In UMAP 2020 Adjunct - Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization (pp. 233–236). Association for Computing Machinery, Inc. https://doi.org/10.1145/3386392.3399305
