Conventional single-target Cross-Domain Recommendation (CDR) leverages a source domain, which typically has richer information, to improve recommendation accuracy in a target domain only. In contrast, the novel dual-target CDR has been proposed to improve recommendation accuracy in both domains simultaneously. However, dual-target CDR faces two new challenges: (1) how to generate more representative user and item embeddings, and (2) how to effectively optimize the user/item embeddings in each domain. To address these challenges, in this paper, we propose a graphical and attentional framework, called GA-DTCDR. In GA-DTCDR, we first construct two separate heterogeneous graphs based on the rating and content information from the two domains to generate more representative user and item embeddings. Then, we propose an element-wise attention mechanism to effectively combine the embeddings of common users learned from both domains. Both steps significantly enhance the quality of user and item embeddings and thus improve the recommendation accuracy in each domain. Extensive experiments conducted on four real-world datasets demonstrate that GA-DTCDR significantly outperforms the state-of-the-art approaches.
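The element-wise attention idea mentioned in the abstract can be sketched as follows: for a user common to both domains, each embedding dimension is combined with its own learned weight, rather than choosing one domain's embedding wholesale. This is a minimal NumPy sketch under assumed details; the function and parameter names (`elementwise_attention`, `w_a`, `w_b`) are hypothetical, and the actual framework learns these weights jointly with the rest of the model.

```python
import numpy as np

def elementwise_attention(e_a, e_b, w_a, w_b):
    """Combine a common user's embeddings from domains A and B.

    e_a, e_b : embedding vectors of the same user from the two domains.
    w_a, w_b : learned attention parameter vectors (hypothetical names).
    A per-dimension softmax turns the scores into weights that sum to 1
    in every dimension, so each element of the combined embedding is a
    convex mixture of the two domain-specific embeddings.
    """
    scores = np.stack([w_a * e_a, w_b * e_b])          # (2, d) unnormalized scores
    weights = np.exp(scores) / np.exp(scores).sum(0)   # per-dimension softmax
    return weights[0] * e_a + weights[1] * e_b         # element-wise combination
```

With zero attention parameters the softmax weights are 0.5 everywhere, so the combination degenerates to a simple element-wise average; training the weights lets each dimension lean toward whichever domain is more informative for that user.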
Zhu, F., Wang, Y., Chen, C., Liu, G., & Zheng, X. (2020). A graphical and attentional framework for dual-target cross-domain recommendation. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 3001–3008). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/415