Abstract
Traditional recommender systems, such as collaborative filtering and content-based filtering, have inherent limitations, including cold-start issues and difficulty filtering relevant information. In addition, the sparsity and noise in news data significantly degrade recommendation accuracy. To address these issues, this paper introduces a multi-head self-attention mechanism to capture both long-term and local dependencies within user sequences. A Multimodal Variational Autoencoder (MVAVE) is built on this mechanism, incorporating noise injection for denoising to mitigate the impact of data noise on recommendation performance. Building on MVAVE, an adversarial multimodal data sequence recommender system is further developed by integrating MVAVE with a Generative Adversarial Network (GAN). Through adversarial training, the system reduces the reconstruction loss of user-item interactions and improves recommendation accuracy. Moreover, multimodal data fusion provides richer and more comprehensive information. Experiments on the NewsStories and Amazon datasets demonstrate superior recommendation performance, with optimal results when the number of multi-head attention heads (HEAD) is set to 6 and the hyperparameter to 0.2. Specifically, the system achieves a Recall of 21.74% and a Precision of 22.06% on NewsStories, and a Recall of 18.67% and a Precision of 18.36% on Amazon. The proposed system effectively captures user preferences, avoids recommending irrelevant content, overcomes the limitations of traditional recommender systems, and significantly improves recommendation accuracy.
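The abstract does not give the model's equations, so the following is only a rough illustrative sketch (not the authors' implementation) of the two ingredients it names: multi-head self-attention over a user sequence, with noise injected into the input as in a denoising autoencoder. The head count of 6 and the noise scale of 0.2 mirror the settings reported above, but the weight shapes and initialization are assumptions for illustration.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product multi-head self-attention.

    X: (seq_len, d_model) user-sequence embeddings.
    Wq, Wk, Wv, Wo: (d_model, d_model) projection matrices (assumed shapes).
    Returns a (seq_len, d_model) contextualized representation.
    """
    seq_len, d_model = X.shape
    d_head = d_model // n_heads

    # Linear projections, then split the model dimension into heads:
    # (seq_len, d_model) -> (n_heads, seq_len, d_head)
    def split(M):
        return M.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Qh, Kh, Vh = split(X @ Wq), split(X @ Wk), split(X @ Wv)

    # Per-head attention weights over all sequence positions.
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)

    # Weighted sum of values, heads concatenated back, output projection.
    out = (attn @ Vh).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo


rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 12, 6, 5  # HEAD = 6, as in the reported setting
X = rng.standard_normal((seq_len, d_model))

# Denoising-style noise injection: the encoder sees a corrupted input
# (0.2 matches the hyperparameter value quoted in the abstract).
X_noisy = X + 0.2 * rng.standard_normal(X.shape)

W = [0.1 * rng.standard_normal((d_model, d_model)) for _ in range(4)]
H = multi_head_self_attention(X_noisy, *W, n_heads)
print(H.shape)  # (5, 12)
```

In a full MVAVE-style model, `H` would feed the variational encoder (producing a mean and variance for the latent code), and the GAN discriminator would be trained adversarially against the decoder's reconstructions; those parts are omitted here.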
Tang, P., Zhu, S., & Alatas, B. (2025). Improving News Recommendation Accuracy Through Multimodal Variational Autoencoder and Adversarial Training. IEEE Access, 13, 85269–85278. https://doi.org/10.1109/ACCESS.2025.3568514