Abstract
Collaborative filtering is a widely used method in recommender systems research. Although it traditionally relies solely on rating data, many contemporary models incorporate review information to address issues such as data sparsity. However, previous recommender systems that utilised review texts to capture user preferences and item features often rely on a single embedding model to represent those features, which may limit the richness of the extracted information. Recent advancements suggest that combining multiple pre-trained embedding models can enhance text representation by leveraging the strengths of different encoding methods. In this study, we propose a novel recommender system model, the Multi-embedding Fusion Network for Recommendation (MFNR), which employs a multi-embedding approach to effectively capture and represent user and item features in review texts. Specifically, the proposed model integrates Bidirectional Encoder Representations from Transformers (BERT) and its optimised variant, RoBERTa, both of which are pre-trained transformer-based models designed for natural language understanding. By leveraging their contextual embeddings, our model extracts enriched feature representations from review texts. Extensive experiments conducted on real-world review datasets from Amazon.com and Goodreads.com demonstrate that MFNR significantly outperforms existing baseline models, achieving average improvements of 9.18% in RMSE and 14.81% in MAE. These results highlight the efficacy of the multi-embedding approach, indicating its potential for broader application in complex recommendation scenarios.
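The abstract does not specify MFNR's fusion architecture, but the core idea of combining embeddings from two pre-trained encoders can be sketched in a few lines. The snippet below is a minimal illustration only: random vectors stand in for the 768-dimensional review embeddings that base-size BERT and RoBERTa encoders would produce, and the concatenation-plus-projection fusion and the 256-dimensional output size are hypothetical choices, not the paper's actual design.

```python
import numpy as np

# Stand-ins for the [CLS]/pooled review embeddings a pre-trained BERT and
# RoBERTa model would each produce for the same review text. Both base-size
# models output 768-dimensional vectors; random values are used here so the
# sketch runs without downloading model weights.
rng = np.random.default_rng(0)
bert_emb = rng.standard_normal(768)
roberta_emb = rng.standard_normal(768)

# One simple fusion strategy (assumed, not the paper's): concatenate the two
# embeddings, then project the result into a shared feature space with a
# learned linear layer. The weights below are random stand-ins for trained
# parameters, and the 256-dim output size is an illustrative choice.
fused = np.concatenate([bert_emb, roberta_emb])  # shape: (1536,)
W = rng.standard_normal((256, 1536)) * 0.01      # hypothetical projection
feature = np.tanh(W @ fused)                     # fused review feature, (256,)

print(fused.shape, feature.shape)  # (1536,) (256,)
```

In a full model, such fused review features would feed the user and item representations used for rating prediction; the actual fusion network in MFNR may differ.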
Lim, H., Li, Q., Yang, S., & Kim, J. (2025). A BERT-Based Multi-Embedding Fusion Method Using Review Text for Recommendation. Expert Systems, 42(5). https://doi.org/10.1111/exsy.70041