A Multi-task Learning Framework for Product Ranking with BERT

Abstract

Product ranking is a crucial component of many e-commerce services. One of the major challenges in product search is the vocabulary mismatch between queries and products, a gap that can be more pronounced than in other information retrieval domains. While there is a growing collection of neural learning-to-match methods aimed specifically at overcoming this issue, they do not leverage recent advances in large language models for product search. In addition, product ranking often involves multiple types of engagement signals, such as clicks, add-to-cart actions, and purchases, whereas most existing work focuses on optimizing a single metric such as click-through rate, which can suffer from data sparsity. In this work, we propose a novel end-to-end multi-task learning framework for product ranking with BERT to address these challenges. The proposed model fine-tunes a domain-specific BERT to bridge the vocabulary gap and employs multi-task learning to optimize multiple objectives simultaneously, yielding a general end-to-end learning framework for product search. We conduct comprehensive experiments on a real-world e-commerce dataset and demonstrate significant improvements of the proposed approach over state-of-the-art baseline methods.
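
To make the setup concrete, the following is a minimal sketch (not the authors' implementation) of the general idea described in the abstract: a shared BERT encoder scores a query-product pair, and separate heads predict the click, add-to-cart, and purchase engagement signals, trained with a weighted sum of per-task losses. The model name, task names, loss weights, and example inputs below are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskBertRanker(nn.Module):
    """Shared BERT encoder with one prediction head per engagement signal (illustrative)."""
    def __init__(self, model_name="bert-base-uncased",
                 tasks=("click", "add_to_cart", "purchase")):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared, fine-tuned encoder
        hidden = self.encoder.config.hidden_size
        # one lightweight scoring head per task
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, 1) for t in tasks})

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation of the query-product pair
        return {t: head(cls).squeeze(-1) for t, head in self.heads.items()}

def multitask_loss(logits, labels, weights=None):
    """Weighted sum of per-task binary cross-entropy losses (weights are an assumption)."""
    weights = weights or {"click": 1.0, "add_to_cart": 1.0, "purchase": 1.0}
    bce = nn.BCEWithLogitsLoss()
    return sum(w * bce(logits[t], labels[t]) for t, w in weights.items())

# Usage: encode a query-product pair as one sequence and score all tasks jointly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskBertRanker()
batch = tokenizer(["red running shoes"],
                  ["lightweight cushioned road running shoe"],
                  padding=True, truncation=True, return_tensors="pt")
scores = model(batch["input_ids"], batch["attention_mask"])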

Citation (APA)

Wu, X., Magnani, A., Chaidaroon, S., Puthenputhussery, A., Liao, C., & Fang, Y. (2022). A Multi-task Learning Framework for Product Ranking with BERT. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 493–501). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3511977
