Pre-training Tasks for User Intent Detection and Embedding Retrieval in E-commerce Search

Abstract

BERT-style models pre-trained on a general corpus (e.g., Wikipedia) and fine-tuned on a task-specific corpus have recently emerged as breakthrough techniques for many NLP tasks: question answering, text classification, sequence labeling, and so on. However, this technique may not always work, especially in two scenarios: a corpus whose text differs greatly from the general corpus (Wikipedia), or a task that learns an embedding spatial distribution for a specific purpose (e.g., approximate nearest neighbor search). In this paper, to tackle these two scenarios encountered in an industrial e-commerce search system, we propose customized and novel pre-training tasks for two critical modules: user intent detection and semantic embedding retrieval. The customized pre-trained models after fine-tuning, kept to less than 10% of BERT-base's size to allow cost-efficient CPU serving, significantly outperform both baseline models: 1) a model with no pre-training and 2) a model fine-tuned from the official BERT pre-trained on the general corpus, on both offline datasets and the online system. We have open sourced our datasets for the sake of reproducibility and future work.
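The paper itself does not include code here; the sketch below is only a rough illustration of the semantic embedding retrieval setup the abstract describes. The embedding dimension, data, and brute-force search are assumptions, not the authors' implementation; a production system would query an ANN index (e.g., HNSW or IVF) built over the same vectors.

```python
import numpy as np

# Illustrative sketch only: names and dimensions are assumptions, not the paper's code.
EMB_DIM = 128  # hypothetical embedding size for a compact model (<10% of BERT-base)

rng = np.random.default_rng(0)
item_embs = rng.normal(size=(10_000, EMB_DIM)).astype(np.float32)  # catalog item embeddings
query_emb = rng.normal(size=(EMB_DIM,)).astype(np.float32)         # encoded user query

# L2-normalize so the inner product equals cosine similarity, reflecting the
# embedding spatial-distribution objective the abstract refers to.
item_embs /= np.linalg.norm(item_embs, axis=1, keepdims=True)
query_emb /= np.linalg.norm(query_emb)

scores = item_embs @ query_emb            # cosine similarity to every item
top_k = np.argsort(-scores)[:10]          # exact top-10; an ANN index would approximate this
print(top_k, scores[top_k])
```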

Citation (APA)

Qiu, Y., Zhao, C., Zhang, H., Zhuo, J., Li, T., Zhang, X., … Yang, W. Y. (2022). Pre-training Tasks for User Intent Detection and Embedding Retrieval in E-commerce Search. In International Conference on Information and Knowledge Management, Proceedings (pp. 4424–4428). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557670
