Effective Approaches to Neural Query Language Identification

Abstract

Query language identification (Q-LID) plays a crucial role in cross-lingual search engines. Q-LID faces two main challenges: (1) queries carry insufficient contextual information for disambiguation; and (2) query-style training examples are lacking for low-resource languages. In this article, we propose a neural Q-LID model that alleviates these problems from both the model-architecture and data-augmentation perspectives. Concretely, we build our model upon the Transformer architecture. To enhance the discrimination of queries, a variety of external features (e.g., character, word, and script) are fed into the model and fused by a multi-scale attention mechanism. Moreover, to remedy the low-resource challenge in this task, a novel machine translation–based strategy is proposed to automatically generate synthetic query-style data for low-resource languages. We contribute the first Q-LID test set, QID-21, which consists of search queries in 21 languages. Experimental results reveal that our model yields better classification accuracy than strong baselines and existing LID systems on both query and traditional LID tasks.
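The paper's code is not reproduced here, but the multi-scale feature-fusion idea can be illustrated with a minimal sketch. Assuming each granularity (character, word, script) has already been pooled into a fixed-size encoding of the query, a learned attention over the three scales can weight and combine them. All class and variable names below are hypothetical; this is a sketch of the general technique, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Fuse character-, word-, and script-level query encodings with a
    learned attention over the three feature scales (hypothetical sketch)."""

    def __init__(self, d_model: int):
        super().__init__()
        # Scores each scale's pooled encoding; softmax turns scores into weights.
        self.scale_scorer = nn.Linear(d_model, 1)

    def forward(self, char_repr, word_repr, script_repr):
        # Each input: (batch, d_model) pooled encoding at one granularity.
        scales = torch.stack([char_repr, word_repr, script_repr], dim=1)  # (B, 3, d)
        weights = torch.softmax(self.scale_scorer(scales), dim=1)         # (B, 3, 1)
        # Attention-weighted sum over the scale axis yields the fused encoding.
        return (weights * scales).sum(dim=1)                              # (B, d)

# Example: fuse three pooled 256-dimensional query encodings.
fusion = MultiScaleFusion(d_model=256)
char_h = torch.randn(8, 256)    # character-level encoding
word_h = torch.randn(8, 256)    # word-level encoding
script_h = torch.randn(8, 256)  # script-level encoding
fused = fusion(char_h, word_h, script_h)  # (8, 256), fed to the classifier
```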

Citation (APA)
Ren, X., Yang, B., Liu, D., Zhang, H., Lv, X., Yao, L., & Xie, J. (2022). Effective Approaches to Neural Query Language Identification. Computational Linguistics, 48(4), 887–906. https://doi.org/10.1162/coli_a_00451
