Improved cross-lingual question retrieval for community question answering

Abstract

We perform cross-lingual question retrieval in community question answering (cQA), i.e., we retrieve similar questions for queries that are given in another language. The standard approach to cross-lingual information retrieval, which is to automatically translate the query into the target language and then continue with a monolingual retrieval model, typically falls short in cQA due to translation errors. This is even more the case for specialized domains such as technical cQA, which we explore in this work. To remedy this, we propose two extensions to this approach that improve cross-lingual question retrieval: (1) we enhance an NMT model with monolingual cQA data to improve the translation quality, and (2) we improve the robustness of a state-of-the-art neural question retrieval model to common translation errors by adding back-translations during training. Our results show that we achieve substantial improvements over the baseline approach and considerably close the gap to a setup with access to an external commercial machine translation service (i.e., Google Translate), which is often not available in practical scenarios. Our source code and data are publicly available.
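The second extension, adding back-translations during training, can be illustrated with a minimal sketch. The snippet below assumes hypothetical `to_pivot` / `to_source` translation callables standing in for the paper's NMT models; it only shows the data-augmentation idea of pairing each clean training query with a noisy round-trip version, not the actual retrieval model or training setup from the paper.

```python
# Minimal sketch of back-translation augmentation for question retrieval
# training data. The translation callables are placeholders (assumptions);
# in practice they would be NMT models for source->pivot and pivot->source.

from typing import Callable, List, Tuple


def back_translate(question: str,
                   to_pivot: Callable[[str], str],
                   to_source: Callable[[str], str]) -> str:
    """Round-trip a question through the pivot language (e.g. en -> de -> en).

    The noisy round-trip output mimics the translation errors the retrieval
    model will encounter for machine-translated queries at test time.
    """
    return to_source(to_pivot(question))


def augment_training_pairs(pairs: List[Tuple[str, str]],
                           to_pivot: Callable[[str], str],
                           to_source: Callable[[str], str]
                           ) -> List[Tuple[str, str]]:
    """Add (back-translated query, candidate question) pairs to the clean ones."""
    augmented = list(pairs)
    for query, candidate in pairs:
        augmented.append((back_translate(query, to_pivot, to_source), candidate))
    return augmented


if __name__ == "__main__":
    # Identity "translators" stand in for real NMT models in this sketch.
    identity = lambda text: text
    pairs = [("how do I reset my router", "router factory reset steps")]
    print(augment_training_pairs(pairs, identity, identity))
```

Training the retrieval model on both the clean and the back-translated queries is what makes it more robust to the translation noise described above.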

Citation (APA)

Rücklé, A., Swarnkar, K., & Gurevych, I. (2019). Improved cross-lingual question retrieval for community question answering. In The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 (pp. 3179–3186). Association for Computing Machinery, Inc. https://doi.org/10.1145/3308558.3313502