Abstract
In this paper, we present the first publicly available multilingual FAQ dataset. We collected around 6M FAQ pairs from the web, in 21 different languages. Although this is significantly larger than existing FAQ retrieval datasets, it comes with its own challenges: duplication of content and uneven distribution of topics. We adopt a setup similar to Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) and test various bi-encoders on this dataset. Our experiments reveal that a multilingual model based on XLM-RoBERTa (Conneau et al., 2019) achieves the best results, except for English. Lower-resource languages seem to learn from one another, as a multilingual model achieves a higher MRR than language-specific ones. Our qualitative analysis reveals the brittleness of the model on simple word changes. We publicly release our dataset, model, and training script.
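The abstract compares models by MRR (Mean Reciprocal Rank). As a reference for readers unfamiliar with the metric, a minimal sketch of its computation follows; this is illustrative only, not the authors' evaluation code, and the input format (1-based rank of the correct FAQ per query, 0 when it is not retrieved) is an assumption.

```python
def mean_reciprocal_rank(ranks):
    """Compute MRR from the 1-based rank of the correct answer per query.

    ranks: list of ints, one per query; 0 means the correct answer
    was not retrieved, contributing 0 to the mean.
    """
    if not ranks:
        raise ValueError("ranks must be non-empty")
    return sum(1.0 / r for r in ranks if r > 0) / len(ranks)


# Example: correct answer ranked 1st, 2nd, missing, and 4th across four queries.
print(mean_reciprocal_rank([1, 2, 0, 4]))  # (1 + 0.5 + 0 + 0.25) / 4 = 0.4375
```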
Citation
De Bruyn, M., Lotfi, E., Buhmann, J., & Daelemans, W. (2021). MFAQ: a Multilingual FAQ Dataset. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, MRQA 2021 (pp. 1–13). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.mrqa-1.1