Multilingual semantic parsing and code-switching

Citations: 33
Readers: 96 (Mendeley users who have this article in their library)

Abstract

Extending semantic parsing systems to new domains and languages is an expensive, time-consuming process, so making effective use of existing resources is critical. In this paper, we describe a transfer learning method using cross-lingual word embeddings in a sequence-to-sequence model. On the NLmaps corpus, our approach achieves state-of-the-art accuracy of 85.7% for English. Most importantly, we observe a consistent improvement for German over several baseline domain-adaptation techniques. As a by-product of this approach, our models trained on a combination of English and German utterances perform reasonably well on code-switching utterances that mix English and German, even though the training data contains no code-switching. To the best of our knowledge, this is the first study of code-switching in semantic parsing. We manually construct a set of code-switching test utterances for the NLmaps corpus and achieve 78.3% accuracy on this dataset.
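The intuition behind the transfer method can be sketched in a few lines: if translation pairs (e.g. English "where" and German "wo") receive near-identical cross-lingual embeddings, a sequence-to-sequence parser trained on monolingual English and German utterances sees code-switched input as vectors it already knows. The snippet below is a minimal, hypothetical illustration of that idea only, not the authors' implementation; the toy vectors and vocabulary are invented for demonstration, whereas a real system would load pre-trained cross-lingual embeddings.

```python
import numpy as np

# Toy cross-lingual embeddings: words that share a meaning across English
# and German are assigned (near-)identical vectors. The values here are
# illustrative; in practice they would come from pre-trained cross-lingual
# word embeddings.
CROSSLINGUAL_VECS = {
    "where":       np.array([0.1, 0.0, 0.9]),
    "wo":          np.array([0.1, 0.0, 0.9]),   # German for "where"
    "closest":     np.array([0.0, 0.8, 0.2]),
    "nächste":     np.array([0.0, 0.8, 0.2]),   # German for "closest"
    "restaurant":  np.array([0.9, 0.1, 0.0]),
    "restaurants": np.array([0.9, 0.1, 0.1]),   # German plural, same concept
}

def build_embedding_matrix(vocab, dim=3):
    """Initialise a sequence-to-sequence encoder's embedding table from
    cross-lingual vectors; out-of-vocabulary words get small random vectors."""
    rng = np.random.default_rng(0)
    mat = np.zeros((len(vocab), dim))
    for i, word in enumerate(vocab):
        mat[i] = CROSSLINGUAL_VECS.get(word, rng.normal(scale=0.1, size=dim))
    return mat

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

vocab = ["where", "wo", "closest", "nächste", "restaurant", "restaurants"]
emb = build_embedding_matrix(vocab)

# Translation pairs land (nearly) on top of each other in the shared space,
# so a code-switched utterance like "wo is the closest restaurant" maps to
# encoder inputs the model has effectively seen during monolingual training.
print(cos(emb[vocab.index("where")], emb[vocab.index("wo")]))
```

Because the encoder consumes vectors rather than surface forms, nothing in this setup needs code-switched training data, which is consistent with the abstract's observation that the mixed-language test set is handled without any such data.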

Citation (APA)

Duong, L., Afshar, H., Estival, D., Pink, G., Cohen, P., & Johnson, M. (2017). Multilingual semantic parsing and code-switching. In CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings (pp. 379–389). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k17-1038
