Diagnosing Transformers in Task-Oriented Semantic Parsing

Abstract

Modern task-oriented semantic parsing approaches typically use seq2seq transformers to map textual utterances to semantic frames composed of intents and slots. While these models are empirically strong, their specific strengths and weaknesses have largely remained unexplored. In this work, we study BART (Lewis et al., 2020) and XLM-R (Conneau et al., 2020), two state-of-the-art parsers, across both monolingual and multilingual settings. Our experiments yield several key results: transformer-based parsers struggle not only with disambiguating intents and slots, but, surprisingly, also with producing syntactically valid frames. Though pre-training imbues transformers with syntactic inductive biases, we find that the ambiguity of copying utterance spans into frames often leads to tree invalidity, indicating that span extraction is a major bottleneck for current parsers. As a silver lining, however, we show that transformer-based parsers give sufficient indicators of whether a frame is likely to be correct or incorrect, making them easier to deploy in production settings.
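The two failure modes the abstract names (tree invalidity and bad span extraction) can be made concrete with a small check. Below is a minimal sketch, not from the paper, assuming the TOP-style bracketed frame representation common in this line of work (e.g. [IN:GET_WEATHER [SL:LOCATION san francisco ] ]); the function name is_valid_frame and the whitespace tokenization are illustrative assumptions.

# Minimal sketch (not the authors' code): check the two failure modes the
# paper highlights -- syntactic tree invalidity and bad span extraction --
# for a TOP-style bracketed frame.
def is_valid_frame(frame: str, utterance: str) -> bool:
    """True if `frame` is a balanced bracket tree in which every opening
    bracket carries an IN:/SL: label and every leaf word is copied from
    `utterance`."""
    tokens = frame.replace("[", " [ ").replace("]", " ] ").split()
    depth = 0
    leaf_words = []
    for i, tok in enumerate(tokens):
        if tok == "[":
            depth += 1
            # each opening bracket must introduce an intent or slot label
            if i + 1 >= len(tokens) or not tokens[i + 1].startswith(("IN:", "SL:")):
                return False
        elif tok == "]":
            depth -= 1
            if depth < 0:  # a closing bracket with no matching opener
                return False
        elif not tok.startswith(("IN:", "SL:")):
            leaf_words.append(tok)  # words the parser copied from the input
    if depth != 0:  # unclosed brackets
        return False
    # span-extraction check: every copied word must occur in the utterance
    utterance_words = set(utterance.lower().split())
    return all(w.lower() in utterance_words for w in leaf_words)

# Example: the first frame is well formed; the second drops a bracket.
print(is_valid_frame("[IN:GET_WEATHER [SL:LOCATION san francisco ] ]",
                     "what is the weather in san francisco"))  # True
print(is_valid_frame("[IN:GET_WEATHER [SL:LOCATION san francisco ]",
                     "what is the weather in san francisco"))  # False

As for the deployment point in the last sentence, one common indicator (an assumption here, not a detail taken from the paper) is the mean token log-probability of the decoded frame: frames scoring below a tuned threshold can be routed to a fallback rather than executed.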

Citation (APA)

Desai, S., & Aly, A. (2021). Diagnosing Transformers in Task-Oriented Semantic Parsing. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 57–62). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.5
