Zero-Shot Cross-Lingual Sequence Tagging as Seq2Seq Generation for Joint Intent Classification and Slot Filling


Abstract

The joint intent classification and slot filling task seeks to detect the intent of an utterance and extract its semantic concepts. In the zero-shot cross-lingual setting, a model is trained on a source language and then transferred to other target languages through multilingual representations without additional training data. While prior studies show that pre-trained multilingual sequence-to-sequence (Seq2Seq) models can facilitate zero-shot transfer, there is little understanding of how to design the output template for the joint prediction tasks. In this paper, we examine three aspects of the output template: (1) label mapping, (2) task dependency, and (3) word order. Experiments on the MASSIVE dataset, which covers 51 languages, show that our output template significantly improves the performance of pre-trained cross-lingual language models.
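To make the task formulation concrete, the sketch below linearizes an utterance's intent label and BIO slot tags into a single target string, as a Seq2Seq model would be trained to generate. The template format (the `intent:` prefix, `label: words` spans, and `;` separators) is a hypothetical illustration, not the paper's actual design.

```python
# Hypothetical illustration: linearizing joint intent + slot annotations
# into one Seq2Seq target string. The exact template wording, ordering,
# and separators here are assumptions, not the paper's template.

def linearize(tokens, intent, slot_tags):
    """Encode the intent and BIO slot tags as a target sequence,
    keeping slot spans in source word order."""
    parts = [f"intent: {intent}"]
    spans, current_label, current_words = [], None, []
    for token, tag in zip(tokens, slot_tags):
        if tag.startswith("B-"):          # a new slot span begins
            if current_label:
                spans.append((current_label, current_words))
            current_label, current_words = tag[2:], [token]
        elif tag.startswith("I-") and current_label:
            current_words.append(token)   # continue the open span
        else:                             # "O" closes any open span
            if current_label:
                spans.append((current_label, current_words))
            current_label, current_words = None, []
    if current_label:
        spans.append((current_label, current_words))
    for label, words in spans:
        parts.append(f"{label}: {' '.join(words)}")
    return " ; ".join(parts)

tokens = ["wake", "me", "up", "at", "seven", "am"]
tags = ["O", "O", "O", "O", "B-time", "I-time"]
print(linearize(tokens, "alarm_set", tags))
# -> intent: alarm_set ; time: seven am
```

At inference time the generated string would be parsed back into an intent label and slot spans; emitting the intent first lets slot predictions condition on it, one instance of the task-dependency question the paper studies.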

Citation (APA)

Wang, F., Huang, K. H., Kumar, A., Galstyan, A., Steeg, G. V., & Chang, K. W. (2022). Zero-Shot Cross-Lingual Sequence Tagging as Seq2Seq Generation for Joint Intent Classification and Slot Filling. In MMNLU-22 2022 - Massively Multilingual Natural Language Understanding 2022, Proceedings (pp. 53–61). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.mmnlu-1.6
