Simple data augmentation with the MASK token improves domain adaptation for dialog act tagging

3 citations · 87 Mendeley readers

Abstract

The concept of Dialogue Act (DA) is universal across task-oriented dialogue domains: the act of “request” carries the same speaker intention whether it is for restaurant reservation or flight booking. However, DA taggers trained on one domain do not generalize well to other domains, which leaves us with the expensive need for large amounts of annotated data in each target domain. In this work, we investigate how to better adapt DA taggers to desired target domains with only unlabeled data. We propose MASKAUGMENT, a controllable mechanism that augments text input by leveraging the pre-trained MASK token from the BERT model. Inspired by consistency regularization, we use MASKAUGMENT to introduce an unsupervised teacher-student learning scheme for the domain adaptation of DA taggers. Our extensive experiments on the Simulated Dialogue (GSim) and Schema-Guided Dialogue (SGD) datasets show that MASKAUGMENT is useful in improving cross-domain generalization for DA tagging.
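The core idea of MASK-token augmentation can be illustrated with a minimal sketch: randomly replace input tokens with BERT's reserved [MASK] token before feeding the utterance to the tagger. The function name, masking probability, and token-level (rather than subword-level) masking below are illustrative assumptions; the paper's actual MASKAUGMENT mechanism is controllable and operates on BERT inputs, and may differ in detail.

```python
import random

def mask_augment(tokens, mask_prob=0.15, mask_token="[MASK]", seed=None):
    """Hypothetical sketch of MASK-token augmentation.

    Each token is independently replaced by `mask_token` with
    probability `mask_prob`. The [MASK] token is a natural choice for
    text corruption because BERT has a pre-trained embedding for it,
    so masked inputs stay in-distribution for the encoder.
    """
    rng = random.Random(seed)
    return [mask_token if rng.random() < mask_prob else t for t in tokens]

# Example: augment a task-oriented dialogue utterance.
utterance = "book a table for two at seven".split()
augmented = mask_augment(utterance, mask_prob=0.3, seed=0)
# `augmented` has the same length as `utterance`, with some tokens
# replaced by [MASK].
```

In a teacher-student consistency scheme such as the one described in the abstract, the teacher would tag the clean utterance while the student is trained to produce the same tags on the augmented (masked) version of unlabeled target-domain data.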

Citation (APA)

Yavuz, S., Hashimoto, K., Liu, W., Keskar, N. S., Socher, R., & Xiong, C. (2020). Simple data augmentation with the MASK token improves domain adaptation for dialog act tagging. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 5083–5089). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.412
