Iterative feature mining for constraint-based data collection to increase data diversity and model robustness

17 citations · 84 Mendeley readers

Abstract

Diverse data is crucial for training robust models, but crowdsourced text often lacks diversity as workers tend to write simple variations from prompts. We propose a general approach for guiding workers to write more diverse text by iteratively constraining their writing. We show how prior workflows are special cases of our approach, and present a way to apply the approach to dialog tasks such as intent classification and slot-filling. Using our method, we create more challenging versions of test sets from prior dialog datasets and find dramatic performance drops for standard models. Finally, we show that our approach is complementary to recent work on improving data diversity, and training on data collected with our approach leads to more robust models.
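
The abstract describes the constraint loop only at a high level. As a rough illustration of the general idea (not the paper's actual algorithm), the Python sketch below mines the bigrams most frequently reused across one round of collected paraphrases, bans them, and rejects later submissions that repeat them; the function names and toy data are hypothetical.

```python
from collections import Counter

def bigrams(text):
    """All bigrams (as strings) of a lowercased, whitespace-tokenised text."""
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def mine_banned_phrases(collected_texts, banned, top_k=4):
    """Ban the bigrams shared by the most collected paraphrases,
    skipping anything already banned in an earlier round."""
    doc_freq = Counter()
    for text in collected_texts:
        for ng in set(bigrams(text)):
            if ng not in banned:
                doc_freq[ng] += 1
    return banned | {ng for ng, _ in doc_freq.most_common(top_k)}

def satisfies_constraints(text, banned):
    """Accept a submission only if it avoids every banned phrase."""
    return not any(ng in banned for ng in bigrams(text))

# Toy two-round example: phrasing shared by round-1 paraphrases is banned,
# so round-2 workers must express the same intent in a different way.
round1 = ["book a flight to boston", "please book a flight to boston tonight"]
banned = mine_banned_phrases(round1, banned=set(), top_k=4)
print(satisfies_constraints("book a flight to boston for me", banned))  # False: reuses banned phrasing
print(satisfies_constraints("i need to reach boston tonight", banned))  # True: avoids all banned bigrams
```

In this sketch the accepted round-2 submissions would then be fed back into mine_banned_phrases, growing the constraint set with each iteration.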

Cite

APA

Larson, S., Zheng, A., Mahendran, A., Tekriwal, R., Cheung, A., Guldan, E., … Kummerfeld, J. K. (2020). Iterative feature mining for constraint-based data collection to increase data diversity and model robustness. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) (pp. 8097–8106). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.650
