A survey of methods for revealing and overcoming weaknesses of data-driven Natural Language Understanding

7 citations · 24 Mendeley readers

Abstract

Recent years have seen a growing number of publications that analyse Natural Language Understanding (NLU) datasets for superficial cues, examining whether such cues undermine the complexity of the tasks underlying those datasets and how they affect models that are optimised and evaluated on this data. This structured survey provides an overview of this evolving research area by categorising reported weaknesses in models and datasets, together with the methods proposed to reveal and alleviate those weaknesses, for the English language. We summarise and discuss the findings and conclude with a set of recommendations for possible future research directions. We hope the survey will be a useful resource for researchers who propose new datasets, helping them assess the suitability and quality of their data for evaluating various phenomena of interest, as well as for those who propose novel NLU approaches, helping them further understand the implications of their improvements with respect to their models' acquired capabilities.

Citation (APA)

Schlegel, V., Nenadic, G., & Batista-Navarro, R. (2023, January 22). A survey of methods for revealing and overcoming weaknesses of data-driven Natural Language Understanding. Natural Language Engineering. Cambridge University Press. https://doi.org/10.1017/S1351324922000171
