Behavior analysis of NLI models: Uncovering the influence of three factors on robustness

Abstract

Natural Language Inference is a challenging task that has received substantial attention, and state-of-the-art models now achieve impressive test set performance in the form of accuracy scores. Here, we go beyond this single evaluation metric to examine robustness to semantically valid alterations to the input data. We identify three factors (insensitivity, polarity, and unseen pairs) and compare their impact on three SNLI models under a variety of conditions. Our results demonstrate a number of strengths and weaknesses in the models' ability to generalise to new in-domain instances. In particular, while strong performance is possible on unseen hypernyms, unseen antonyms are more challenging for all the models. More generally, the models suffer from an insensitivity to certain small but semantically significant alterations, and are also often influenced by simple statistical correlations between words and training labels. Overall, we show that evaluations of NLI models can benefit from studying the influence of factors intrinsic to the models or found in the dataset used.
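As an illustration of this style of analysis, the sketch below (not the authors' code) shows one way to build the kind of semantically valid alterations the abstract describes: substituting a hypothesis word with a WordNet hypernym or antonym and checking whether the model's label changes. The `predict` function is a hypothetical stand-in for any trained SNLI premise/hypothesis classifier.

```python
# Minimal sketch of lexical-substitution probing for NLI robustness,
# in the spirit of the paper's unseen-hypernym/antonym analysis.
# Assumes NLTK with the WordNet corpus; `predict` is hypothetical.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def hypernym_swap(word):
    """Return one WordNet hypernym lemma for `word`, or None."""
    for synset in wn.synsets(word, pos=wn.NOUN):
        for hyper in synset.hypernyms():
            return hyper.lemmas()[0].name().replace("_", " ")
    return None

def antonym_swap(word):
    """Return one WordNet antonym lemma for `word`, or None."""
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name().replace("_", " ")
    return None

def probe(predict, premise, hypothesis, target_word):
    """Compare the model's label on the original hypothesis with its
    labels on hypernym- and antonym-substituted variants."""
    results = {"original": (hypothesis, predict(premise, hypothesis))}
    for name, swap in (("hypernym", hypernym_swap), ("antonym", antonym_swap)):
        replacement = swap(target_word)
        if replacement is not None:
            altered = hypothesis.replace(target_word, replacement)
            results[name] = (altered, predict(premise, altered))
    return results

if __name__ == "__main__":
    # Dummy classifier standing in for a trained SNLI model.
    dummy = lambda premise, hypothesis: "entailment"
    print(probe(dummy, "A dog runs in the park.", "A dog is outside.", "dog"))
```

A real analysis would aggregate label flips over many such substituted pairs; per the paper's findings, one would expect more label instability on antonym swaps than on hypernym swaps.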

Cite

CITATION STYLE

APA

Sanchez Carmona, V. I., Mitchell, J., & Riedel, S. (2018). Behavior analysis of NLI models: Uncovering the influence of three factors on robustness. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 1, pp. 1975–1985). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-1179
