Probing linguistic systematicity

Citations: 44 · Mendeley readers: 149

Abstract

Recently, there has been much interest in the question of whether deep natural language understanding (NLU) models exhibit systematicity: generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models often generalize non-systematically. We examined the notion of systematicity from a linguistic perspective, defining a set of probes and a set of metrics to measure systematic behaviour. We also identified ways in which network architectures can generalize non-systematically, and discussed why such forms of generalization may be unsatisfying. As a case study, we performed a series of experiments in the setting of natural language inference (NLI), demonstrating that some NLU systems achieve high overall performance despite being non-systematic.
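To make the idea of a systematicity probe concrete, here is a minimal, hypothetical sketch (not the paper's actual dataset or templates): NLI premise–hypothesis pairs are generated from a single template, varying only one word. Because the substituted word plays an identical semantic role in every instance, a systematic model should assign the same (correct) label to all of them.

```python
def build_probe(premise_template, hypothesis_template, fillers, label):
    """Return (premise, hypothesis, expected_label) triples that differ
    only in the filler word; the expected label is constant by construction."""
    return [
        (premise_template.format(w=w), hypothesis_template.format(w=w), label)
        for w in fillers
    ]

# Hypothetical templates: the hypothesis is entailed for any filler noun.
probe = build_probe(
    premise_template="the {w} moved",
    hypothesis_template="the {w} did something",
    fillers=["dog", "cat", "bird", "horse"],
    label="entailment",
)

# A model behaves systematically on this probe iff its predicted label
# is identical (and correct) across all four generated instances.
for premise, hypothesis, expected in probe:
    print(premise, "|", hypothesis, "->", expected)
```

A metric of the kind the paper describes would then aggregate, over many such probes, how consistently a model's predictions match the construction-guaranteed label.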

Citation (APA)
Goodwin, E., Sinha, K., & O’Donnell, T. J. (2020). Probing linguistic systematicity. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 1958–1969). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.177
