SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics

Abstract

Recently, deep neural networks (DNNs) have achieved great success in semantically challenging NLP tasks, yet it remains unclear whether DNN models can capture compositional meanings, the aspects of meaning long studied in formal semantics. To investigate this issue, we propose a Systematic Generalization testbed based on Natural language Semantics (SyGNS), which challenges models to map natural language sentences to multiple forms of scoped meaning representations designed to account for various semantic phenomena. Using SyGNS, we test whether neural networks can systematically parse sentences involving novel combinations of logical expressions such as quantifiers and negation. Experiments show that Transformer and GRU models can generalize to unseen combinations of quantifiers, negations, and modifiers that are similar in form to the training instances, but not to others. We also find that generalization to unseen combinations improves when the meaning representations take a simpler form. The data and code for SyGNS are publicly available at https://github.com/verypluming/SyGNS.
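
To make the task concrete, here is a minimal Python sketch of the kind of sentence-to-formula mapping and held-out-combination split the abstract describes. The example sentences, formula templates, and helper names are illustrative assumptions for exposition, not the actual SyGNS data format; see the repository linked above for the real data.

```python
from itertools import product

# Toy sentence -> first-order-logic templates for one noun/verb pair.
# These templates are hypothetical, chosen only to illustrate scoped
# meaning representations with quantifiers and negation.
QUANTS = {
    "one": "exists x.(dog(x) & {p})",
    "every": "all x.(dog(x) -> {p})",
}
NEGS = {"": "run(x)", "not": "-run(x)"}

def make_pair(quant: str, neg: str) -> tuple[str, str]:
    """Build one (sentence, FOL formula) pair."""
    verb = "does not run" if neg else "runs"
    sentence = f"{quant.capitalize()} dog {verb} ."
    formula = QUANTS[quant].format(p=NEGS[neg])
    return sentence, formula

# Enumerate all quantifier x negation combinations.
data = [(q, n, *make_pair(q, n)) for q, n in product(QUANTS, NEGS)]

# Systematic-generalization split: hold out one combination
# ("every" + negation) that never occurs in training, so a model
# must compose pieces it has only seen separately.
train = [d for d in data if not (d[0] == "every" and d[1] == "not")]
test = [d for d in data if d[0] == "every" and d[1] == "not"]

for _, _, sent, fol in test:
    print(sent, "=>", fol)  # Every dog does not run . => all x.(dog(x) -> -run(x))
```

In the toy split above, the model sees "every" and "not" individually during training but is tested on their combination, which is the sense of systematic generalization the testbed probes.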

Citation (APA)

Yanaka, H., Mineshima, K., & Inui, K. (2021). SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 103–119). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.10
