Abstract
Although neural language models qualitatively capture many human linguistic behaviors, recent work has demonstrated that they underestimate the true processing costs of ungrammatical structures. We extend these more fine-grained comparisons between humans and models by investigating the interaction between Principle B and coreference processing. While humans use Principle B to block antecedents in certain structural positions from affecting their incremental processing, we find that GPT-based language models are influenced by antecedents in these ungrammatical positions. We conclude by relating the mismatch between neural models and humans to properties of the training data, and suggest that certain aspects of human processing behavior do not directly follow from linguistic data.
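The abstract does not spell out the measurement paradigm, but a standard approach in this literature, and a natural reading of "incremental processing", is to compare per-token surprisal across minimal pairs. The sketch below illustrates that paradigm under that assumption: the stimuli are hypothetical (not the paper's materials), and GPT-2 merely stands in for the GPT-based models studied.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hypothetical minimal pair probing Principle B: in the second sentence the
# local subject "James" is an ungrammatical antecedent for "him", so a model
# that respects Principle B should not treat "James" as licensing the pronoun.
sentences = [
    "James knew that the doctor admired him.",  # nonlocal antecedent available
    "James admired him.",                       # Principle B blocks "James"
]

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence):
    """Return (token, surprisal-in-bits) pairs for each token after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position's logits predict the *next* token, so align positions
    # 0..n-2 with tokens 1..n-1 and read off the assigned log-probabilities.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    nats = -log_probs[torch.arange(len(next_ids)), next_ids]
    bits = nats / torch.log(torch.tensor(2.0))  # convert nats to bits
    return list(zip(tokenizer.convert_ids_to_tokens(next_ids), bits.tolist()))

for s in sentences:
    print(s)
    for tok, surp in token_surprisals(s):
        print(f"  {tok!r}: {surp:.2f} bits")

On this view, the question is whether surprisal at the pronoun (and at subsequent disambiguating material) is modulated by the Principle-B-violating antecedent; for humans, the blocked position should not affect incremental reading measures.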
Citation
Davis, F. (2022). Incremental Processing of Principle B: Mismatches between Neural Models and Humans. In CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 144–156). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.conll-1.11