Causal interventions expose implicit situation models for commonsense language understanding

Citations: 5
Readers: 16 (Mendeley users who have this article in their library)

Abstract

Accounts of human language processing have long appealed to implicit “situation models” that enrich comprehension with relevant but unstated world knowledge. Here, we apply causal intervention techniques to recent transformer models to analyze performance on the Winograd Schema Challenge (WSC), where a single context cue shifts interpretation of an ambiguous pronoun. We identify a relatively small circuit of attention heads responsible for propagating information from the context word; this information guides which of the candidate noun phrases the pronoun ultimately attends to. We then compare how this circuit behaves in a closely matched “syntactic” control where the situation model is not strictly necessary. These analyses suggest distinct pathways through which implicit situation models are constructed to guide pronoun resolution.
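The method the abstract summarizes, interchange interventions ("activation patching") on attention-head activations across a Winograd minimal pair, can be illustrated with a short sketch. This is a simplified illustration, not the paper's code: GPT-2 stands in for the models studied, and the layer/head choice, the example sentences, the referent probe prompt, and the patch-at-every-position simplification are all assumptions for the sake of the demo.

```python
# Minimal interchange-intervention sketch: cache one attention head's output
# on an "alt" sentence and splice it into a run on the "base" sentence, then
# check whether the model's pronoun-referent preference shifts.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")

LAYER, HEAD = 8, 6  # illustrative choice, not taken from the paper
HEAD_DIM = model.config.n_embd // model.config.n_head

# Winograd-style minimal pair: only the context word ("big"/"small") differs,
# flipping the natural referent of "it". The probe elicits the referent.
probe = " The 'it' refers to the"
base = "The trophy doesn't fit in the suitcase because it is too big." + probe
alt  = "The trophy doesn't fit in the suitcase because it is too small." + probe
base_ids = tok(base, return_tensors="pt").input_ids
alt_ids  = tok(alt,  return_tensors="pt").input_ids
assert base_ids.shape == alt_ids.shape  # "big"/"small" are each one BPE token

attn = model.transformer.h[LAYER].attn

# 1) Cache per-head attention outputs on the alt run. The input to the
#    attention output projection (c_proj) concatenates heads along the last
#    dim, so head h occupies the slice [h*HEAD_DIM : (h+1)*HEAD_DIM].
cache = {}
def save_hook(module, args):
    cache["z"] = args[0].detach().clone()
h = attn.c_proj.register_forward_pre_hook(save_hook)
with torch.no_grad():
    model(alt_ids)
h.remove()

# 2) Re-run the base sentence, splicing in the cached activation for the
#    chosen head at every position (a coarse interchange intervention).
sl = slice(HEAD * HEAD_DIM, (HEAD + 1) * HEAD_DIM)
def patch_hook(module, args):
    z = args[0].clone()
    z[..., sl] = cache["z"][..., sl]
    return (z,)
h = attn.c_proj.register_forward_pre_hook(patch_hook)
with torch.no_grad():
    patched_logits = model(base_ids).logits[0, -1]
h.remove()
with torch.no_grad():
    clean_logits = model(base_ids).logits[0, -1]

# 3) Compare referent preference (first BPE token of each candidate noun).
trophy, suitcase = tok(" trophy").input_ids[0], tok(" suitcase").input_ids[0]
def margin(logits):
    return (logits[trophy] - logits[suitcase]).item()
print(f"clean   trophy-vs-suitcase margin: {margin(clean_logits):+.3f}")
print(f"patched trophy-vs-suitcase margin: {margin(patched_logits):+.3f}")
```

If the patched margin moves toward the alt reading while the clean margin does not, the chosen head carries context information relevant to pronoun resolution; sweeping this procedure over all (layer, head) pairs is the basic recipe for localizing the kind of circuit the abstract describes.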

Citation (APA)

Yamakoshi, T., McClelland, J. L., Goldberg, A. E., & Hawkins, R. D. (2023). Causal interventions expose implicit situation models for commonsense language understanding. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 13265–13293). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.839

Readers' Seniority

Professor / Associate Prof.: 4 (36%)
PhD / Post grad / Masters / Doc: 3 (27%)
Lecturer / Post doc: 2 (18%)
Researcher: 2 (18%)

Readers' Discipline

Computer Science: 7 (58%)
Psychology: 2 (17%)
Linguistics: 2 (17%)
Philosophy: 1 (8%)
