Some claim language models understand us. Others won't hear it. To clarify, I investigate three views of human language understanding: as-mapping, as-reliability and as-representation (§2). I argue that while behavioral reliability is necessary for understanding, internal representations are sufficient; they climb the right hill (§3). I review state-of-the-art language and multi-modal models: they are pragmatically challenged by underspecification of form (§4). I question the Scaling Paradigm: limits on resources may prohibit scaled-up models from approaching understanding (§5). Last, I describe how as-representation advances a science of understanding. We need work which probes model internals, adds more of human language, and measures what models can learn (§6).