What do language models (LMs) do with language? They can produce sequences of (mostly) coherent strings closely resembling English. But do those sentences mean something, or are LMs simply babbling in a convincing simulacrum of language use? We address one aspect of this broad question: whether LMs’ words can refer, that is, achieve “word-to-world” connections. There is prima facie reason to think they do not, since LMs do not interact with the world in the way that ordinary language users do. Drawing on the externalist tradition in philosophy of language, we argue that those appearances are misleading: Even if the inputs to LMs are simply strings of text, they are strings of text with natural histories, and that may suffice for LMs’ words to refer.
Mandelkern, M., & Linzen, T. (2024). Do Language Models’ Words Refer? Computational Linguistics, 1–10. https://doi.org/10.1162/coli_a_00522