A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT

Abstract

In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state-of-the-art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question of whether a given being fulfills these criteria. On the conceptual side, the article builds on Glock's concept of intelligence, Taylor's conception of intrinsic rightness, and Wittgenstein's rule-following considerations. On the empirical side, it is argued that current transformer-based NNLP models, such as BERT and GPT-3, come close to fulfilling these criteria.

Cite (APA)

Gubelmann, R. (2023). A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT. Grazer Philosophische Studien, 99(4), 485–523. https://doi.org/10.1163/18756735-00000182
