Evaluating Transformer Models and Human Behaviors on Chinese Character Naming

Abstract

Neural network models have been proposed to explain the grapheme-phoneme mapping process in humans for many alphabetic languages. These models not only successfully learn the correspondence between letter strings and their pronunciations, but also capture human behavior in nonce word naming tasks. How would neural models perform on an unknown character task in a non-alphabetic language such as Chinese? How well would they capture human behavior? In this study, we first collect human speakers' answers on an unknown Chinese character naming task and then evaluate a set of transformer models by comparing their performance with human behavior on the same task. We found that the models and humans behaved very similarly: they showed similar accuracy distributions for each character and substantial overlap in their answers. In addition, the models' answers are highly correlated with humans' answers. These results suggest that transformer models can capture humans' character naming behavior well.
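The abstract compares models and humans along three measures: per-character accuracy, overlap between the sets of answers given, and the correlation of per-character accuracies. A minimal sketch of such a comparison is below; the characters, pinyin readings, and answer lists are invented for illustration and are not data from the paper.

```python
# Hypothetical sketch of comparing model and human answers on an unknown
# character naming task. All data below is illustrative, not from the study.

def accuracy(answers, gold):
    """Per-character accuracy: fraction of answers matching the gold reading."""
    return {c: sum(a == gold[c] for a in ans) / len(ans)
            for c, ans in answers.items()}

def overlap(human, model):
    """Jaccard overlap between the sets of answer types for one character."""
    h, m = set(human), set(model)
    return len(h & m) / len(h | m)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: three unknown characters, each named by three humans / model runs.
gold = {"char_1": "pao4", "char_2": "qing1", "char_3": "xie2"}
human_answers = {
    "char_1": ["pao4", "pao4", "bao1"],
    "char_2": ["qing1", "qing1", "qing1"],
    "char_3": ["xie2", "jia1", "jia1"],
}
model_answers = {
    "char_1": ["pao4", "bao1", "pao4"],
    "char_2": ["qing1", "qing1", "jing1"],
    "char_3": ["jia1", "xie2", "jia1"],
}

h_acc = accuracy(human_answers, gold)
m_acc = accuracy(model_answers, gold)
chars = sorted(gold)
r = pearson([h_acc[c] for c in chars], [m_acc[c] for c in chars])
```

With the toy data, `h_acc` and `m_acc` hold one accuracy per character, `overlap` can be applied per character, and `r` summarizes how well the model's accuracy profile tracks the humans' across characters, mirroring the correlation analysis the abstract describes.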

Citation (APA)

Ma, X., & Gao, L. (2023). Evaluating Transformer Models and Human Behaviors on Chinese Character Naming. Transactions of the Association for Computational Linguistics, 11, 755–770. https://doi.org/10.1162/tacl_a_00573
