Towards Scalable Vocabulary Acquisition Assessment with BERT

Abstract

In this investigation we propose new machine learning methods for automated scoring models that predict vocabulary acquisition in science and social studies for second-grade English language learners, based on free-form spoken responses. We evaluate performance on an existing dataset using transfer learning from a large pre-trained language model, and report the influence of various objective function designs and of an input-convex network design. In particular, we find that combining objective functions with complementary properties, such as sensitivity to the distance among scores, substantially improves model reliability relative to human raters. Our models advance the state of the art for assessing word definition and sentence usage tasks in science and social studies, achieving strong quadratic weighted kappa scores against human raters. However, human-human agreement still surpasses model-human agreement, leaving room for future improvement. Even so, our work highlights the scalability of automated vocabulary assessment for free-form spoken language tasks in the early grades.
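The abstract reports reliability as quadratic weighted kappa (QWK), the standard agreement statistic for ordinal scores. As a point of reference, here is a minimal, self-contained sketch of QWK for two integer score sequences; the function name and score-range parameters are illustrative, not taken from the paper:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Quadratic weighted kappa between two integer score sequences.

    Disagreements are penalized by the squared distance between scores,
    normalized so kappa is 1 for perfect agreement and 0 for chance-level
    agreement given the raters' marginal score distributions.
    """
    n = max_score - min_score + 1
    total = len(rater_a)

    # Observed confusion matrix over the n possible score levels.
    obs = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        obs[a - min_score][b - min_score] += 1

    # Marginal histograms give the chance-expected matrix.
    hist_a = Counter(a - min_score for a in rater_a)
    hist_b = Counter(b - min_score for b in rater_b)

    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted chance-expected disagreement
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2  # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / total
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den
```

Identical ratings yield a QWK of 1.0, while ratings that agree only at chance level (given the marginals) yield 0.0, which is why the abstract compares model-human QWK against the human-human ceiling.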

Citation (APA)

Wu, Z., Larson, E., Sano, M., Baker, D., Gage, N., & Kamata, A. (2023). Towards Scalable Vocabulary Acquisition Assessment with BERT. In L@S 2023 - Proceedings of the 10th ACM Conference on Learning @ Scale (pp. 272–276). Association for Computing Machinery, Inc. https://doi.org/10.1145/3573051.3596170
