Automated scoring of clinical expressive language evaluation tasks


Abstract

Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness, yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data, increasing the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic evaluation of sentence appropriateness. We further compare custom word embeddings with pre-trained contextualized embeddings as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.
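The core data-augmentation idea in the abstract can be sketched in a few lines: each appropriate elicited response is paired with a synthetically degraded variant, and the pairs become labeled training data for a binary appropriateness scorer. The sketch below is a minimal, hypothetical illustration; the word-shuffle perturbation is only a stdlib stand-in for the paper's neural machine translation round-trip, and all function names are invented for this example.

```python
import random

def make_incorrect_variant(sentence, rng):
    # Stand-in for the paper's NMT-based generation of incorrect
    # sentences: here we simply shuffle the words to simulate a
    # syntactically degraded response. (Hypothetical placeholder.)
    words = sentence.split()
    shuffled = words[:]
    while shuffled == words and len(words) > 1:
        rng.shuffle(shuffled)
    return " ".join(shuffled)

def build_synthetic_pairs(correct_responses, seed=0):
    """Label each elicited response 1 (appropriate) and pair it with
    a synthetically degraded variant labeled 0 (inappropriate)."""
    rng = random.Random(seed)
    data = []
    for sent in correct_responses:
        data.append((sent, 1))
        data.append((make_incorrect_variant(sent, rng), 0))
    return data

responses = [
    "The boy carefully carried the heavy box",
    "She quickly finished her homework before dinner",
]
pairs = build_synthetic_pairs(responses)
```

The resulting `pairs` list doubles the training set with contrastive negatives; in the paper's setting these features would then be embedded (custom or pre-trained contextualized embeddings) before being fed to the scoring model.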

Citation (APA):

Wang, Y., Prud'hommeaux, E., Asgari, M., & Dolata, J. (2020). Automated scoring of clinical expressive language evaluation tasks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 177–185). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.bea-1.18
