Measuring intelligibility of Japanese learner English

Abstract

Although pursuing accuracy is important in language learning and teaching, knowing which types of errors interfere with communication and which do not would be more beneficial for efficiently enhancing communicative competence. Language learners could be greatly helped by a system that detects errors in learner language and automatically measures their effect on intelligibility. In this paper, we report our attempt, based on machine learning, to measure the intelligibility of learner language. In the learning process, the system uses as key features the BLEU and NIST scores between the learners' original sentences and their back-translations (corrected sentences), the log-probability of the parse, the sentence length, and the error types (assigned manually or automatically). We found that the system can distinguish intelligible sentences from the others (unnatural and unintelligible) rather successfully, but still has considerable difficulty distinguishing among the three levels of intelligibility. © Springer-Verlag Berlin Heidelberg 2006.
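
As an illustration of the feature set described above, the following is a minimal sketch, assuming an NLTK/scikit-learn setup, of how the overlap scores between a learner sentence and its corrected form might be computed and passed to a classifier over three intelligibility levels. The helper name overlap_features, the toy sentence pairs, and the linear SVM are assumptions for illustration only; this is not the authors' implementation, and the parse log-probability and error-type features are omitted here.

```python
# Sketch: sentence-level BLEU/NIST overlap features plus sentence length,
# used to train a simple three-way intelligibility classifier.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.nist_score import sentence_nist
from sklearn.svm import SVC

def overlap_features(original: str, corrected: str) -> list:
    """Compare a learner sentence with its corrected (back-translated) form."""
    hyp = original.lower().split()
    ref = corrected.lower().split()
    bleu = sentence_bleu([ref], hyp, smoothing_function=SmoothingFunction().method1)
    try:
        nist = sentence_nist([ref], hyp, n=min(5, len(hyp)))
    except ZeroDivisionError:  # very short or completely disjoint sentences
        nist = 0.0
    return [bleu, nist, float(len(hyp))]

# Hypothetical training triples: (learner sentence, corrected sentence, level),
# where level 0 = intelligible, 1 = unnatural, 2 = unintelligible.
pairs = [
    ("I have a pen", "I have a pen", 0),
    ("He go to school yesterday", "He went to school yesterday", 1),
    ("Yesterday school he going is", "He went to school yesterday", 2),
]
X = [overlap_features(orig, corr) for orig, corr, _ in pairs]
y = [level for _, _, level in pairs]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([overlap_features("She like music very",
                                    "She likes music very much")]))
```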

Citation (APA)

Izumi, E., Uchimoto, K., & Isahara, H. (2006). Measuring intelligibility of Japanese learner English. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4139 LNAI, pp. 476–487). Springer Verlag. https://doi.org/10.1007/11816508_48
