Preliminary experiments on crowdsourced evaluation of feedback granularity

Abstract

Providing writing feedback to English language learners (ELLs) helps them learn to write better, but it is not clear what type or how much information should be provided. There have been few experiments directly comparing the effects of different types of automatically generated feedback on ELL writing. Such studies are difficult to conduct because they require participation and commitment from actual students and their teachers, over extended periods of time, and in real classroom settings. In order to avoid such difficulties, we instead conduct a crowdsourced study on Amazon Mechanical Turk to answer questions concerning the effects of type and amount of writing feedback. We find that our experiment has several serious limitations but still yields some interesting results.

Citation (APA)
Madnani, N., Chodorow, M., Cahill, A., Lopez, M., Futagi, Y., & Attali, Y. (2015). Preliminary experiments on crowdsourced evaluation of feedback granularity. In 10th Workshop on Innovative Use of NLP for Building Educational Applications, BEA 2015 at the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2015 (pp. 162–171). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/w15-0619
