Enhancing Educational Dialogues: A Reinforcement Learning Approach for Generating AI Teacher Responses

3 citations · 25 Mendeley readers

Abstract

Reinforcement Learning remains an underutilized method for training and fine-tuning Language Models (LMs) despite recent successes. This paper presents a simple approach to fine-tuning a language model with Reinforcement Learning that achieves competitive performance on the BEA 2023 Shared Task, whose goal is to automatically generate teacher responses in educational dialogues. We utilize the novel NLPO algorithm, which masks out tokens during generation to steer the model towards outputs that maximize a reward function. We report results for two models: the 220-million-parameter t5-base model from the HuggingFace repository, which was submitted to the leaderboard and, despite its comparatively small size, achieved good performance on both the dev and test sets, and GPT-2 with 124 million parameters. The results show that although only one of the evaluation metrics is used as the reward function, our model also scores highly on the remaining metrics.
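The token masking at the heart of NLPO restricts generation to a "nucleus" of high-probability tokens before the policy samples. The snippet below is a minimal NumPy sketch of such a top-p mask over a toy vocabulary; it is an illustration of the general idea, not the authors' implementation, and the example logits are invented.

```python
import numpy as np

def top_p_mask(logits, p=0.9):
    """Return a boolean mask keeping the smallest set of tokens whose
    cumulative probability reaches p (the top-p 'nucleus')."""
    # softmax with max-subtraction for numerical stability
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # sort tokens by probability, descending
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    # keep tokens up to and including the first one that pushes cum past p
    cutoff = np.searchsorted(cum, p) + 1
    mask = np.zeros_like(logits, dtype=bool)
    mask[order[:cutoff]] = True
    return mask

# toy logits for a 4-token vocabulary (hypothetical values)
logits = np.array([3.0, 1.0, 0.5, -2.0])
mask = top_p_mask(logits, p=0.9)
# masked-out tokens get -inf logits, so they can never be sampled
masked_logits = np.where(mask, logits, -np.inf)
```

During NLPO training, sampling is then restricted to the unmasked tokens, which keeps the fine-tuned policy close to plausible generations while the reward is maximized.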

Citation (APA)

Huber, T., Niklaus, C., & Handschuh, S. (2023). Enhancing Educational Dialogues: A Reinforcement Learning Approach for Generating AI Teacher Responses. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 736–744). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bea-1.59
