Large language model augmented exercise retrieval for personalized language learning


Abstract

We study the problem of zero-shot exercise retrieval in the context of online language learning, to give learners the ability to explicitly request personalized exercises via natural language. Using real-world data collected from language learners, we observe that vector similarity approaches poorly capture the relationship between exercise content and the language that learners use to express what they want to learn. This semantic gap between queries and content dramatically reduces the effectiveness of general-purpose retrieval models pretrained on large-scale information retrieval datasets such as MS MARCO [2]. We leverage the generative capabilities of large language models to bridge the gap by synthesizing hypothetical exercises based on the learner's input, which are then used to search for relevant exercises. Our approach, which we call mHyER, overcomes three challenges: (1) lack of relevance labels for training, (2) unrestricted learner input content, and (3) low semantic similarity between input and retrieval candidates. mHyER outperforms several strong baselines on two novel benchmarks created from crowdsourced data and publicly available data.
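
The mechanism the abstract describes follows a hypothetical-document retrieval pattern: rather than embedding the learner's free-form request directly, an LLM first writes exercises the learner might plausibly want, and those synthetic exercises, which live in the same semantic space as the real corpus, are embedded and matched against it. Below is a minimal sketch of that pattern, assuming the sentence-transformers library and the OpenAI chat API; the model names, prompt wording, response parsing, and mean-pooling of the hypothetical exercises are illustrative assumptions, not the paper's actual mHyER implementation.

```python
# Sketch: LLM-synthesized hypothetical exercises bridge the query-content gap.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

client = OpenAI()
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def generate_hypothetical_exercises(query: str, n: int = 3) -> list[str]:
    """Ask an LLM to write n plausible exercises matching the learner's
    request. Prompt and model are illustrative, not from the paper."""
    prompt = (f"A language learner asks: '{query}'. Write {n} short "
              f"practice exercises they might want, separated by blank lines.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any instruction-tuned LLM works
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parse heuristic: split the completion on blank lines.
    return [e.strip() for e in resp.choices[0].message.content.split("\n\n") if e.strip()]

def retrieve(query: str, corpus: list[str], k: int = 5) -> list[str]:
    # Embed the synthetic exercises instead of the raw query, then pool.
    hypo = generate_hypothetical_exercises(query)
    q_vec = encoder.encode(hypo, normalize_embeddings=True).mean(axis=0)
    c_vecs = encoder.encode(corpus, normalize_embeddings=True)
    scores = c_vecs @ q_vec  # cosine similarity (embeddings are normalized)
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]
```

In practice the corpus embeddings would be precomputed and stored in a vector index rather than encoded per query as done here to keep the sketch self-contained.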

Cite (APA)

Xu, A., Monroe, W., & Bicknell, K. (2024). Large language model augmented exercise retrieval for personalized language learning. In ACM International Conference Proceeding Series (pp. 284–294). Association for Computing Machinery. https://doi.org/10.1145/3636555.3636883
