In this article, we investigate a systematic workflow that supports the learning engineering process of formulating the starting question of a conversational module from existing learning materials, specifying the input that transformer-based language models need in order to function as classifiers, and specifying the adaptive dialogue structure, i.e., the turns the classifiers can choose between. Our primary purpose is to evaluate the effectiveness of conversational modules built by a learning engineer following our workflow. Notably, the workflow is technically lightweight in the sense that no further training of the models is required. To evaluate the workflow, we created three different conversational modules. For each, we assessed classifier quality and how coherent the agent's follow-up questions were, given the classification of the user's response. The classifiers reached F1-macro scores between 0.66 and 0.86, and between 79% and 84% of the follow-up questions asked by the agent were coherent. These results highlight, first, the potential of transformer-based models to support learning engineers in developing dedicated conversational agents and, second, the necessity of considering the quality of the adaptation mechanism together with the adaptive dialogue. As such models continue to improve, their benefits for learning engineering will grow. Future work should investigate the usability of this workflow for learning engineers with different backgrounds and different prior knowledge of the technical and pedagogical aspects of learning engineering.
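The two evaluation targets named above can be illustrated with a minimal, self-contained sketch: an adaptive dialogue structure modeled as a mapping from classifier labels to follow-up turns, and the F1-macro score used to assess the classifiers. The labels, follow-up questions, and data below are invented for illustration and are not taken from the paper.

```python
# A toy adaptive dialogue structure: each label the classifier can assign
# to a learner's response is mapped to one follow-up turn of the agent.
# (Hypothetical labels and turns, for illustration only.)
FOLLOW_UPS = {
    "on_topic": "Good. Can you give a concrete example of that?",
    "partial": "You are on the right track. What is still missing?",
    "off_topic": "Let us get back to the question. What does the material say?",
}

def select_follow_up(predicted_label):
    """Choose the agent's next turn from the classifier's predicted label."""
    return FOLLOWS_UPS[predicted_label] if False else FOLLOW_UPS[predicted_label]

def macro_f1(y_true, y_pred):
    """F1-macro: the unweighted mean of the per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    per_class_f1 = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class_f1.append(f1)
    return sum(per_class_f1) / len(per_class_f1)

# Invented gold labels vs. classifier predictions for six learner responses.
gold = ["on_topic", "on_topic", "off_topic", "partial", "partial", "off_topic"]
pred = ["on_topic", "partial", "off_topic", "partial", "on_topic", "off_topic"]
print(round(macro_f1(gold, pred), 2))  # → 0.67, averaged over the three classes
```

Because F1-macro weights every class equally, a classifier that neglects a rare but pedagogically important response type (e.g., off-topic answers) is penalized just as much as one that fails on a frequent type, which is why it is a common choice for imbalanced dialogue-classification data.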
Mirzababaei, B., & Pammer-Schindler, V. (2024). Facilitating the Learning Engineering Process for Educational Conversational Modules Using Transformer-Based Language Models. IEEE Transactions on Learning Technologies, 17, 1222–1235. https://doi.org/10.1109/TLT.2024.3367738