Questions are among the most essential assessment components in education. Although technologies such as intelligent tutoring systems (ITS) have revolutionized the delivery of questions, question generation (QG) still relies largely on expert knowledge. QG requires instructors to address multifaceted aspects of teaching, including student performance, learning goals, and the coverage of concepts and topics. To the best of our knowledge, little research has investigated the structural characteristics of instructor-made questions (written for specific students in a class), textbook questions (written for a broader general readership), and their relationship with student performance in practice. This work used local knowledge graphs (LKGs) to analyze the structural features of instructor-made multiple-choice questions and questions drawn from textbooks. The results showed that the instructor-made questions were considerably less complex than the textbook questions in terms of concept diversity. Moreover, the complexity of the network components involved in a question was significantly correlated with student performance in a classification analysis.
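The analysis described above can be illustrated with a minimal sketch. This is not the authors' implementation; it simply assumes that each question is annotated with the concepts it touches, links concepts that co-occur within a question into a small undirected graph, and reads off two structural proxies mentioned in the abstract: concept diversity (number of distinct concepts) and the number of connected components. The concept labels are hypothetical examples.

```python
# Illustrative sketch (assumed, not the paper's actual method): build a
# co-occurrence graph over question concepts and compute simple
# structural complexity measures.
from itertools import combinations

def lkg_edges(question_concepts):
    # Link every pair of concepts that appear in the same question.
    edges = set()
    for concepts in question_concepts:
        for a, b in combinations(sorted(set(concepts)), 2):
            edges.add((a, b))
    return edges

def connected_components(nodes, edges):
    # Depth-first traversal over the undirected co-occurrence graph.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - seen)
        comps.append(comp)
    return comps

# Hypothetical concept annotations for two multiple-choice questions.
questions = [["for-loop", "list", "index"], ["class", "method"]]
nodes = {c for q in questions for c in q}
comps = connected_components(nodes, lkg_edges(questions))
print(len(nodes), len(comps))  # concept diversity and component count
```

Under this toy annotation, the two questions share no concepts, so the graph splits into two components; richer questions that bridge many concepts would yield larger, denser components, which is the kind of structural difference the study examines.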
CITATION STYLE
Chung, C. Y., & Hsiao, I. H. (2022). Semantic Modeling of Programming Practices with Local Knowledge Graphs: The Effects of Question Complexity on Student Performance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13356 LNCS, pp. 245–249). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-11647-6_44