Tests are an essential tool for assessing students' abilities. In online education these tests are mostly static, presenting the same items to every student. Computerized adaptive testing, in contrast, exploits the information about the test taker that an online test collects automatically. The aim is a comparably precise test result with fewer test items (questions). An implementation of such a computerized adaptive test (CAT) is presented here. The adaptation process relies on precise knowledge of the item parameters, e.g. difficulty, in the item pool. After each answer, the knowledge level of the test taker is re-estimated in real time, and the next item is selected on the basis of this estimate. This leads to a highly individualized test for each test taker. The parameters of all items were determined with methods of item response theory (IRT) within the framework of probabilistic test theory; for this, real test results of former first-year students in engineering science were analyzed. A prototype of such a CAT has been developed. It focuses on a physics test for prospective students in the STEM fields. In fall 2021 a pilot phase was conducted with first-year students in engineering science. The CAT achieves the same precision with a mean of 9.3 items, compared to 12 in the static test. Acceptance among the students is high, and the correlation between the static test and the CAT is satisfactory.
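The adaptive loop described above (re-estimate ability after each answer, then pick the most informative remaining item) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Rasch (1PL) IRT model, an EAP ability estimate with a standard-normal prior on a quadrature grid, maximum-Fisher-information item selection, and a small hypothetical item pool.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct answer under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_estimate(responses):
    """EAP ability estimate from (difficulty, correct) pairs, using an
    unnormalized N(0,1) prior evaluated on a fixed grid over [-4, 4]."""
    grid = [-4.0 + 0.2 * k for k in range(41)]
    weights = []
    for theta in grid:
        w = math.exp(-0.5 * theta * theta)  # standard-normal prior
        for b, correct in responses:        # multiply in the likelihood
            p = rasch_prob(theta, b)
            w *= p if correct else (1.0 - p)
        weights.append(w)
    total = sum(weights)
    return sum(t * w for t, w in zip(grid, weights)) / total

def next_item(theta, pool, administered):
    """Pick the unadministered item maximizing Fisher information,
    which for the Rasch model is I(theta) = p * (1 - p)."""
    best, best_info = None, -1.0
    for item_id, b in pool.items():
        if item_id in administered:
            continue
        p = rasch_prob(theta, b)
        info = p * (1.0 - p)
        if info > best_info:
            best, best_info = item_id, info
    return best

# Hypothetical item pool: item id -> difficulty b
pool = {"q1": -1.2, "q2": -0.4, "q3": 0.0, "q4": 0.7, "q5": 1.5}
# Responses so far: correct on q2 (b=-0.4), incorrect on q4 (b=0.7)
responses = [(-0.4, True), (0.7, False)]
theta = eap_estimate(responses)
print(round(theta, 2), next_item(theta, pool, {"q2", "q4"}))
```

Because Rasch-model information peaks where item difficulty matches the current ability estimate, the loop naturally steers each test taker toward items near their own level, which is what allows the CAT to reach the static test's precision with fewer items.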
Müller, U. C., Huelmann, T., Haustermann, M., Hamann, F., Bender, E., & Sitzmann, D. (2022). FIRST RESULTS OF COMPUTERIZED ADAPTIVE TESTING FOR AN ONLINE PHYSICS TEST. In SEFI 2022 - 50th Annual Conference of the European Society for Engineering Education, Proceedings (pp. 1377–1387). European Society for Engineering Education (SEFI). https://doi.org/10.5821/conference-9788412322262.1273