Long-menu questions in computer-based assessments: a retrospective observational study

Abstract

Background: Computer-based assessments in paediatrics at our institution use a series of clinical cases, in which information is delivered to the students progressively, in sequential order. Three question formats are mainly used: Type A (single answer), Pick N, and Long-menu. Long-menu questions rely on a long, hidden list of possible answers: based on the student's initial free-text response, the program narrows the list, allowing the student to select the answer. This study analyses the psychometric properties of Long-menu questions compared with the two other commonly used formats, Type A and Pick N.

Methods: We reviewed the difficulty level and discrimination index of the items in the paediatric exams from 2009 to 2015, and compared the Long-menu questions with the Type A and Pick N questions using multi-way analyses of variance.

Results: Our dataset covered 13 exam sessions with 855 students; 558 items were included in the analysis: 212 (38 %) Long-menu, 201 (36 %) Pick N, and 140 (25 %) Type A. There was a significant format effect associated with both the level of difficulty (p = .005) and the discrimination index (p
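For readers unfamiliar with the two psychometric indices analysed above, the sketch below illustrates how they are commonly computed from a binary-scored response matrix: the difficulty level as the proportion of students answering an item correctly, and the discrimination index as the corrected point-biserial correlation between an item score and the total score on the remaining items. This is a minimal illustration of the standard classical test theory definitions, not the authors' actual analysis code; the function and variable names (item_statistics, responses) are hypothetical.

```python
import numpy as np

def item_statistics(responses):
    """Classical test theory item statistics.

    responses: 2-D array of shape (n_students, n_items) with binary
    scores (1 = correct, 0 = incorrect).

    Returns (difficulty, discrimination):
      difficulty     -- proportion of students answering each item correctly
      discrimination -- point-biserial correlation between each item score
                        and the total score on the remaining items
    """
    responses = np.asarray(responses, dtype=float)
    n_students, n_items = responses.shape

    # Difficulty level: higher values mean easier items.
    difficulty = responses.mean(axis=0)

    discrimination = np.empty(n_items)
    total = responses.sum(axis=1)
    for j in range(n_items):
        rest = total - responses[:, j]      # total score excluding item j
        if responses[:, j].std() == 0 or rest.std() == 0:
            discrimination[j] = np.nan      # item answered identically by everyone
        else:
            discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination
```

For example, an item answered correctly by 80 % of students has a difficulty of 0.80, while a discrimination index near zero suggests the item does not separate stronger from weaker students.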

Citation (APA)

Cerutti, B., Blondon, K., & Galetto, A. (2016). Long-menu questions in computer-based assessments: a retrospective observational study. BMC Medical Education, 16(1). https://doi.org/10.1186/s12909-016-0578-4
