Abstract
Many research papers draw data from student surveys. But are those surveys well designed? Are the questions validated? Are the results comparable across studies? What exactly are we asking our students? In this work, we performed a systematic literature map of the past 15 years of papers in the three main conferences sponsored by the ACM Special Interest Group on Computer Science Education: International Computing Education Research (ICER), Innovation and Technology in Computer Science Education (ITiCSE), and the Special Interest Group on Computer Science Education Technical Symposium (SIGCSE). We searched for all papers referring to student surveys or questionnaires. Of the 1313 papers analyzed, 42 referred to surveys containing general questions applicable to many or all computer science students. Our analysis showed that many papers used surveys to extract similar types of information, such as demographics, prior experience, or motivation to study computer science. However, the questions were asked in different ways and with different scales, making it difficult or impossible to compare survey results across studies. We further found that while some studies based their questions on well-validated surveys, or at least shared their questions for possible later validation, approximately half of the papers neither validated their questions nor shared them to allow for post-hoc validation.
Zavaleta Bernuy, A., & Harrington, B. (2020). What are We Asking our Students? A Literature Map of Student Surveys in Computer Science Education. In Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE (pp. 418–424). Association for Computing Machinery. https://doi.org/10.1145/3341525.3387383