Many countries use national-level surveys to capture student opinions about their university experiences. Interpreting survey results in an appropriate context is necessary to inform decision-making at many levels. To provide context to national survey outcomes, we describe patterns in the ratings of science and engineering subjects from the UK’s National Student Survey (NSS). New, robust statistical models describe relationships between the “Overall Satisfaction” rating and the preceding 21 core survey questions. Subjects exhibited consistent differences. Ratings of “Teaching”, “Organisation” and “Support” were thematic predictors of “Overall Satisfaction”, and the best single predictor was “The course was well designed and running smoothly”. General levels of satisfaction with feedback were low, but questions about feedback were ultimately the weakest predictors of “Overall Satisfaction”. Analysis by UK university affiliation grouping revealed that the more traditional “1994” and “Russell” groups over-performed in a model using the 21 core survey questions to predict “Overall Satisfaction”, in contrast to the under-performing newer universities in the “Million+” and “Alliance” groups. These findings contribute to the debate about “level playing fields” for the interpretation of survey outcomes worldwide, in terms of differences between subjects, institutional types and questionnaire items.
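The abstract does not specify the exact model form, so the following is only a minimal sketch of one plausible reading: a robust (Huber-weighted) linear regression of the “Overall Satisfaction” item on the 21 core NSS questions, with group-level over- or under-performance then read off the residuals. The data, column names (Q1–Q21, Q22_overall) and coefficients are entirely synthetic and illustrative, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic institution-by-subject mean scores for 21 hypothetical core
# questions plus an overall-satisfaction item; values are made up.
rng = np.random.default_rng(0)
n = 200
scores = pd.DataFrame(
    rng.uniform(3.0, 4.8, size=(n, 21)),
    columns=[f"Q{i}" for i in range(1, 22)],
)
# Fake response driven mainly by two of the core questions plus noise.
scores["Q22_overall"] = (
    0.5 * scores["Q1"] + 0.3 * scores["Q15"] + rng.normal(0, 0.1, n)
)

# Robust linear regression (Huber weights) of overall satisfaction on the
# 21 core questions -- one possible "robust statistical model".
X = sm.add_constant(scores[[f"Q{i}" for i in range(1, 22)]])
fit = sm.RLM(scores["Q22_overall"], X, M=sm.robust.norms.HuberT()).fit()
print(fit.summary())

# Residuals (observed minus fitted satisfaction) could then be averaged by
# mission group to gauge which groups over- or under-perform the model.
residuals = scores["Q22_overall"] - fit.fittedvalues
```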
Langan, A. M., Dunleavy, P., & Fielding, A. (2013). Applying models to national surveys of undergraduate science students: What affects ratings of satisfaction? Education Sciences, 3(2), 193–207. https://doi.org/10.3390/educsci3020193