Human raters’ assessment of interpreting is a complex process. Previous researchers have relied mainly on verbal reports to examine this process. To advance our understanding, we conducted an empirical study, collecting raters’ eye-movement and retrospection data in a computerised interpreting assessment in which three groups of raters (n = 35) used an analytic rubric to assess 12 English-to-Chinese consecutive interpretations. We examined how the raters interacted with the source text, the rating scale, and the audio player displayed on the computer screen while they were assessing. We found that a) the source text and the rating scale competed for the raters’ visual attention, with the former attracting more attention than the latter across the rater groups; b) when consulting the rating scale, the raters fixated less frequently on the sub-scale of target language quality than on the other two sub-scales; c) the rater groups did not exhibit substantially discrepant gaze behaviours overall, although different eye-movement patterns emerged for certain sub-scales; and d) the raters utilised an array of strategies and shortcuts to facilitate their assessment. We discuss these findings in relation to rater training and the validation of score meaning for interpreting assessment.
CITATION STYLE
Han, C., Zheng, B., Xie, M., & Chen, S. (2024). Raters’ scoring process in assessment of interpreting: An empirical study based on eye tracking and retrospective verbalisation. The Interpreter and Translator Trainer, 18(3), 400–422. https://doi.org/10.1080/1750399X.2024.2326400