Attention-Sharing Initiative of Multimodal Processing in Simultaneous Interpreting

  • Li T
  • Fan B

Abstract

This study sets out to describe simultaneous interpreters' attention-sharing initiatives when exposed to input from both a videotaped speech recording and its real-time transcription. Dividing mental energy across sources of visual input accords with the human brain's statistical optimization principle, whereby the same property of an object is presented in diverse ways. To examine professional interpreters' initiatives, the authors invited five professional English-Chinese conference interpreters to simultaneously interpret a videotaped speech accompanied by real-time captions generated by a speech recognition engine, while their eye movements were monitored. The results indicate that the professional interpreters preferred to refer to the visually presented captions along with the speaker's facial expressions, and that low-frequency words, proper names, and numbers received more attention than higher-frequency words. This phenomenon might be explained by working memory theory, in which the central executive enables redundancy gains retrieved from dual-channel information.

Cite (APA)

Li, T., & Fan, B. (2020). Attention-Sharing Initiative of Multimodal Processing in Simultaneous Interpreting. International Journal of Translation, Interpretation, and Applied Linguistics, 2(2), 42–53. https://doi.org/10.4018/ijtial.20200701.oa4
