Quality-assessment models for live interlingual subtitling are virtually non-existent. In this study we investigate whether, and to what extent, existing models from related translation modes provide a good starting point, more specifically the Named Entity Recognition (NER) model for intralingual live subtitling. Having conducted a survey of the major quality parameters in different forms of subtitling, we proceed to adapt this model. The model measures live intralingual quality on the basis of different types of recognition error made by the speech-recognition software and edition errors made by the respeaker, with reference to their impact on the viewer’s comprehension. To test the adapted model we conducted a context-based study comprising the observation of the live interlingual subtitling process of four episodes of Dansdate, broadcast by the Flemish commercial broadcaster VTM in 2015. The process observed involved four “subtitlers”: a respeaker/interpreter, a corrector, a speech-to-text interpreter and a broadcaster, all of whom performed different functions. The data collected allow errors in the final product and in the intermediate stages to be identified, including when and by whom they were made. The results show that the NER model can be applied to live interlingual subtitling if it is adapted to deal with errors specific to translation proper.
Robert, I. S., & Remael, A. (2017). Assessing quality in live interlingual subtitling: A new challenge. Linguistica Antverpiensia, New Series – Themes in Translation Studies, 16, 168–195. https://doi.org/10.52034/lanstts.v16i0.454