VideoTRAN: A translation framework for audiovisual face-to-face conversations

Abstract

Face-to-face communication remains the most powerful form of human interaction. Electronic devices cannot fully replace the intimacy and immediacy of people conversing in the same room, or at least via a videophone. Facial expressions and vocal intonation provide many subtle cues that let us know how what we are saying is affecting the other person. Transmitting these nonverbal cues is very important when translating conversations from a source language into a target language. This chapter describes VideoTRAN, a conceptual framework for translating audiovisual face-to-face conversations. A simple method for audiovisual alignment in the target language is proposed, and the process of audiovisual speech synthesis is described. The VideoTRAN framework has been tested in a translating videophone: an H.323 software-client translating videophone that enables the transmission and translation of a set of multimodal verbal and nonverbal cues in a multilingual face-to-face communication setting. © Springer-Verlag Berlin Heidelberg 2007.
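The abstract mentions audiovisual alignment in the target language: translated speech must fit the time slot of the original utterance so that audio and video stay synchronized. As a rough illustration of that idea (not VideoTRAN's actual method — the function name, parameters, and the clamping threshold below are assumptions), one could compute a uniform time-scaling factor for the synthesized target speech, bounded to a perceptually tolerable stretch range:

```python
# Hypothetical sketch of audiovisual alignment by uniform time scaling.
# All names and the max_stretch bound are illustrative assumptions,
# not part of the VideoTRAN framework described in the chapter.

def alignment_scale(source_duration_s: float, target_duration_s: float,
                    max_stretch: float = 1.3) -> float:
    """Return the factor by which target-language phoneme durations
    should be multiplied so the translated utterance matches the
    source utterance's time slot, clamped so the speech is neither
    stretched nor compressed beyond max_stretch."""
    if target_duration_s <= 0:
        raise ValueError("target duration must be positive")
    factor = source_duration_s / target_duration_s
    return min(max(factor, 1.0 / max_stretch), max_stretch)

# Example: a 2.0 s source utterance whose translation synthesizes to
# 2.5 s would be compressed by a factor of 0.8 to fit the video segment.
print(alignment_scale(2.0, 2.5))
```

In practice, per-phoneme or per-pause scaling (e.g. lengthening silences rather than speech) tends to sound more natural than uniform scaling, but the clamped global factor conveys the basic constraint.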

Citation (APA)

Gros, J. Ž. (2007). VideoTRAN: A translation framework for audiovisual face-to-face conversations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4775 LNAI, pp. 219–226). Springer Verlag. https://doi.org/10.1007/978-3-540-76442-7_19
