In this paper we presented a novel system for wireless videoconferencing. Unlike conventional videoconferencing systems, our approach requires no visual input beyond a single neutral image of the user. Our algorithm automatically computes the user's facial expression features on the conference server from the corresponding voice audio input. These features are transmitted to the end users' mobile devices, where the final expression synthesis is performed. Because the bulky visual data stream is replaced by a small amount of feature data, a large portion of the bandwidth is saved, improving communication quality under wireless conditions. © Springer-Verlag Berlin Heidelberg 2005.
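To make the bandwidth argument concrete, here is a minimal sketch of the feature-versus-video payload trade-off. All names and sizes (frame resolution, feature count) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical per-frame payload comparison: full video frame vs. a small
# expression-feature vector. Sizes are assumed for illustration only.

VIDEO_FRAME_BYTES = 320 * 240 * 3        # one uncompressed QVGA RGB frame
EXPRESSION_FEATURES = 20                 # assumed number of expression parameters
FEATURE_BYTES = EXPRESSION_FEATURES * 4  # one 32-bit float per feature

def frame_payload(use_features: bool) -> int:
    """Bytes sent per frame: expression features only, or full video."""
    return FEATURE_BYTES if use_features else VIDEO_FRAME_BYTES

savings = 1 - frame_payload(True) / frame_payload(False)
print(f"feature payload: {frame_payload(True)} B, "
      f"video payload: {frame_payload(False)} B, "
      f"saved: {savings:.2%}")
```

Under these assumed numbers the feature stream is several orders of magnitude smaller than the raw video stream, which is the effect the paper exploits over wireless links.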
Jin, C., Bu, J., Chen, C., Song, M., & You, M. (2005). Semi-videoconference system using real-time wireless technologies. In Lecture Notes in Computer Science (Vol. 3605, pp. 287–293). Springer-Verlag. https://doi.org/10.1007/11535409_40