3D representation of videoconference image sequences using VRML 2.0

Abstract

In this paper, a procedure for the visualisation of videoconference image sequences using the Virtual Reality Modeling Language (VRML) 2.0 is described. First, image sequence analysis is performed in order to estimate the shape and motion parameters of the person talking in front of the camera. For this purpose, we propose the K-Means with connectivity constraint algorithm, a general segmentation algorithm combining information of various types, such as colour and motion. The algorithm is applied hierarchically to the image sequence: it is first used to separate the background from the foreground object, and then to further segment the foreground object into the head and shoulder regions. Based on this information, personal 3D shape parameters are estimated. The rigid 3D motion of each sub-object is then estimated. Finally, a VRML file is created containing all of the estimated information.
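The segmentation step described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it clusters per-pixel colour and motion features with plain K-means, and approximates the connectivity constraint by appending weighted pixel coordinates to the feature vector so that spatially close pixels tend to fall into the same cluster. The `spatial_weight` coupling, the deterministic farthest-point initialisation, and the synthetic test frame are all illustrative assumptions.

```python
import numpy as np

def kmeans_with_connectivity(features, coords, k=2, spatial_weight=0.5, iters=20):
    """Toy K-means over per-pixel features (e.g. colour + motion magnitude).

    Connectivity is approximated, not enforced: scaled pixel coordinates are
    concatenated to the feature vector, biasing clusters to be spatially
    compact. Centres are initialised deterministically from the first pixel
    and the pixel farthest from it (an illustrative choice, not the paper's).
    """
    X = np.hstack([features, spatial_weight * coords])
    d0 = np.linalg.norm(X - X[0], axis=1)
    centers = np.stack([X[0], X[d0.argmax()]]) if k == 2 else X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest centre in the joint feature space.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute each centre as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Tiny synthetic frame: a bright, moving "foreground" blob (the person)
# on a dark, static background.
h, w = 16, 16
intensity = np.zeros((h, w))
motion = np.zeros((h, w))
intensity[4:10, 4:10] = 1.0
motion[4:10, 4:10] = 1.0

ys, xs = np.mgrid[0:h, 0:w]
features = np.stack([intensity.ravel(), motion.ravel()], axis=1)
coords = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)

labels = kmeans_with_connectivity(features, coords, k=2)
```

On this synthetic frame the two clusters recover the foreground/background split, mirroring the first level of the hierarchical segmentation; in the paper the same machinery is then reapplied to the foreground pixels to separate head from shoulders.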

Citation (APA)

Kompatsiaris, I., & Strintzis, M. G. (1998). 3D representation of videoconference image sequences using VRML 2.0. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1425, pp. 1–12). Springer Verlag. https://doi.org/10.1007/3-540-64594-2_81
