Using 3-dimensional meshes to combine image-based and geometry-based constraints


Abstract

A single source of information often proves insufficient to recover complicated surfaces. In this paper, we present a unified framework for 3-D shape reconstruction that allows us to combine image-based constraints, such as those deriving from stereo and shape-from-shading, with geometry-based ones, provided here in the form of 3-D points, 3-D features, or 2-D silhouettes. Our approach to shape recovery is to deform a generic object-centered 3-D representation of the surface so as to minimize an objective function. This objective function is a weighted sum of the contributions of the various information sources. We describe these terms individually, along with our weighting scheme and our optimization method. Finally, we present results on a number of difficult images of real scenes for which a single source of information would have proved insufficient.
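The core idea of the abstract — deforming a surface representation to minimize a weighted sum of data and regularization terms — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: it uses a 1-D "mesh" of vertex heights, a single quadratic data term standing in for the image- and geometry-based constraints, a smoothness term, and plain gradient descent in place of the paper's optimization method. The weights `w_data` and `w_smooth`, and all function names, are illustrative assumptions.

```python
import numpy as np

# Toy weighted-sum objective over a 1-D "mesh" of vertex heights z:
#   E(z) = w_data   * sum_i (z_i - z_obs_i)^2       # stands in for data terms
#        + w_smooth * sum_i (z_{i+1} - z_i)^2       # surface-smoothness term
# (hypothetical sketch; the paper combines several data terms on a 3-D mesh)

def objective(z, z_obs, w_data, w_smooth):
    data = np.sum((z - z_obs) ** 2)
    smooth = np.sum(np.diff(z) ** 2)
    return w_data * data + w_smooth * smooth

def gradient(z, z_obs, w_data, w_smooth):
    # Analytic gradient of the objective above.
    g = 2.0 * w_data * (z - z_obs)
    d = np.diff(z)                 # d_i = z_{i+1} - z_i
    g[:-1] -= 2.0 * w_smooth * d   # each d_i^2 pulls z_i toward z_{i+1}
    g[1:] += 2.0 * w_smooth * d    # ... and z_{i+1} toward z_i
    return g

def deform(z_obs, w_data=1.0, w_smooth=10.0, lr=1e-3, steps=5000):
    # Deform a flat initial surface by gradient descent on the objective.
    z = np.zeros_like(z_obs)
    for _ in range(steps):
        z -= lr * gradient(z, z_obs, w_data, w_smooth)
    return z
```

Raising `w_smooth` relative to `w_data` yields a flatter recovered surface; in the paper's setting, analogous weights balance stereo, shading, and geometric constraints against each other.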

Citation (APA)

Fua, P., & Leclerc, Y. G. (1994). Using 3-dimensional meshes to combine image-based and geometry-based constraints. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 801 LNCS, pp. 281–291). Springer Verlag. https://doi.org/10.1007/bfb0028361
