Automatic blending of multiple perspective views for aesthetic composition

Citations: N/A
Readers: 9 (Mendeley users who have this article in their library)
Abstract

Hand-drawn pictures differ from ordinary perspective images in that the entire scene is composed of local feature regions, each of which is projected individually as seen from its own vantage point. This type of projection, called nonperspective projection, has long served as a common medium of visual communication, although its automatic generation still requires further research. This paper presents an approach to automatically generating aesthetic nonperspective images by simulating the deformation principles seen in such hand-drawn pictures. The proposed approach first locates the optimal viewpoint for each feature region by maximizing the associated viewpoint entropy value. These optimal viewpoints are then incorporated into a 3D field of camera parameters, which is represented by regular grid samples in the 3D scene space. Finally, the camera parameters are smoothed out, by taking advantage of image restoration techniques, in order to eliminate unexpected discontinuities between neighboring feature regions. Several nonperspective images are generated to demonstrate the applicability of the proposed approach. © 2010 Springer-Verlag Berlin Heidelberg.
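The viewpoint-selection step described in the abstract relies on viewpoint entropy: the Shannon entropy of the distribution of projected face areas, which is highest when a viewpoint reveals the region's faces most evenly. A minimal sketch of that criterion is shown below; the function names and the candidate area data are illustrative assumptions, not taken from the paper, and a real system would obtain the projected areas by rendering each feature region from sampled viewpoints.

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the visible-face area distribution for one viewpoint.

    projected_areas: screen-space areas of the faces visible from this
    viewpoint (the background area may be included as one extra term).
    Higher entropy means the faces are seen more evenly.
    """
    total = sum(projected_areas)
    if total <= 0:
        return 0.0
    entropy = 0.0
    for area in projected_areas:
        if area > 0:
            p = area / total          # relative visibility of this face
            entropy -= p * math.log2(p)
    return entropy

def best_viewpoint(candidates):
    """Pick the candidate viewpoint that maximizes viewpoint entropy.

    candidates: dict mapping a viewpoint label to the list of projected
    face areas observed from it (hypothetical input format).
    """
    return max(candidates, key=lambda v: viewpoint_entropy(candidates[v]))

# A viewpoint showing three faces evenly beats one dominated by a single face.
views = {"frontal": [4.0, 4.0, 4.0], "oblique": [10.0, 1.0, 1.0]}
print(best_viewpoint(views))  # → frontal
```

Per the abstract, the winning viewpoint for each feature region is then written into a regular-grid 3D field of camera parameters and smoothed with image-restoration-style techniques to remove discontinuities between neighboring regions.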

Citation (APA)

Mashio, K., Yoshida, K., Takahashi, S., & Okada, M. (2010). Automatic blending of multiple perspective views for aesthetic composition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6133 LNCS, pp. 220–231). https://doi.org/10.1007/978-3-642-13544-6_21
