Representing 3D Shape and Location

  • Glennerster A
Abstract

3D shape may be best understood in terms of the 2D image changes that occur when an observer moves with respect to a surface, rather than by supposing that the visual system relies on a 3D coordinate frame. The same may be true of object location. In fact, a view-based representation applicable to all the images visible from many vantage points (a ‘universal primal sketch’) may be a better way to describe the visual system’s stored knowledge about surface shape and object location than object-, head-, body- or world-centered 3D representations. This chapter describes a hierarchical encoding of image features based on the MIRAGE algorithm (Watt in J. Opt. Soc. Am. A 4:2006–2021, 1987) and discusses how this could be extended to survive head movements. Psychophysical findings are discussed that appear paradoxical if the brain generates a consistent 3D representation of surfaces or object location, whereas they are simple to explain if the visual system only computes task-relevant information once the task is defined. The minimum requirements for a useful visual representation of 3D shape and location do not include internal consistency.
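The core idea of the MIRAGE scheme cited above is to combine the outputs of band-pass (difference-of-Gaussians) filters at several spatial scales by summing their rectified positive and negative parts separately, yielding signed response maps from which image features can be parsed. As a rough illustration only (the filter scales, ratio, and 1D simplification here are illustrative assumptions, not the published parameters), a minimal sketch might look like this:

```python
import math

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel, truncated at ~3 sigma."""
    radius = int(3 * sigma)
    k = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1D convolution with edge values clamped (replicated) at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma, ratio=1.6):
    """Difference-of-Gaussians band-pass response at one scale."""
    centre = convolve(signal, gaussian_kernel(sigma))
    surround = convolve(signal, gaussian_kernel(sigma * ratio))
    return [a - b for a, b in zip(centre, surround)]

def mirage_combine(signal, sigmas=(1.0, 2.0, 4.0)):
    """Sum positive and negative filter responses separately across scales
    (the scale values here are arbitrary choices for illustration)."""
    scales = [dog(signal, s) for s in sigmas]
    t_plus = [sum(max(sc[i], 0.0) for sc in scales) for i in range(len(signal))]
    t_minus = [sum(min(sc[i], 0.0) for sc in scales) for i in range(len(signal))]
    return t_plus, t_minus

# A step edge produces a paired positive/negative response near the edge,
# and near-zero responses in the flat regions.
step = [0.0] * 20 + [1.0] * 20
t_plus, t_minus = mirage_combine(step)
```

The separate signed sums are what allow the later parsing stage to group responses into zero-bounded regions, the primitives from which features such as edges and bars are described.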

Citation (APA)

Glennerster, A. (2013). Representing 3D Shape and Location. In Shape Perception in Human and Computer Vision (pp. 201–212). Springer London. https://doi.org/10.1007/978-1-4471-5195-1_14
