Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth


A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information – not position-in-depth – seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location.




Finlayson, N. J., & Golomb, J. D. (2016). Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth. Vision Research, 127, 49–56. https://doi.org/10.1016/j.visres.2016.07.003
