As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth-perception mechanisms of these animals are based on how much images are defocused. In the present paper, assuming that relative chromatic-aberration or blur-radius values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.
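The latency-coding idea can be illustrated with a minimal sketch of Izhikevich's 'simple model': integrate the two-variable dynamics under a constant input current and record the time of the first spike. This is not the authors' implementation; the regular-spiking parameters (a, b, c, d), the Euler step size, and the use of a constant current as a stand-in for the defocus-cue signal are illustrative assumptions.

```python
def first_spike_latency(I, a=0.02, b=0.2, c=-65.0, d=8.0,
                        dt=0.25, t_max=200.0):
    """Time (ms) of the first spike of an Izhikevich 'simple model'
    neuron driven by a constant input current I.
    Returns None if the neuron stays silent up to t_max."""
    v = -65.0        # membrane potential (mV), resting value
    u = b * v        # recovery variable at rest
    t = 0.0
    while t < t_max:
        # Euler step of the simple-model dynamics (Izhikevich, 2003):
        # v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        t += dt
        if v >= 30.0:          # spike peak reached
            return t           # first-spike latency encodes the input strength
    return None
```

A stronger input produces an earlier first spike, so the latency is a monotone (decreasing) code of the driving signal's magnitude, which is the property the sensory-motor relation in the abstract relies on.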
Supèr, H., & Romeo, A. (2014). Coding depth perception from image defocus. Vision Research, 105, 199–203. https://doi.org/10.1016/j.visres.2014.10.022