This paper describes a new model for extracting large-field optical flow patterns and generating distributed representations of neural activation to support complex visual tasks such as 3D egomotion estimation. The neural mechanisms draw upon experimental findings about the response properties and specificities of cells in areas V1, MT and MSTd along the dorsal pathway. Model V1 cells detect local motion estimates. Model MT cells in different pools are proposed to be selective to motion patterns integrated from V1 as well as to velocity gradients. Model MSTd cells considered here integrate MT gradient cells over a much larger spatial neighborhood to generate the observed pattern selectivity for expansion/contraction, rotation and spiral motion, providing the necessary input for spatial navigation mechanisms. Our model also incorporates feedback processing between areas V1-MT and MT-MSTd. We demonstrate that such a re-entry of context-related information helps to disambiguate and stabilize more localized processing along the primary motion pathway. © Springer-Verlag Berlin Heidelberg 2007.
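The abstract gives no equations, but the idea of integrating MT-like velocity-gradient responses over a large neighborhood to obtain MSTd-like pattern selectivity can be sketched with first-order flow differentials: the divergence of a flow field signals expansion/contraction, the curl signals rotation, and both together signal spiral motion. The following is a minimal illustrative sketch, not the authors' implementation; the function names `flow_field` and `mstd_responses` are hypothetical.

```python
import numpy as np

def flow_field(pattern, size=32):
    """Synthetic large-field flow centered in the image: expansion,
    rotation, or a spiral mixing the two (hypothetical test stimuli)."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    x = xs - size / 2.0
    y = ys - size / 2.0
    if pattern == "expansion":
        u, v = x, y            # radial outflow
    elif pattern == "rotation":
        u, v = -y, x           # counter-clockwise rotation
    else:                      # "spiral": equal mix of both
        u, v = x - y, y + x
    return u, v

def mstd_responses(u, v):
    """Local velocity gradients (loosely, model MT gradient cells),
    pooled over the whole field (loosely, a model MSTd cell)."""
    du_dx = np.gradient(u, axis=1)
    du_dy = np.gradient(u, axis=0)
    dv_dx = np.gradient(v, axis=1)
    dv_dy = np.gradient(v, axis=0)
    divergence = np.mean(du_dx + dv_dy)  # expansion/contraction selectivity
    curl = np.mean(dv_dx - du_dy)        # rotation selectivity
    return divergence, curl
```

For the linear fields above the gradients are exact, so an expansion stimulus yields divergence 2 with zero curl, a rotation stimulus yields curl 2 with zero divergence, and the spiral yields both; a real MSTd model would additionally rectify and tune these pooled signals.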
CITATION STYLE
Ringbauer, S., Bayerl, P., & Neumann, H. (2007). Neural mechanisms for mid-level optical flow pattern detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4669 LNCS, pp. 281–290). Springer Verlag. https://doi.org/10.1007/978-3-540-74695-9_29