Lattice models for context-driven regularization in motion perception


Abstract

Real-world motion field patterns contain intrinsic statistical properties that allow Gestalts to be defined as groups of pixels sharing the same motion property. Checking for the presence of such Gestalts in optic flow fields makes their interpretation more reliable. We propose a context-sensitive recurrent filter capable of evidencing motion Gestalts corresponding to first-order elementary flow components (EFCs). A Gestalt emerges from a noisy flow field as the solution of an iterative process of spatially interacting nodes that correlates the properties of the visual context with those of a structural model of the Gestalt. By proper specification of the interconnection scheme, the approach can be straightforwardly extended to model any type of multimodal spatio-temporal relationship (i.e., a multimodal spatio-temporal context). © Springer-Verlag Berlin Heidelberg 2003.
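The abstract only outlines the recurrent lattice filter, so the sketch below is a rough illustration rather than the authors' formulation: each lattice node iteratively blends its noisy flow measurement, the average of its four neighbours, and a structural template of a first-order elementary flow component (here, an expansion pattern). The function names, the blending weights, and the choice of expansion as the template are all assumptions made for this example; the paper's actual interconnection scheme, which correlates the visual context with the structural model, is only crudely imitated by the template-pull term.

```python
import numpy as np

def expansion_template(h, w):
    """Assumed first-order EFC template: unit radial (expansion) flow
    centred on the lattice. One (dx, dy) vector per node."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.stack([xs - cx, ys - cy], axis=-1).astype(float)
    norm = np.linalg.norm(t, axis=-1, keepdims=True)
    return t / np.maximum(norm, 1e-9)

def lattice_regularize(flow, template, n_iters=50,
                       w_data=0.3, w_smooth=0.5, w_ctx=0.2):
    """Iteratively relax a noisy flow field of shape (H, W, 2) on a 2-D lattice.

    Each node's estimate is a weighted blend of (i) its original measurement
    (data term), (ii) the average of its four lattice neighbours (smoothness),
    and (iii) the structural Gestalt template (context term). Weights are
    illustrative and sum to one.
    """
    u = flow.copy()
    for _ in range(n_iters):
        # Four-neighbour average with replicated borders.
        up = np.roll(u, 1, axis=0);    up[0] = u[0]
        down = np.roll(u, -1, axis=0); down[-1] = u[-1]
        left = np.roll(u, 1, axis=1);  left[:, 0] = u[:, 0]
        right = np.roll(u, -1, axis=1); right[:, -1] = u[:, -1]
        neighbour_avg = 0.25 * (up + down + left + right)
        # Recurrent update: data fidelity + neighbour smoothing + context pull.
        u = w_data * flow + w_smooth * neighbour_avg + w_ctx * template
    return u

if __name__ == "__main__":
    h, w = 32, 32
    template = expansion_template(h, w)
    noisy = template + 0.5 * np.random.randn(h, w, 2)   # noisy expansion flow
    cleaned = lattice_regularize(noisy, template)
    err_before = np.mean(np.linalg.norm(noisy - template, axis=-1))
    err_after = np.mean(np.linalg.norm(cleaned - template, axis=-1))
    print(f"mean deviation from EFC template: {err_before:.3f} -> {err_after:.3f}")
```

Running the script shows the mean deviation from the expansion template dropping after a few dozen iterations, which is the qualitative behaviour one would expect from a context-driven regularizer; it is not a reproduction of the paper's results.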

Citation (APA)

Sabatini, S. P., Solari, F., & Bisio, G. M. (2003). Lattice models for context-driven regularization in motion perception. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2859, 35–42. https://doi.org/10.1007/978-3-540-45216-4_3
