Entropy Minimisation Framework for Event-Based Vision Model Estimation

Abstract

We propose a novel Entropy Minimisation (EMin) framework for event-based vision model estimation. The framework extends previous event-based motion compensation algorithms to handle models whose outputs have arbitrary dimensions. The main motivation comes from estimating motion from events directly in 3D space (e.g. events augmented with depth), without projecting them onto an image plane. This is achieved by modelling the event alignment according to candidate parameters and minimising the resultant dispersion. We provide a family of suitable entropy loss functions and an efficient approximation whose complexity is only linear with the number of events (e.g. the complexity does not depend on the number of image pixels). The framework is evaluated on several motion estimation problems, including optical flow and rotational motion. As proof of concept, we also test our framework on 6-DOF estimation by performing the optimisation directly in 3D space.
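The core idea of the abstract — warping events by candidate model parameters and minimising the dispersion of the result — can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' implementation: it uses a naive O(n²) pairwise Gaussian kernel sum as the entropy-like dispersion measure (the paper's approximation brings this to linear complexity in the number of events), a constant optical-flow model, and synthetic events generated from a known flow.

```python
import numpy as np
from scipy.optimize import minimize

def warp_events(xy, t, v):
    """Warp event coordinates back to a reference time using candidate flow v.
    If v matches the true motion, events from the same scene point collapse."""
    return xy - t[:, None] * np.asarray(v)[None, :]

def dispersion_loss(v, xy, t, sigma=1.0):
    """Entropy-like dispersion of the warped events: the negated sum of
    pairwise Gaussian kernels. Well-aligned (low-dispersion) event clouds
    give a low value. Naive O(n^2); the paper's truncated-kernel
    approximation is linear in the number of events."""
    w = warp_events(xy, t, v)
    d2 = np.sum((w[:, None, :] - w[None, :, :]) ** 2, axis=-1)
    return -np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))

# Hypothetical synthetic data: 20 scene points, each firing 10 events
# while translating with a known constant flow.
rng = np.random.default_rng(0)
true_v = np.array([1.0, -0.5])
origins = np.repeat(rng.uniform(0.0, 10.0, (20, 2)), 10, axis=0)
t = rng.uniform(0.0, 1.0, origins.shape[0])
xy = origins + t[:, None] * true_v[None, :]

# Minimise dispersion over the candidate flow parameters.
res = minimize(dispersion_loss, x0=np.zeros(2), args=(xy, t),
               method="Nelder-Mead")
print(res.x)  # estimated flow; approaches true_v on this noiseless toy data
```

Note that the 3D case described in the abstract (events augmented with depth) follows the same pattern: only the warp model and the dimensionality of the warped points change, which is precisely why a loss defined on arbitrary-dimensional outputs is useful.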

Citation (APA)
Nunes, U. M., & Demiris, Y. (2020). Entropy Minimisation Framework for Event-Based Vision Model Estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12350 LNCS, pp. 161–176). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58558-7_10
