Pose-invariant object recognition for event-based vision with slow-ELM

Abstract

Neuromorphic image sensors produce activity-driven spiking output at every pixel. These low-power imagers, which encode visual change information as spikes, reduce computational overhead and enable complex real-time systems such as object recognition and pose estimation. However, event-based vision still lacks algorithms that capture invariance to transformations. In this work, we propose a methodology for recognizing objects invariant to their pose with the Dynamic Vision Sensor (DVS). A novel slow-ELM architecture is proposed that combines the effectiveness of Extreme Learning Machines and Slow Feature Analysis. The system, tested on an Intel Core i5-4590 CPU, can perform 10,000 classifications per second and achieves a 1% classification error for 8 objects with views accumulated over 90° of 2D pose.
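
The abstract describes slow-ELM as a combination of an Extreme Learning Machine feature layer with Slow Feature Analysis, followed by classification. The sketch below is not the authors' implementation; it is a minimal illustration of that combination under stated assumptions: DVS events are assumed to be pre-binned into temporally ordered frames `X`, and the hidden size, number of slow features, and ridge parameter are illustrative choices.

```python
import numpy as np

# Minimal slow-ELM-style sketch (assumed pipeline, not the paper's code):
# random ELM hidden layer -> linear SFA -> ridge-regression readout.

rng = np.random.default_rng(0)

def elm_hidden(X, n_hidden=1000):
    """Random ELM hidden layer: fixed random weights and sigmoid activation."""
    n_in = X.shape[1]
    W = rng.standard_normal((n_in, n_hidden)) / np.sqrt(n_in)
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H, (W, b)

def sfa(H, n_slow=50):
    """Linear SFA: directions of whitened H with the smallest temporal variation."""
    Hc = H - H.mean(axis=0)
    cov = Hc.T @ Hc / len(Hc)
    d, E = np.linalg.eigh(cov)
    keep = d > 1e-8                       # drop near-null directions before whitening
    W_white = E[:, keep] / np.sqrt(d[keep])
    Z = Hc @ W_white
    dZ = np.diff(Z, axis=0)               # temporal differences of whitened features
    dcov = dZ.T @ dZ / len(dZ)
    _, U = np.linalg.eigh(dcov)           # ascending order: slowest directions first
    return W_white @ U[:, :n_slow]

def train_readout(S, y, n_classes, lam=1e-3):
    """Ridge-regression readout on slow features with one-hot targets."""
    T = np.eye(n_classes)[y]
    A = S.T @ S + lam * np.eye(S.shape[1])
    return np.linalg.solve(A, S.T @ T)

# Toy usage with random data standing in for binned DVS frames (8 classes).
X = rng.random((500, 32 * 32))
y = rng.integers(0, 8, size=500)
H, _ = elm_hidden(X)
W_slow = sfa(H)
beta = train_readout(H @ W_slow, y, n_classes=8)
pred = np.argmax((H @ W_slow) @ beta, axis=1)
```

Because the hidden weights are random and fixed, only the SFA projection and the linear readout are fitted, which is consistent with the high classification throughput reported in the abstract.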

Cite

APA

Ghosh, R., Siyi, T., Rasouli, M., Thakor, N. V., & Kukreja, S. L. (2016). Pose-invariant object recognition for event-based vision with slow-ELM. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9887 LNCS, pp. 455–462). Springer Verlag. https://doi.org/10.1007/978-3-319-44781-0_54
