Event cameras (or silicon retinas) are a novel kind of sensor that measures pixel-wise changes in brightness and outputs asynchronous events accordingly. This technology allows for sparse, energy-efficient recording and storage of visual information. While this type of data is sparse by definition, the event flow can be very high, up to 25 million events per second, which requires significant processing resources and therefore impedes embedded applications. Neuromorphic computer vision and event-sensor-based applications are receiving increasing interest from the computer vision community (classification, detection, tracking, segmentation, etc.), especially for robotics and autonomous driving scenarios. Downscaling event data is an important feature of a system, particularly an embedded one, so that the complexity of the data can be adjusted to the available resources, such as processing capability and power consumption. To the best of our knowledge, this work is the first attempt to formalize event data downscaling. In order to study the impact of spatial resolution downscaling, we compare several features of the resulting data as assessment criteria: the total number of events, event density, information entropy, computation time and optical consistency. Our code is available online at https://github.com/amygruel/EvVisu.
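The abstract does not spell out the downscaling method itself, so the following is only a minimal Python sketch of the simplest variant, spatial funnelling by integer coordinate division, together with the information-entropy criterion computed over per-pixel event counts. The event layout, the function names (downscale_events, event_count_entropy) and the toy data are illustrative assumptions, not the authors' actual implementation from the EvVisu repository.

    import numpy as np

    def downscale_events(events, factor):
        # Map each event onto a coarser pixel grid by integer division
        # of its coordinates; timestamps and polarities are untouched.
        # 'events' is assumed to be a structured array with fields
        # 'x', 'y' (coordinates), 't' (timestamp), 'p' (polarity).
        out = events.copy()
        out['x'] = events['x'] // factor
        out['y'] = events['y'] // factor
        return out

    def event_count_entropy(events, width, height):
        # Shannon entropy (bits) of the per-pixel event-count
        # distribution, one possible reading of the "information
        # entropy" criterion named in the abstract.
        counts = np.zeros((height, width), dtype=np.int64)
        np.add.at(counts, (events['y'], events['x']), 1)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Usage: a synthetic 128x128 stream, downscaled by 4.
    rng = np.random.default_rng(0)
    n = 10_000
    ev = np.zeros(n, dtype=[('x', 'u2'), ('y', 'u2'),
                            ('t', 'u8'), ('p', 'i1')])
    ev['x'] = rng.integers(0, 128, n)
    ev['y'] = rng.integers(0, 128, n)
    ev['t'] = np.sort(rng.integers(0, 1_000_000, n))
    ev['p'] = rng.choice([-1, 1], n)

    small = downscale_events(ev, 4)
    print(event_count_entropy(ev, 128, 128),
          event_count_entropy(small, 32, 32))

Note that this naive funnelling merges all events from each 4x4 block into one coarse pixel without any temporal filtering, so the total event count is preserved while event density per pixel rises; the paper's comparison criteria are meant to quantify exactly such trade-offs.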
Gruel, A., Martinet, J., Serrano-Gotarredona, T., & Linares-Barranco, B. (2022). Event Data Downscaling for Embedded Computer Vision. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 4, pp. 245–253). Science and Technology Publications, Lda. https://doi.org/10.5220/0010991900003124