DeblurSR: Event-Based Motion Deblurring under the Spiking Representation


Abstract

We present DeblurSR, a novel motion deblurring approach that converts a blurry image into a sharp video. DeblurSR utilizes event data to compensate for motion ambiguities and exploits the spiking representation to parameterize the sharp output video as a mapping from time to intensity. Our key contribution, the Spiking Representation (SR), is inspired by the neuromorphic principles determining how biological neurons communicate with each other in living organisms. We discuss why the spikes can represent sharp edges and how the spiking parameters are interpreted from the neuromorphic perspective. DeblurSR has higher output quality and requires fewer computing resources than state-of-the-art event-based motion deblurring methods. We additionally show that our approach easily extends to video super-resolution when combined with recent advances in implicit neural representation.
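To make the "mapping from time to intensity" idea concrete, here is a minimal, hypothetical sketch of a spiking representation for a single pixel: intensity over time is modeled as a baseline plus a sum of smooth step ("spike") kernels, each firing at a learned time with a learned height. This is an illustration of the general concept only, not the paper's exact parameterization; the sigmoid kernel, `sharpness` parameter, and function names are assumptions.

```python
import numpy as np

def spiking_intensity(t, spike_times, spike_heights, sharpness=50.0, base=0.0):
    """Illustrative per-pixel intensity as a function of time.

    Each spike contributes a sigmoid step of height `h` centered at its
    firing time `t_s`; `sharpness` controls how steep the edge is, so
    large values approximate the sharp intensity transitions that spikes
    are meant to capture. (Hypothetical sketch, not DeblurSR's model.)
    """
    t = np.asarray(t, dtype=float)
    intensity = np.full_like(t, base)
    for t_s, h in zip(spike_times, spike_heights):
        intensity += h / (1.0 + np.exp(-sharpness * (t - t_s)))
    return intensity

# Query the mapping at arbitrary times within the exposure window:
# sharp edges appear near the spike times (0.3 and 0.7 here).
ts = np.linspace(0.0, 1.0, 5)
vals = spiking_intensity(ts, spike_times=[0.3, 0.7], spike_heights=[0.5, -0.2])
```

Because the representation is a continuous function of time, a sharp frame at any timestamp inside the blurry exposure can be recovered by evaluating it at that timestamp, which is what lets a single blurry image be decoded into a sharp video.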

Citation (APA)

Song, C., Bajaj, C., & Huang, Q. (2024). DeblurSR: Event-Based Motion Deblurring under the Spiking Representation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 4900–4908). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i5.28293
