δLTA: Decoupling Camera Sampling from Processing to Avoid Redundant Computations in the Vision Pipeline


Abstract

Continuous Vision (CV) systems are essential for emerging applications like Autonomous Driving (AD) and Augmented/Virtual Reality (AR/VR). A standard CV System-on-a-Chip (SoC) pipeline includes a frontend for image capture and a backend for executing vision algorithms. The frontend typically captures successive similar images with gradual positional and orientational variations. As a result, many regions between consecutive frames yield nearly identical results when processed in the backend. Despite this, current systems process every image region at the camera's sampling rate, overlooking the fact that the actual rate of change in these regions could be significantly lower. In this work, we introduce δLTA (δon't Look Twice, it's Alright), a novel frontend that decouples camera frame sampling from backend processing by extending the camera with the ability to discard redundant image regions before they enter subsequent CV pipeline stages. δLTA informs the backend about the image regions that have notably changed, allowing it to focus solely on processing these distinctive areas while reusing previous results to approximate the outcome for similar ones. As a result, the backend processes each image region at a different rate based on its temporal variation. δLTA features a new Image Signal Processing (ISP) design providing similarity filtering functionality, seamlessly integrated with other ISP stages to incur zero latency overhead in the worst-case scenario. It also offers an interface for frontend-backend collaboration to fine-tune similarity filtering based on the application requirements. To illustrate the benefits of this novel approach, we apply it to a state-of-the-art CV localization application, typically employed in AD and AR/VR. We show that δLTA removes a significant fraction of unneeded frontend and backend memory accesses and redundant backend computations, which reduces the application latency by 15.22% and its energy consumption by 17%.
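The core idea, forwarding only the regions that changed noticeably and reusing previous results for the rest, can be illustrated with a minimal tile-based similarity filter. This is only a sketch under stated assumptions: the function name, tile size, threshold, and mean-absolute-difference metric below are illustrative choices, not the paper's actual ISP similarity-filtering design or its frontend-backend interface.

```python
import numpy as np

def filter_similar_regions(prev_frame, curr_frame, prev_results,
                           tile=32, threshold=8.0):
    """Flag tiles whose content changed noticeably since the previous
    frame; unchanged tiles keep their cached backend results.

    Hypothetical parameters: `tile` is the square region size in pixels,
    `threshold` is the mean absolute pixel difference above which a tile
    is considered "changed" and forwarded to the backend.
    """
    h, w = curr_frame.shape[:2]
    changed = []                  # tiles the backend must (re)process
    reused = dict(prev_results)   # start from previously computed results

    for y in range(0, h, tile):
        for x in range(0, w, tile):
            a = prev_frame[y:y + tile, x:x + tile].astype(np.int16)
            b = curr_frame[y:y + tile, x:x + tile].astype(np.int16)
            if np.abs(b - a).mean() > threshold:
                changed.append((y, x))
                reused.pop((y, x), None)  # stale result, must recompute
    return changed, reused
```

Run per frame, a filter of this kind naturally yields per-region processing rates: static background tiles are skipped frame after frame, while tiles covering moving content are handed to the backend every frame.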

Cite

APA

Taranco, R., Arnau, J. M., & González, A. (2023). δLTA: Decoupling Camera Sampling from Processing to Avoid Redundant Computations in the Vision Pipeline. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2023 (pp. 1029–1043). Association for Computing Machinery, Inc. https://doi.org/10.1145/3613424.3614261
