Traditional machine vision systems use separate architectures for perception, memory, and processing, an approach that struggles to meet the growing demand for high image processing rates and low power consumption. In-sensor computing, by contrast, performs signal processing at the pixel level, operating directly on the collected analogue signals without transferring them to external processors. It therefore offers a path to highly efficient, low-power visual signal processing, achieved by integrating sensing, storage, and computation on the focal plane through novel circuit designs or new materials. This chapter describes the image processing algorithms and neural networks proposed for in-sensor computing, as well as their applications in machine vision and robotics. Its goal is to help developers, researchers, and users of unconventional visual sensors understand how these devices work and where they apply, especially in the context of autonomous driving.
Citation
Liu, Y., Ni, H., Yuwen, C., Yang, X., Ming, Y., Zhong, H., … Ran, L. (2023). In-Sensor Visual Devices for Perception and Inference. In Advances in Computer Vision and Pattern Recognition (Vol. Part F1566, pp. 1–35). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-4287-9_1