With recent developments, the performance of automotive radar has improved significantly. The next generation of 4D radar achieves imaging capability in the form of high-resolution point clouds. In this context, we believe that the era of deep learning for radar perception has arrived. However, studies on radar deep learning are spread across different tasks, and a holistic overview is lacking. This review paper provides a big picture of the deep radar perception stack, covering signal processing, datasets, labelling, data augmentation, and downstream tasks such as depth and velocity estimation, object detection, and sensor fusion. For these tasks, we focus on explaining how network structures are adapted to radar domain knowledge. In particular, we summarise three overlooked challenges in deep radar perception, namely multipath effects, uncertainty problems, and adverse weather effects, and present some attempts to solve them.
Zhou, Y., Liu, L., Zhao, H., López-Benítez, M., Yu, L., & Yue, Y. (2022, June 1). Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges. Sensors. MDPI. https://doi.org/10.3390/s22114208