Abstract
Autonomous systems require continuous and dependable environment perception for navigation and decision-making, which is best achieved by combining different sensor types. Radar continues to function robustly in compromised circumstances in which cameras become impaired, guaranteeing a steady inflow of information. Camera images, however, provide a more intuitive and readily applicable impression of the world. This work combines the complementary strengths of both sensor types in a unique self-learning fusion approach for probabilistic scene reconstruction under adverse surrounding conditions. After reducing the memory requirements of both high-dimensional measurements through a decoupled stochastic self-supervised compression technique, the proposed algorithm exploits similarities and establishes correspondences between both domains at different feature levels during training. Then, at inference time, relying exclusively on radio frequencies, the model successively predicts camera constituents in an autoregressive and self-contained process. These discrete tokens are finally transformed back into an instructive view of the respective surroundings, allowing potential dangers to be perceived visually for important downstream tasks.
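The abstract describes a two-stage pipeline: both sensor streams are first compressed into discrete tokens, and camera tokens are then predicted autoregressively from radar tokens alone. The toy sketch below illustrates that general pattern only; the codebook quantization, the bigram predictor, and all names (`quantize`, `autoregressive_predict`) are illustrative assumptions, not the paper's actual transformer-based model.

```python
import numpy as np

def quantize(features, codebook):
    # Compression step (assumed vector-quantization style): map each
    # continuous feature vector to the index of its nearest codebook
    # entry, yielding a discrete token per measurement.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def autoregressive_predict(radar_tokens, bigram, n_camera_tokens):
    # Greedily emit camera tokens one at a time, each conditioned on the
    # previously emitted token. A stand-in for the paper's learned
    # autoregressive model, which conditions on the full radar context.
    prev = radar_tokens[-1]
    out = []
    for _ in range(n_camera_tokens):
        prev = int(bigram[prev].argmax())  # most likely next token
        out.append(prev)
    return out

# Tiny worked example with a 2-entry codebook and a hand-set bigram table.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
radar_features = np.array([[0.1, -0.1], [0.9, 1.1]])
radar_tokens = quantize(radar_features, codebook)        # -> [0, 1]
bigram = np.array([[0.1, 0.9],   # after token 0, token 1 is most likely
                   [0.8, 0.2]])  # after token 1, token 0 is most likely
camera_tokens = autoregressive_predict(radar_tokens, bigram, 3)
```

In the actual system, the predicted camera tokens would then be passed through the learned decoder to reconstruct an image of the surroundings.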
Ditzel, C., & Dietmayer, K. (2021). GenRadar: Self-supervised probabilistic camera synthesis based on radar frequencies. IEEE Access, 9, 148994–149042. https://doi.org/10.1109/ACCESS.2021.3120202