Abstract
Robust and accurate trajectory estimation of mobile agents such as people and robots is a key requirement for providing spatial awareness for emerging capabilities such as augmented reality or autonomous interaction. Although this area is currently dominated by optical techniques, e.g., visual-inertial odometry, these suffer from challenges with scene illumination or featureless surfaces. As an alternative, we propose milliEgo, a novel deep-learning approach to robust egomotion estimation that exploits the capabilities of low-cost mmWave radar. Although mmWave radar has a fundamental advantage over monocular cameras of being metric, i.e., providing absolute scale or depth, current single-chip solutions have limited and sparse imaging resolution, making existing point-cloud registration techniques brittle. First, we propose a new architecture that is optimized for solving this challenging pose transformation problem. Second, to robustly fuse mmWave pose estimates with additional sensors, e.g., inertial or visual sensors, we introduce a mixed attention approach to deep fusion. Through extensive experiments, we demonstrate that our proposed system achieves 1.3% 3D error drift and generalizes well to unseen environments. We also show that the neural architecture can be made highly efficient and suitable for real-time embedded applications.
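To make the "mixed attention" fusion idea concrete, the sketch below shows one plausible reading of it: attention masks computed from both modalities gate (re-weight) each modality's features before a pose regressor. This is a minimal illustrative sketch, not the authors' implementation; the module and parameter names (MixedAttentionFusion, radar_dim, imu_dim, pose_dim) and the encoder feature sizes are assumptions.

```python
import torch
import torch.nn as nn

class MixedAttentionFusion(nn.Module):
    """Hypothetical cross-modal gating block for radar + inertial features."""

    def __init__(self, radar_dim: int, imu_dim: int, pose_dim: int = 6):
        super().__init__()
        fused_dim = radar_dim + imu_dim
        # Each mask is conditioned on the concatenated features of both sensors,
        # so either modality can be down-weighted when it is unreliable
        # (e.g., sparse radar returns or drifting inertial readings).
        self.radar_mask = nn.Sequential(nn.Linear(fused_dim, radar_dim), nn.Sigmoid())
        self.imu_mask = nn.Sequential(nn.Linear(fused_dim, imu_dim), nn.Sigmoid())
        # Simple head regressing a 6-DoF relative pose
        # (3 translation + 3 rotation parameters).
        self.pose_head = nn.Sequential(
            nn.Linear(fused_dim, 128), nn.ReLU(), nn.Linear(128, pose_dim)
        )

    def forward(self, radar_feat: torch.Tensor, imu_feat: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([radar_feat, imu_feat], dim=-1)
        radar_gated = radar_feat * self.radar_mask(joint)
        imu_gated = imu_feat * self.imu_mask(joint)
        return self.pose_head(torch.cat([radar_gated, imu_gated], dim=-1))

# Toy usage with per-frame feature vectors from hypothetical radar and IMU encoders.
fusion = MixedAttentionFusion(radar_dim=256, imu_dim=128)
delta_pose = fusion(torch.randn(1, 256), torch.randn(1, 128))
print(delta_pose.shape)  # torch.Size([1, 6])
```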
Citation
Lu, C. X., Saputra, M. R. U., Zhao, P., Almalioglu, Y., De Gusmao, P. P. B., Chen, C., … Markham, A. (2020). MilliEgo: Single-chip mmWave radar aided egomotion estimation via deep sensor fusion. In SenSys 2020 - Proceedings of the 2020 18th ACM Conference on Embedded Networked Sensor Systems (pp. 109–122). Association for Computing Machinery, Inc. https://doi.org/10.1145/3384419.3430776