Vision-aided inertial navigation for precise planetary landing: Analysis and experiments

Abstract

In this paper, we present the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing applications. The system employs tight integration of inertial and visual feature measurements to compute accurate estimates of the lander's terrain-relative position, attitude, and velocity in real time. Two types of features are considered: mapped landmarks, i.e., features whose global 3D positions can be determined from a surface map, and opportunistic features, i.e., features that can be tracked in consecutive images, but whose 3D positions are not known. Both types of features are processed in an extended Kalman filter (EKF) estimator and are optimally fused with measurements from an inertial measurement unit (IMU). Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve upon the current state of the art for non-vision-based EDL navigation, and meet the requirements of future planetary exploration missions.
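The abstract's core idea, fusing IMU propagation with landmark-based updates in an EKF, can be illustrated with a minimal sketch. This is not the authors' filter: the 6-state layout (position, velocity only), the time step, the noise values, and the relative-position measurement model are all simplifying assumptions made here for illustration; the paper's estimator additionally tracks attitude and uses camera projections of features.

```python
import numpy as np

# Illustrative sketch only: one EKF propagate/update cycle for a simplified
# 6-state lander model x = [position (3), velocity (3)]. All parameters below
# are assumptions for this example, not values from the paper.

dt = 0.1  # propagation interval [s], assumed

# Constant-velocity kinematics driven by IMU-measured acceleration.
F = np.eye(6)
F[0:3, 3:6] = dt * np.eye(3)

def propagate(x, P, accel, Q):
    """IMU-driven propagation: integrate the measured acceleration."""
    x_new = F @ x
    x_new[3:6] += dt * accel
    P_new = F @ P @ F.T + Q
    return x_new, P_new

def update_mapped_landmark(x, P, z, landmark_pos, R):
    """EKF update with a mapped landmark of known global 3D position.

    Assumed measurement model: z = landmark_pos - lander_position + noise,
    so the Jacobian is H = [-I3, 0]. A real camera yields bearing-only
    projections, which the paper's filter handles instead."""
    H = np.hstack([-np.eye(3), np.zeros((3, 3))])
    z_pred = landmark_pos - x[0:3]
    S = H @ P @ H.T + R          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ (z - z_pred)
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

# Usage: propagate with a downward acceleration, then fuse one landmark.
x = np.zeros(6)
P = np.eye(6)
x, P = propagate(x, P, np.array([0.0, 0.0, -1.0]), 1e-3 * np.eye(6))
x, P = update_mapped_landmark(x, P,
                              z=np.array([10.0, 0.0, 100.0]),
                              landmark_pos=np.array([10.0, 0.0, 100.0]),
                              R=0.01 * np.eye(3))
```

Opportunistic features, whose 3D positions are unknown, cannot be fused this way; the paper handles them by constraining relative motion across consecutive images rather than absolute position.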

Citation (APA)

Mourikis, A. I., Trawny, N., Roumeliotis, S. I., Johnson, A., & Matthies, L. (2008). Vision-aided inertial navigation for precise planetary landing: Analysis and experiments. In Robotics: Science and Systems (Vol. 3, pp. 145–152). Massachusetts Institute of Technology. https://doi.org/10.15607/rss.2007.iii.019
