As drones and autonomous cars become more widespread, it is increasingly important that robots can operate safely under realistic conditions. Because real systems are fed noisy sensor information, robots must plan navigation using estimates of the environment. Efficiently guaranteeing that the resulting motion plans are safe under these circumstances has proved difficult. We study how to guarantee that a trajectory or policy has collision probability at most ϵ (is ϵ-safe) given only imperfect observations of the environment. We examine the implications of various formalisms of safety and arrive at a mathematical notion of safety for a long-term execution, even when conditioned on observational information. We explore the idea of shadows, which generalize the notion of a confidence set to estimated shapes, and present a theorem relating shadows to their classical statistical counterparts, such as confidence and credible sets. We present efficient algorithms that use shadows to prove that trajectories or policies are safe, with much tighter bounds than in previous work. Notably, the complexity of the environment does not affect our method's ability to evaluate whether a trajectory or policy is safe. We then use these safety-checking methods to design a safe variant of the rapidly-exploring random tree (RRT) planning algorithm.
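To make the shadow idea concrete, the following is a minimal sketch, not the paper's algorithm: it assumes disk obstacles whose centers are Gaussian-distributed with a known isotropic standard deviation, whereas the paper treats more general estimated geometry, and all names (shadow_radius, shadows_for, is_eps_safe, safe_rrt) are hypothetical. The certificate rests on a union bound: if each obstacle Oᵢ lies inside its shadow Sᵢ with probability at least 1 − ϵ/n and the trajectory avoids every Sᵢ, then the total collision probability is at most Σᵢ ϵ/n = ϵ. The same shadows can then serve as the edge validator inside an RRT, so every trajectory the tree returns carries the same certificate.

```python
import math
import random

def shadow_radius(sigma, delta):
    """Radius r such that an isotropic 2-D Gaussian point (std sigma)
    lies within r of its mean with probability at least 1 - delta;
    from the Rayleigh tail P(||c - mu|| > r) = exp(-r^2 / (2 sigma^2))."""
    return sigma * math.sqrt(2.0 * math.log(1.0 / delta))

def seg_point_dist(a, b, p):
    """Euclidean distance from point p to segment ab."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    if l2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / l2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def shadows_for(obstacles, eps):
    """Split the risk budget evenly: each obstacle gets a disk shadow
    that contains the true obstacle with probability >= 1 - eps/n."""
    n = len(obstacles)
    return [(c, r + shadow_radius(sigma, eps / n)) for c, r, sigma in obstacles]

def is_eps_safe(waypoints, obstacles, eps):
    """Certify P(collision) <= eps for a polyline trajectory: if every
    segment avoids every shadow, the union bound over the n obstacles
    gives P(collision) <= n * (eps / n) = eps."""
    shadows = shadows_for(obstacles, eps)
    return all(seg_point_dist(a, b, c) > rad
               for a, b in zip(waypoints, waypoints[1:])
               for c, rad in shadows)

def safe_rrt(start, goal, obstacles, eps, bounds, step=0.5, iters=2000, tol=0.5):
    """RRT that only adds edges avoiding the precomputed shadows, so any
    returned root-to-goal trajectory is eps-safe by the same argument."""
    shadows = shadows_for(obstacles, eps)
    edge_ok = lambda a, b: all(seg_point_dist(a, b, c) > rad for c, rad in shadows)
    (xlo, ylo), (xhi, yhi) = bounds
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Sample with a 10% goal bias, then steer from the nearest node.
        s = goal if random.random() < 0.1 else (random.uniform(xlo, xhi),
                                                random.uniform(ylo, yhi))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], s))
        d = math.dist(nodes[i], s)
        if d == 0.0:
            continue
        t = min(1.0, step / d)
        new = (nodes[i][0] + t * (s[0] - nodes[i][0]),
               nodes[i][1] + t * (s[1] - nodes[i][1]))
        if not edge_ok(nodes[i], new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < tol and edge_ok(new, goal):
            path, k = [goal], len(nodes) - 1
            while k is not None:          # walk parents back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no certified path found within the iteration budget

# Two uncertain disk obstacles: (estimated center, radius, center std).
obs = [((5.0, 3.0), 1.0, 0.2), ((8.0, -4.0), 1.5, 0.3)]
print(is_eps_safe([(0.0, 0.0), (10.0, 0.0)], obs, eps=0.01))  # True
print(safe_rrt((0.0, 0.0), (10.0, 0.0), obs, eps=0.01,
               bounds=((-1.0, -6.0), (11.0, 6.0))))
```

One design choice in this sketch worth noting: risk is allocated over obstacles rather than over waypoints or time steps, so the shadows are fixed before planning begins and refining the trajectory discretization or growing the tree never consumes additional risk budget.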
Axelrod, B., Kaelbling, L. P., & Lozano-Pérez, T. (2018). Provably safe robot navigation with obstacle uncertainty. International Journal of Robotics Research, 37(13–14), 1760–1774. https://doi.org/10.1177/0278364918778338