Autonomous robots in real-world applications have to deal with complex 3D environments, but are often equipped only with standard 2D laser range finders (LRFs). By using the 2D LRF both for 2D localization and mapping (which can be done efficiently and precisely) and for 3D obstacle detection (which lets the robot move safely), a completely autonomous robot can be built with affordable 2D LRFs. We use the 2D LRF to perform particle-filter-based SLAM to generate a 2D occupancy grid, and the same LRF (moved by two servo motors) to acquire 3D scans that detect obstacles not visible in the 2D scans. The 3D data is analyzed with a recursive principal component analysis (PCA) based method, and the detected obstacles are recorded in a separate obstacle map. This obstacle map and the occupancy map are merged for path planning. Our solution was tested on our mobile system Robbie during the RoboCup Rescue competitions in 2008 and 2009, winning the mapping challenge at the 2008 world championship and at the 2009 German Open. This shows that the benefit of a sensor can be dramatically increased by controlling it actively, and that mixed 2D/3D perception can be achieved efficiently with a standard 2D sensor. © 2011 Springer-Verlag.
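The abstract describes merging the 3D-derived obstacle map with the 2D occupancy grid before path planning. A minimal sketch of one plausible merge strategy (not the authors' actual implementation): represent both maps as same-sized grids of occupancy values and combine them cell-wise with a maximum, so an obstacle seen in either map blocks the planner. The function name and grid encoding here are illustrative assumptions.

```python
# Hypothetical sketch: merge a 2D occupancy grid (from 2D SLAM) with a
# separate obstacle map (from actively acquired 3D scans) for path planning.
# Cell encoding (assumed): 0.0 = free, 1.0 = occupied.

def merge_maps(occupancy, obstacles):
    """Combine two same-sized grids cell-wise with max(), so a cell is
    blocked if either the 2D occupancy grid or the 3D obstacle map marks it."""
    assert len(occupancy) == len(obstacles), "grids must have equal height"
    merged = []
    for occ_row, obs_row in zip(occupancy, obstacles):
        assert len(occ_row) == len(obs_row), "grids must have equal width"
        merged.append([max(o, b) for o, b in zip(occ_row, obs_row)])
    return merged

if __name__ == "__main__":
    occupancy = [[0.0, 0.0],
                 [1.0, 0.0]]  # wall detected by the 2D SLAM scans
    obstacles = [[0.0, 1.0],
                 [0.0, 0.0]]  # low obstacle visible only in the 3D scans
    print(merge_maps(occupancy, obstacles))  # [[0.0, 1.0], [1.0, 0.0]]
```

A conservative max-merge is one simple choice; a real system might instead fuse log-odds or keep the maps separate with different planner costs.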
Pellenz, J., Neuhaus, F., Dillenberger, D., Gossow, D., & Paulus, D. (2011). Mixed 2D/3D perception for autonomous robots in unstructured environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6556 LNAI, pp. 303–313). https://doi.org/10.1007/978-3-642-20217-9_26