Scene parsing and fusion-based continuous traversable region formation

Citations: 1 · Readers (Mendeley): 14

Abstract

Determining the categories of different parts of a scene and generating a continuous traversable region map in the physical coordinate system are crucial for autonomous vehicle navigation. This paper presents our efforts in these two aspects for an autonomous vehicle operating in an open-terrain environment. Driven by ideas proposed in our Cognitive Architecture, we have designed novel strategies for the top-down facilitation process to explicitly interpret spatial relationships between objects in the scene, and have incorporated a visual attention mechanism into the image-based scene parsing module. The scene parsing module processes images fast enough for real-time vehicle navigation applications. To alleviate the challenges of using sparse 3D occupancy grids for path planning, we propose an approach that interpolates the category of occupancy-grid cells not hit by the 3D LIDAR, with reference to the aligned image-based scene parsing result, so that a continuous 2½D traversable region map can be formed.
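The fusion idea in the abstract can be illustrated with a minimal sketch: grid cells not hit by the LIDAR take their category from the spatially aligned image-parsing label, producing a dense label grid from which a continuous traversable mask follows. The label codes, function name, and fill rule below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

# Hypothetical label codes; the paper's actual category set is not given here.
UNKNOWN, TRAVERSABLE, OBSTACLE = -1, 0, 1

def fuse_grid(lidar_labels: np.ndarray, image_labels: np.ndarray) -> np.ndarray:
    """Fill occupancy-grid cells not hit by the LIDAR (UNKNOWN) with the
    label of the aligned image-based scene-parsing cell."""
    fused = lidar_labels.copy()
    missing = fused == UNKNOWN
    fused[missing] = image_labels[missing]
    return fused

# Toy 3x3 grid: sparse LIDAR hits, dense image-parsing labels.
lidar = np.array([[TRAVERSABLE, UNKNOWN, OBSTACLE],
                  [UNKNOWN,     UNKNOWN, UNKNOWN],
                  [TRAVERSABLE, UNKNOWN, OBSTACLE]])
image = np.full((3, 3), TRAVERSABLE)
image[:, 2] = OBSTACLE  # image parsing also sees the obstacle column

fused = fuse_grid(lidar, image)
traversable = fused == TRAVERSABLE  # continuous traversable region mask
```

In the paper the grid is 2½D (each cell also carries height information) and the image/LIDAR alignment is computed from calibration; both are omitted here for brevity.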

Citation (APA)

Xiao, X., Ng, G. W., Tan, Y. S., & Chuan, Y. Y. (2015). Scene parsing and fusion-based continuous traversable region formation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9008, pp. 383–398). Springer Verlag. https://doi.org/10.1007/978-3-319-16628-5_28
