Vision-based behavior prediction in urban traffic environments by scene categorization

Abstract

We propose a method for vision-based scene understanding in urban traffic environments that predicts the appropriate behavior of a human driver in a given visual scene. The method relies on a decomposition of the visual scene into its constituent objects by image segmentation and uses segmentation-based features that represent both their identity and spatial properties. We show how behavior prediction can be naturally formulated as a scene categorization problem, and how ground-truth behavior data for learning a classifier can be automatically generated from any monocular video sequence recorded from a moving vehicle, using structure-from-motion techniques. We evaluate our method both quantitatively and qualitatively on the recently proposed CamVid dataset, predicting the appropriate velocity and yaw rate of the car, as well as their appropriate change, for both day and dusk sequences. In particular, we investigate the impact of the underlying segmentation and of the number of behavior classes on the quality of these predictions. © 2010. The copyright of this document resides with its authors.
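The pipeline the abstract describes — segmentation-based features capturing object identity and spatial layout, continuous driving signals quantized into discrete behavior classes, and a classifier mapping features to classes — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the band-wise class histogram, the velocity bin edges, and the nearest-centroid classifier are all placeholders chosen for simplicity.

```python
import numpy as np

def segmentation_features(label_map, n_classes, n_rows=3):
    """Per-band class histograms over a segmentation label map.
    Splitting the image into horizontal bands keeps a coarse notion of
    *where* each object class appears (identity + spatial properties).
    The band count and normalization are illustrative assumptions."""
    bands = np.array_split(label_map, n_rows, axis=0)
    feats = []
    for band in bands:
        hist = np.bincount(band.ravel(), minlength=n_classes).astype(float)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

def behavior_class(velocity, bin_edges):
    """Quantize a continuous ego-motion signal (e.g. velocity or yaw
    rate, recovered via structure from motion) into a behavior class.
    The bin edges are a free parameter; the paper studies how the
    number of classes affects prediction quality."""
    return int(np.digitize(velocity, bin_edges))

class NearestCentroid:
    """Minimal stand-in classifier: predicts the behavior class whose
    mean feature vector is closest to the query scene."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In this framing, "behavior prediction" is exactly scene categorization: each frame's segmentation yields a feature vector, the automatically derived velocity/yaw-rate class is its label, and any multi-class classifier can be trained on the resulting pairs.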

Citation (APA)

Heracles, M., Martinelli, F., & Fritsch, J. (2010). Vision-based behavior prediction in urban traffic environments by scene categorization. In British Machine Vision Conference, BMVC 2010 - Proceedings. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.24.71
