Detecting humans in 2D thermal images by generating 3D models

Abstract

Standard computer-vision approaches to detecting humans face two significant challenges. First, scenarios in which the poses and postures of the humans are completely unpredictable. Second, situations in which there are many occlusions, i.e., only parts of the body are visible. Here, a novel approach to perception is presented in which a complete 3D scene model is learned on the fly to represent a 2D snapshot. An evolutionary algorithm (EA) generates pieces of 3D code that are rendered, and the resulting images are compared to the current camera picture via an image-similarity function. Based on the feedback of this fitness function, a crude but very fast online evolution generates an approximate 3D model of the environment in which non-human objects are represented by boxes. The key point is that 3D models of humans are available to the EA as code snippets, which it can use to represent human shapes, or portions of them, if they appear in the image. Results from experiments with real-world data from a search and rescue application using a thermal camera are presented. © Springer-Verlag Berlin Heidelberg 2007.
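
The core loop described above can be illustrated with a minimal sketch; it is not the authors' implementation. Candidate scene models built from box primitives and a crude human template are rendered to a 2D image, scored against the current camera frame by an image-similarity fitness, and evolved online by mutation and truncation selection. The renderer, the human template, the similarity measure, and all parameters below are illustrative assumptions.

# Sketch only: primitives, renderer, fitness, and parameters are assumptions.
import random
import numpy as np

H, W = 64, 64                       # assumed thermal-image resolution
HUMAN = np.zeros((20, 8))           # hypothetical stand-in for a rendered 3D human model
HUMAN[2:, 2:6] = 1.0                # crude head/torso silhouette

def random_primitive():
    """Return a box or a human-template primitive placed fully inside the image."""
    if random.random() < 0.3:
        return {"kind": "human",
                "y": random.randrange(H - HUMAN.shape[0]),
                "x": random.randrange(W - HUMAN.shape[1])}
    h, w = random.randint(4, 20), random.randint(4, 20)
    return {"kind": "box", "h": h, "w": w,
            "y": random.randrange(H - h), "x": random.randrange(W - w)}

def render(genome):
    """Rasterise a scene genome (list of primitives) into a 2D grey-level image."""
    img = np.zeros((H, W))
    for p in genome:
        if p["kind"] == "box":
            img[p["y"]:p["y"] + p["h"], p["x"]:p["x"] + p["w"]] = 1.0
        else:
            th, tw = HUMAN.shape
            region = img[p["y"]:p["y"] + th, p["x"]:p["x"] + tw]
            img[p["y"]:p["y"] + th, p["x"]:p["x"] + tw] = np.maximum(region, HUMAN)
    return img

def fitness(genome, camera):
    """Image-similarity fitness: negative pixel-wise L1 distance (higher is better)."""
    return -np.abs(render(genome) - camera).sum()

def mutate(genome):
    """Replace a random primitive or append a new one."""
    g = [dict(p) for p in genome]
    if random.random() < 0.5:
        g[random.randrange(len(g))] = random_primitive()
    else:
        g.append(random_primitive())
    return g

def evolve(camera, pop_size=20, generations=200):
    """Crude online evolution: truncation selection plus mutation."""
    pop = [[random_primitive()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, camera), reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda g: fitness(g, camera))

if __name__ == "__main__":
    frame = np.zeros((H, W))        # synthetic "thermal frame" with one warm, human-shaped blob
    frame[30:50, 40:48] = HUMAN
    best = evolve(frame)
    print([p["kind"] for p in best])  # a "human" entry marks a detected human shape

If the loop converges, a "human" primitive placed near the warm blob indicates a detected human shape; with real thermal frames, the toy rasteriser and L1 comparison would be replaced by a proper 3D renderer and a more robust image-similarity measure.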

Citation (APA)

Markov, S., & Birk, A. (2007). Detecting humans in 2D thermal images by generating 3D models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4667 LNAI, pp. 293–307). Springer Verlag. https://doi.org/10.1007/978-3-540-74565-5_23
