Detecting and tracking people using 2D laser rangefinders (LRFs) is challenging due to the characteristics of human leg motion, high levels of self-occlusion, and the presence of objects that resemble human legs. Previous approaches rely on datasets that are manually labelled with the support of images of the scenes. We propose a system based on a calibrated monocular camera and a 2D LRF mounted on a mobile robot that automatically labels a dataset of leg patterns, which is then used to build a robust and efficient 2D LRF-based people detector and tracker. First, both images and 2D laser data are recorded while the robot navigates indoor environments. Second, the detection boxes and keypoints produced by a deep learning-based object detector are used to locate people and their legs in the images. The 2D laser coordinate frame is extrinsically calibrated with respect to the camera coordinate frame, allowing the system to automatically label the leg instances. The automatically labelled dataset is then used to train a leg detector with machine learning techniques. To validate the proposal, the leg detector is used to build a Kalman filter-based people detection and tracking algorithm, which is experimentally assessed. The experiments show that the proposed system outperforms Angus Leigh's detector and tracker, considered the state of the art in 2D LRF-based people detection and tracking.
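The core of the automatic labelling step described above is projecting 2D laser points into the image through the camera-LRF extrinsic calibration and marking as leg instances those laser points that fall near the leg keypoints of a detected person. Below is a minimal sketch of that projection-and-labelling idea; the calibration values, keypoint format, and distance threshold are illustrative assumptions, not the paper's actual parameters or detector.

```python
import numpy as np

def project_laser_to_image(ranges, angles, R, t, K):
    """Project 2D LRF points (range, bearing) into pixel coordinates.

    R, t: assumed extrinsic rotation/translation from laser frame to camera frame.
    K: 3x3 intrinsic matrix of the calibrated monocular camera.
    """
    # Laser points in the laser frame (z = 0 on the scanning plane).
    pts_laser = np.stack([ranges * np.cos(angles),
                          ranges * np.sin(angles),
                          np.zeros_like(ranges)], axis=1)
    # Transform into the camera frame and project with the pinhole model.
    pts_cam = pts_laser @ R.T + t
    in_front = pts_cam[:, 2] > 0.0          # keep points in front of the camera
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, in_front

def label_leg_points(uv, valid, leg_keypoints_px, radius_px=25.0):
    """Label laser points whose projection lies near a detected leg keypoint.

    leg_keypoints_px: (M, 2) pixel positions of leg keypoints from the person
    detector (hypothetical format and threshold).
    """
    labels = np.zeros(uv.shape[0], dtype=bool)
    for kp in leg_keypoints_px:
        d = np.linalg.norm(uv - kp, axis=1)
        labels |= valid & (d < radius_px)
    return labels

if __name__ == "__main__":
    # Toy example with made-up calibration and a single detected keypoint.
    angles = np.linspace(-np.pi / 4, np.pi / 4, 181)
    ranges = np.full_like(angles, 2.0)
    R = np.array([[0, -1, 0], [0, 0, -1], [1, 0, 0]], float)  # laser x-axis -> camera z-axis
    t = np.array([0.0, 0.1, 0.0])
    K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
    uv, valid = project_laser_to_image(ranges, angles, R, t, K)
    labels = label_leg_points(uv, valid, np.array([[320.0, 240.0]]))
    print(f"{labels.sum()} laser points labelled as leg candidates")
```

The labelled laser points would then feed the machine learning stage that trains the leg detector; how segments are formed and which classifier is used are details of the paper not reproduced here.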
CITATION STYLE
Aguirre, E., & García-Silvente, M. (2023). Detecting and tracking using 2D laser range finders and deep learning. Neural Computing and Applications, 35(1), 415–428. https://doi.org/10.1007/s00521-022-07765-6