This work investigates robust vision-based human tracking by a human-following robot using point-based features such as SURF. The problem is challenging because tracking fails under variations in illumination, pose, and scale, as well as camera motion and partial or full occlusion. While point-based features provide detection that is robust to photometric and geometric distortions, tracking these features over subsequent frames is difficult: the number of matching points between a pair of images drops quickly with even slight changes in target appearance caused by the above variations. Robust human tracking is achieved through a multi-tracker fusion framework that combines multiple trackers to ensure long-term tracking of the target. The framework also maintains a dynamic template pool of target features that is updated over time. The interaction between the first two trackers updates the template pool of target features, while the third tracker estimates the target's location under full occlusion. The framework is demonstrated by combining a SURF-based mean-shift tracker, an optical-flow tracker, and a Kalman filter to provide robust tracking over long durations. The efficacy of the resulting tracker is demonstrated through rigorous testing on a variety of video datasets.
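The control flow the abstract describes can be sketched as a per-frame decision: prefer the feature-based tracker (and refresh the template pool when it succeeds), fall back to the optical-flow tracker, and coast on a Kalman motion-model prediction during full occlusion. The sketch below is a minimal illustration under assumed interfaces, not the paper's implementation: the names `Kalman2D` and `fuse_step`, the confidence thresholds, and the fixed-size template pool are all hypothetical, and the real system operates on SURF keypoints and image patches rather than bare 2-D positions.

```python
import numpy as np


class Kalman2D:
    """Constant-velocity Kalman filter over (x, y) image coordinates.

    A generic textbook filter standing in for the paper's third tracker."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])          # state: x, y, vx, vy
        self.P = np.eye(4)                             # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # measure position only
        self.Q = q * np.eye(4)                          # process noise
        self.R = r * np.eye(2)                          # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]


def fuse_step(kf, feat_est, flow_est, template_pool,
              tau_feat=0.5, tau_flow=0.3, pool_size=10):
    """One frame of the fusion logic (hypothetical thresholds tau_feat, tau_flow).

    feat_est / flow_est are (position, confidence) pairs, or None if that
    tracker produced no output this frame."""
    kf.predict()
    if feat_est is not None and feat_est[1] >= tau_feat:
        pos = kf.update(feat_est[0])
        template_pool.append(feat_est[0])   # refresh template pool (placeholder:
        if len(template_pool) > pool_size:  # the paper stores target features)
            template_pool.pop(0)
        return pos, "feature"
    if flow_est is not None and flow_est[1] >= tau_flow:
        return kf.update(flow_est[0]), "flow"
    return kf.x[:2], "kalman"               # full occlusion: trust motion model
```

A typical call sequence feeds each frame's tracker outputs to `fuse_step`; the returned mode string shows which tracker drove the estimate, e.g. `fuse_step(kf, None, None, pool)` during a full occlusion returns the Kalman prediction with mode `"kalman"`.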
Gupta, M., Kumar, S., Behera, L., & Subramanyam, V. K. (2017). A novel fusion framework for robust human tracking by a service robot. Robotics and Autonomous Systems, 94, 134–147. https://doi.org/10.1016/j.robot.2017.05.001