Human object recognition and tracking are important in robotics and automation. The Kinect sensor and its SDK provide a reliable human tracking solution as long as a constant line of sight is maintained. However, if the human object is lost from sight during tracking, the existing method cannot recover and resume tracking the previous object correctly. In this paper, a human recognition method is developed based on colour and depth information provided by any RGB-D sensor. In particular, the method first applies a mask based on the sensor's depth information to segment the shirt from the image (shirt segmentation); it then extracts the colour information of the shirt for recognition (shirt recognition). Because the shirt segmentation relies on depth information alone, it is lighting invariant, unlike colour-based segmentation methods. The proposed colour recognition method introduces a confidence-based ruling method to classify matches. The proposed shirt segmentation and colour recognition method is tested using a variety of shirts, with the tracked human at a standstill or moving, under varying lighting conditions. Experiments show that the method can robustly recognize shirts of varying colours and patterns. © 2013 Southwell and Fang; licensee InTech.
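To make the described pipeline concrete, the sketch below shows one way a depth-only shirt mask, a colour descriptor of the masked region, and a confidence-thresholded match could be combined using OpenCV. This is an illustration of the general approach outlined in the abstract, not the authors' implementation: the depth band, histogram settings, similarity measure, and confidence threshold are all assumptions chosen for the example.

```python
import cv2
import numpy as np

def segment_shirt(depth_mm, colour_bgr, person_depth_mm, band_mm=200):
    """Depth-only segmentation: keep pixels whose depth falls within a band
    around the tracked person's torso depth. person_depth_mm and band_mm are
    illustrative parameters, not values from the paper."""
    mask = ((depth_mm > person_depth_mm - band_mm) &
            (depth_mm < person_depth_mm + band_mm)).astype(np.uint8) * 255
    shirt = cv2.bitwise_and(colour_bgr, colour_bgr, mask=mask)
    return shirt, mask

def shirt_histogram(colour_bgr, mask, bins=(30, 32)):
    """Hue/saturation histogram of the masked shirt pixels; HSV is one common
    choice for a colour descriptor that tolerates brightness changes."""
    hsv = cv2.cvtColor(colour_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, list(bins), [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist

def match_shirt(candidate_hist, reference_hist, confidence_threshold=0.7):
    """Confidence-based ruling (illustrative): accept the candidate as the
    previously tracked shirt only if the histogram correlation exceeds a
    confidence threshold."""
    confidence = cv2.compareHist(reference_hist, candidate_hist,
                                 cv2.HISTCMP_CORREL)
    return confidence >= confidence_threshold, confidence
```

The key design point mirrored here is that the segmentation step never consults colour, so the mask is unaffected by lighting changes; colour is used only afterwards, on the already-isolated shirt region, to decide whether the re-detected person matches the previously tracked one.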
CITATION STYLE
Southwell, B. J., & Fang, G. (2013). Human Object Recognition Using Colour and Depth Information from an RGB-D Kinect Sensor. International Journal of Advanced Robotic Systems, 10. https://doi.org/10.5772/55717