Image labeling tools help extract objects from images for use as ground truth when training and testing object detection systems. The inputs for such tools are usually RGB images. However, with widely available low-cost sensors such as the Microsoft Kinect, it is possible to use depth images in addition to RGB images. Although many powerful tools exist for image labeling, there is a need for tools adapted to RGB-depth data. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-depth using Fuzzy C-Means clustering, connected component labeling, and superpixels, and generates bounding pixels to extract the desired objects. Second, it minimizes the interaction time needed for object extraction by performing an efficient segmentation in RGB-depth space. Very few clicks are needed for the entire procedure compared to existing tools. When the desired object is the closest object to the camera, which is often the case in robotics applications, no clicks at all are required to accurately extract the object.
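The sketch below is a minimal illustration, not the authors' implementation, of the pipeline the abstract outlines: pixels are clustered by Fuzzy C-Means on combined RGB-depth features, each cluster is split into spatially connected components, and the component closest to the camera is returned as a bounding box (mirroring the zero-click case mentioned above). The superpixel refinement step is omitted, and names such as `closest_object_bbox`, the `depth_weight` scaling, and the minimum-component-size threshold are assumptions for illustration only.

```python
# Illustrative sketch only (assumptions noted above); not the authors' code.
import numpy as np
from scipy import ndimage


def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=50, seed=0):
    """Plain NumPy Fuzzy C-Means; X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, X.shape[0]))
    u /= u.sum(axis=0)                        # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        # Squared distance from every sample to every cluster centre.
        d = np.maximum(((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1), 1e-12)
        u = d ** (-1.0 / (m - 1))             # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u


def closest_object_bbox(rgb, depth, n_clusters=4, depth_weight=3.0):
    """Return (row_min, row_max, col_min, col_max) of the object nearest the camera.

    rgb   : (H, W, 3) float array in [0, 1]
    depth : (H, W)    float array (depth in metres is an assumption)
    """
    h, w = depth.shape
    # Stack colour and (weighted) depth into one feature vector per pixel.
    feats = np.concatenate(
        [rgb.reshape(-1, 3), depth_weight * depth.reshape(-1, 1)], axis=1
    )
    _, u = fuzzy_cmeans(feats, n_clusters)
    labels = u.argmax(axis=0).reshape(h, w)   # hard assignment per pixel

    best_bbox, best_depth = None, np.inf
    for c in range(n_clusters):
        # Split each fuzzy cluster into spatially connected components.
        comps, n_comps = ndimage.label(labels == c)
        for i in range(1, n_comps + 1):
            mask = comps == i
            if mask.sum() < 200:              # ignore tiny fragments (assumed threshold)
                continue
            mean_d = depth[mask].mean()
            if mean_d < best_depth:           # keep the component closest to the camera
                rows, cols = np.where(mask)
                best_bbox = (rows.min(), rows.max(), cols.min(), cols.max())
                best_depth = mean_d
    return best_bbox
```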
CITATION STYLE
Pordel, M., & Hellström, T. (2015). Semi-automatic image labelling using depth information. Computers, 4(2), 142–154. https://doi.org/10.3390/computers4020142