ViDRILO: The visual and depth robot indoor localization with objects information dataset

Abstract

In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated not only with the semantic category of the scene, but also with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. The ViDRILO dataset is released as a benchmark for problems such as multimodal place classification, object recognition, 3D reconstruction, and point cloud data compression.
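To make the annotation scheme concrete, the following is a minimal sketch of how one frame record could be modeled in Python. The field names and file layout are illustrative assumptions, not the dataset's actual schema or tooling: each frame pairs an image and a point cloud with a scene category and per-object presence flags.

```python
from dataclasses import dataclass, field

@dataclass
class ViDRILOFrame:
    """Hypothetical record for one annotated ViDRILO frame (names assumed)."""
    sequence: int        # which of the five sequences the frame belongs to
    frame_id: int        # frame index within the sequence
    image_path: str      # perspective image file
    cloud_path: str      # point cloud file for the same scene
    category: str        # semantic category of the scene (e.g. "corridor")
    # presence/absence annotation for the list of predefined objects
    objects: dict = field(default_factory=dict)

    def visible_objects(self):
        """Names of predefined objects annotated as present in this frame."""
        return sorted(name for name, present in self.objects.items() if present)

# Example usage with made-up paths and labels:
frame = ViDRILOFrame(
    sequence=1, frame_id=42,
    image_path="seq1/img042.png", cloud_path="seq1/cloud042.pcd",
    category="corridor",
    objects={"extinguisher": True, "printer": False, "screen": True},
)
print(frame.visible_objects())  # ['extinguisher', 'screen']
```

A flat record like this supports both tasks the dataset targets: `category` is the label for place classification, while `objects` provides the binary targets for object recognition.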

Citation (APA)

Martínez-Gómez, J., García-Varea, I., Cazorla, M., & Morell, V. (2015). ViDRILO: The visual and depth robot indoor localization with objects information dataset. International Journal of Robotics Research, 34(14), 1681–1687. https://doi.org/10.1177/0278364915596058
