Understanding movement and interaction: An ontology for Kinect-based 3D depth sensors


Abstract

Microsoft Kinect has attracted great attention from research communities, resulting in numerous interaction and entertainment applications. However, to the best of our knowledge, no ontology exists for 3D depth sensors. Including automated semantic reasoning in these settings would open the door to new research, making it possible not only to track but also to understand what the user is doing. We took a first step towards this new paradigm and developed a 3D depth sensor ontology, modelling different features regarding user movement and object interaction. We believe in the potential of integrating semantics into computer vision. As 3D depth sensors and ontology-based applications improve further, the ontology could be used, for instance, for activity recognition, together with semantic maps for supporting visually impaired people, or in assistive technologies such as remote rehabilitation. © Springer International Publishing 2013.
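To make the abstract's idea concrete, the sketch below shows one way an ontology over Kinect skeleton data could let an application "understand" rather than merely track movement. All class and joint names here (`Joint`, `Skeleton`, `infer_posture`, `"HandRaised"`) are our own illustrative assumptions, not the concepts defined in the paper's ontology.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: skeleton joints as instances of ontology-like
# concepts, plus one toy semantic rule that classifies a posture from
# tracked joint positions. The paper's actual ontology classes differ.

@dataclass
class Joint:
    name: str   # e.g. "Head", "HandRight" (Kinect-style joint labels)
    x: float
    y: float    # vertical axis, metres
    z: float

@dataclass
class Skeleton:
    joints: dict = field(default_factory=dict)

    def add(self, joint: Joint) -> None:
        self.joints[joint.name] = joint

def infer_posture(skel: Skeleton) -> str:
    """Toy reasoning rule: a right hand above the head implies 'HandRaised'."""
    head = skel.joints.get("Head")
    hand = skel.joints.get("HandRight")
    if head and hand and hand.y > head.y:
        return "HandRaised"
    return "Unknown"

skel = Skeleton()
skel.add(Joint("Head", 0.0, 1.6, 2.0))
skel.add(Joint("HandRight", 0.3, 1.8, 2.0))
print(infer_posture(skel))  # -> "HandRaised"
```

In a real system such rules would live in the ontology (e.g. as OWL class definitions or SWRL rules) and be evaluated by a reasoner rather than hand-coded in Python; the point is only that symbolic concepts layered over raw joint coordinates enable this kind of inference.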

Citation (APA)

Díaz Rodríguez, N., Wikström, R., Lilius, J., Pegalajar Cuéllar, M., & Delgado Calvo Flores, M. (2013). Understanding movement and interaction: An ontology for Kinect-based 3D depth sensors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8276 LNCS, pp. 254–261). https://doi.org/10.1007/978-3-319-03176-7_33
