Designing a virtual environment to evaluate multimodal sensors for assisting the visually impaired

Abstract

We describe how to design a virtual environment using Microsoft Robotics Developer Studio to evaluate multimodal sensors for assisting visually impaired people in daily tasks such as navigation and orientation. The work focuses on the design of the interfaces of sensors and stimulators in the virtual environment for future subject experimentation. We discuss the types of sensors we have simulated and define some non-classical interfaces for interacting with the environment and receiving feedback from it. We also present preliminary feasibility results from experiments with volunteer test subjects, and conclude with a discussion of potential future directions. © 2012 Springer-Verlag.
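The setup described above pairs simulated sensors in the virtual environment with stimulators that relay readings back to the user. As a rough conceptual sketch only (this is not the paper's code nor the Microsoft Robotics Developer Studio API; the names RangeSensor and VibrotactileStimulator are hypothetical), the example below shows one way a simulated range reading could be mapped to vibrotactile feedback intensity, the kind of sensor-to-stimulator coupling such an environment is meant to evaluate:

```python
# Hypothetical illustration only: RangeSensor and VibrotactileStimulator are invented
# names, not from the paper or from Microsoft Robotics Developer Studio. The sketch
# shows a generic sensor-to-stimulator mapping a virtual environment could exercise.

from dataclasses import dataclass
import random


@dataclass
class RangeSensor:
    """Simulated range sensor returning the distance to the nearest obstacle (meters)."""
    max_range: float = 5.0

    def read(self) -> float:
        # Stand-in for a query against the virtual environment's physics scene.
        return random.uniform(0.2, self.max_range)


@dataclass
class VibrotactileStimulator:
    """Simulated stimulator: vibration intensity in [0, 1], stronger for closer obstacles."""
    intensity: float = 0.0

    def drive(self, distance: float, max_range: float) -> None:
        # Linear inverse mapping: full intensity at contact, zero at maximum range.
        self.intensity = max(0.0, min(1.0, 1.0 - distance / max_range))


if __name__ == "__main__":
    sensor = RangeSensor()
    stimulator = VibrotactileStimulator()
    for step in range(5):
        d = sensor.read()
        stimulator.drive(d, sensor.max_range)
        print(f"step {step}: distance={d:.2f} m -> vibration intensity={stimulator.intensity:.2f}")
```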

Citation (APA)

Khoo, W. L., Seidel, E. L., & Zhu, Z. (2012). Designing a virtual environment to evaluate multimodal sensors for assisting the visually impaired. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7383 LNCS, pp. 573–580). https://doi.org/10.1007/978-3-642-31534-3_84
