The World Health Organization estimates that 285 million people worldwide live with severe visual impairment. With the advance of technology, 3D virtual environments are increasingly used in a variety of applications, yet many of these applications remain inaccessible to visually impaired users, creating a digital divide. Our goal is to develop a novel, low-cost 3D interaction technique that allows visually impaired users to identify virtual objects autonomously, using only proprioception and hearing. We developed a prototype implementing this technique and conducted preliminary tests: first with sighted users, only to verify the prototype’s basic functionality and implementation, and then with a blind user, with very promising results, as the user was able to identify virtual objects quickly and correctly.
de Souza Veriscimo, E., & Bernardes, J. L. (2016). Autonomous identification of virtual 3D objects by visually impaired users with proprioception and audio feedback. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9738, pp. 241–250). Springer Verlag. https://doi.org/10.1007/978-3-319-40244-4_23