EdgeSonic: Image Feature Sonification for the Visually Impaired

  • Yoshida T
  • Kitani K
  • Belongie S
  • Schlei K

Abstract

We propose a framework to aid a visually impaired user in recognizing objects in an image by sonifying image edge features and distance-to-edge maps. Visually impaired people usually touch objects to recognize their shape. However, it is difficult to recognize objects printed on flat surfaces, or objects that can only be viewed from a distance, through the haptic senses alone. Our ultimate goal is to aid a visually impaired user in recognizing basic object shapes by transposing them to aural information. Our proposed method provides two types of image sonification: (1) local edge gradient sonification and (2) sonification of the distance to the closest image edge. Our method was implemented on a touch-panel mobile device, which allows the user to aurally explore image content by sliding a finger across the image on the touch screen. Preliminary experiments show that the combination of local edge gradient sonification and distance-to-edge sonification is effective for understanding basic line drawings. Furthermore, our tests show a significant improvement in image understanding with the introduction of proper user training.
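The two sonification modes described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes a Sobel edge detector and a Euclidean distance transform (the paper's exact edge detector and audio mapping are not specified here), and the pitch mapping in `sonify` is a hypothetical example of how distance-to-edge could drive a tone.

```python
import numpy as np
from scipy import ndimage

def edge_map(img, thresh=0.25):
    """Binary edge map plus local gradient angle, using a Sobel filter
    as a stand-in edge detector (assumption, not the paper's method)."""
    gx = ndimage.sobel(img.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(img.astype(float), axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-9                        # normalize to [0, 1]
    return mag > thresh, np.arctan2(gy, gx)

def distance_to_edge(edges):
    """Distance from each pixel to the closest edge pixel; this is the
    'distance-to-edge map' that steers the finger toward contours."""
    return ndimage.distance_transform_edt(~edges)

def sonify(dist, angle, y, x, f_min=220.0, f_max=880.0, d_max=50.0):
    """Hypothetical audio mapping: pitch rises as the touch position
    (y, x) approaches an edge; the local gradient angle is returned so
    a second audio parameter (e.g. timbre or pan) could encode edge
    orientation, as in the paper's local-gradient sonification."""
    d = min(dist[y, x], d_max)
    freq = f_max - (f_max - f_min) * d / d_max     # far from edge -> low pitch
    return freq, angle[y, x]
```

For a simple line drawing (e.g. a dark/light step), a touch near the contour yields a higher frequency than a touch in a blank region, which is the cue the user follows while exploring the touch screen.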

Citation (APA)
Yoshida, T., Kitani, K. M., Belongie, S., & Schlei, K. (2011). EdgeSonic: Image feature sonification for the visually impaired. In Proceedings of the 2nd Augmented Human International Conference (pp. 1–4). Retrieved from http://dl.acm.org/citation.cfm?id=1959826.1959837
