Multimodal representation of complex spatial data


Abstract

For blind users, spatial information is often presented in non-spatial forms such as electronic speech. We explore the possibility of representing spatial data on refreshable tactile graphic displays in combination with audio feedback, using both static and dynamic tactile information. We describe an implementation of a New York Times-style crossword puzzle that provides interactions to query a cell's location and stored data, request clues in the across and down directions, edit and fill in the puzzle using a Perkins-style braille keyboard or a typewriter-style keyboard, and verify answers. Through our demonstration, we explore trade-offs between the available tactile real estate and overcrowding of the tactile image, with a view toward reducing both the cognitive workload of retaining a working mental model of the active grid and the time to complete a letter-placement task.
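
The interactions described above, querying a cell, requesting across and down clues, entering letters, and verifying answers, imply a simple grid-backed data model. The following Python sketch is a hypothetical illustration of such a model, not the authors' implementation; the names Entry and CrosswordModel and all method signatures are assumptions made for this example, and the speech and tactile rendering layers are omitted.

    from dataclasses import dataclass, field

    @dataclass
    class Entry:
        """One across or down answer slot in the grid."""
        number: int       # printed clue number
        row: int          # grid row of the first cell
        col: int          # grid column of the first cell
        direction: str    # "across" or "down"
        clue: str
        answer: str

    @dataclass
    class CrosswordModel:
        size: int
        entries: list = field(default_factory=list)

        def __post_init__(self):
            # letters entered by the user; None marks an empty cell
            self.grid = [[None] * self.size for _ in range(self.size)]

        def cells(self, e):
            """Yield the (row, col) coordinates covered by an entry."""
            dr, dc = (0, 1) if e.direction == "across" else (1, 0)
            for i in range(len(e.answer)):
                yield e.row + i * dr, e.col + i * dc

        def clues_at(self, row, col):
            """Location query: the entries whose cells cross (row, col)."""
            return [e for e in self.entries if (row, col) in set(self.cells(e))]

        def place(self, row, col, letter):
            """Fill one cell, e.g. from a braille or QWERTY keystroke."""
            self.grid[row][col] = letter.upper()

        def verify(self, e):
            """Check the user's letters against the stored answer."""
            return all(self.grid[r][c] == ch
                       for (r, c), ch in zip(self.cells(e), e.answer))

A short, equally hypothetical usage example:

    cw = CrosswordModel(size=5)
    cw.entries.append(Entry(1, 0, 0, "across", "Feline pet", "CAT"))
    one = cw.entries[0]
    for (r, c), ch in zip(cw.cells(one), one.answer):
        cw.place(r, c, ch)          # simulate three keystrokes
    print(cw.verify(one))           # True
    print([e.clue for e in cw.clues_at(0, 1)])  # ['Feline pet']

Keying clue lookup on cell coordinates mirrors the location-query interaction described in the abstract: a touch on the tactile display can resolve directly to the crossing across and down entries.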

Citation (APA)

Rao, H., & O’Modhrain, S. (2019). Multimodal representation of complex spatial data. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3290607.3313249
