Examining visual semantic understanding in blind and low-vision technology users


Abstract

Visual semantics provide spatial information such as size, shape, and position, which is necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and across information-seeking tasks and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen-reading technology such as touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with the information necessary to engage with visual semantics.

Citation (APA)

Potluri, V., Grindeland, T. E., Froehlich, J. E., & Mankoff, J. (2021). Examining visual semantic understanding in blind and low-vision technology users. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445040
