Understanding and Improving Drilled-Down Information Extraction from Online Data Visualizations for Screen-Reader Users

Abstract

Inaccessible online data visualizations can significantly disenfranchise screen-reader users from accessing critical online information. Current accessibility measures, such as adding alternative text to visualizations, provide only a high-level overview of the data, preventing screen-reader users from exploring data visualizations in depth. In this work, we build on prior research, conducting role-based and longitudinal studies with screen-reader users to develop taxonomies of the information they seek when interacting with online data visualizations at a granular level. Utilizing these taxonomies, we extended the functionality of VoxLens, an open-source multimodal system that improves the accessibility of data visualizations, by supporting drilled-down information extraction. We assessed the performance of our VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Our enhancements "closed the gap" between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, while reducing interaction time by 22%.

Citation (APA)

Sharif, A., Zhang, A. M., Reinecke, K., & Wobbrock, J. O. (2023). Understanding and Improving Drilled-Down Information Extraction from Online Data Visualizations for Screen-Reader Users. In ACM International Conference Proceeding Series (pp. 18–31). Association for Computing Machinery. https://doi.org/10.1145/3587281.3587284
