Teachable object recognizers address a very practical need for blind people: instance-level object recognition. However, they assume that users can visually inspect the photos they provide for training, a critical step that is inaccessible to those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in a photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from one another. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study conducted in the homes of blind participants (N = 12), we show how the descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set, which can translate to better model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found training tedious, opening discussions around the need to balance information, time, and cognitive load.
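The paper does not specify how each descriptor is computed, but a common heuristic for the blur descriptor is the variance of the Laplacian response: sharp images contain strong edges and yield a high variance, while blurred images yield a low one. The sketch below (function names, kernel, and threshold are illustrative assumptions, not the authors' implementation) applies a 3x3 Laplacian to a grayscale image with NumPy and thresholds the variance:

```python
import numpy as np

def laplacian_blur_score(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response over a grayscale image.

    Low variance means few strong edges, which suggests blur.
    """
    k = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    h, w = gray.shape
    # Valid-mode 3x3 convolution, written as a sum of shifted slices.
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def is_blurry(gray: np.ndarray, threshold: float = 100.0) -> bool:
    # The threshold is a hypothetical value; in practice it would be
    # tuned on sample photos from the target camera.
    return laplacian_blur_score(gray) < threshold

# A flat image has no edges (score 0); a checkerboard is full of edges.
flat = np.full((10, 10), 128.0)
checker = (np.indices((10, 10)).sum(axis=0) % 2) * 255.0
print(is_blurry(flat), is_blurry(checker))  # → True False
```

In a real-time setting like MYCam's, a check of this kind would run on each captured frame so the app can announce "blurred photo" via the screen reader before the photo enters the training set.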
Hong, J., Gandhi, J., Mensah, E. E., Zeraati, F. Z., Jarjue, E., Lee, K., & Kacorri, H. (2022). Blind Users Accessing Their Training Images in Teachable Object Recognizers. In ASSETS 2022 - Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. Association for Computing Machinery, Inc. https://doi.org/10.1145/3517428.3544824