Towards Better User Studies in Computer Graphics and Vision


Abstract

Online crowdsourcing platforms have made it increasingly easy to perform evaluations of algorithm outputs with survey questions like “which image is better, A or B?”, leading to their proliferation in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper's contributions. On the one hand, we argue that, when conducted hastily as an afterthought, such studies lead to an increase in uninformative and potentially misleading conclusions. On the other hand, in these same communities, user research is underutilized in driving project direction and forecasting user needs and reception. We call for increased attention to both the design and reporting of user studies in computer vision and graphics papers, towards (1) improved replicability and (2) improved project direction. Together with this call, we offer an overview of methodologies from user experience research (UXR) and human-computer interaction (HCI).

Citation (APA)

Bylinskii, Z., Herman, L., Hertzmann, A., Hutka, S., & Zhang, Y. (2023). Towards Better User Studies in Computer Graphics and Vision. Foundations and Trends in Computer Graphics and Vision, 15(3), 201–252. https://doi.org/10.1561/0600000106
