Is this AI trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust

Abstract

To promote data transparency, frameworks such as CrowdWorkSheets encourage documentation of annotation practices on the interfaces of AI systems, but we do not know how they affect user experience. Will the quality of labeling affect the perceived credibility of training data? Does the source of annotation matter? Will a credible dataset persuade users to trust a system even if it shows racial biases in its predictions? To find out, we conducted a user study (N = 430) with a prototype of a classification system, using a 2 (labeling quality: high vs. low) × 4 (source: others-as-source vs. self-as-source cue vs. self-as-source voluntary action vs. self-as-source forced action) × 3 (AI performance: none vs. biased vs. unbiased) experiment. We found that high-quality labeling leads to higher perceived credibility of the training data, which in turn enhances users' trust in the AI, but not when the system shows bias in its performance. Practical implications for the design of explainable and ethical AI interfaces are discussed.

Citation (APA)

Chen, C., & Sundar, S. S. (2023). Is this AI trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery. https://doi.org/10.1145/3544548.3580805
