Take It, Leave It, or Fix It: Measuring Productivity and Trust in Human-AI Collaboration


Abstract

Although recent developments in generative AI have greatly enhanced the capabilities of conversational agents such as Google's Bard or OpenAI's ChatGPT, it is unclear whether using these agents aids users across various contexts. To better understand how access to conversational AI affects productivity and trust, we conducted a mixed-methods, task-based user study, observing software engineers (N=76) as they completed a programming exam with and without access to Bard. Effects on performance, efficiency, satisfaction, and trust vary depending on user expertise, question type (open-ended "solve" questions vs. definitive "search" questions), and measurement type (demonstrated vs. self-reported). Our findings include evidence of automation complacency, increased reliance on the AI over the course of the task, and increased performance for novices on "solve"-type questions when using the AI. We discuss common behaviors, design recommendations, and impact considerations to improve collaborations with conversational AI.

Citation (APA)
Qian, C., & Wexler, J. (2024). Take It, Leave It, or Fix It: Measuring Productivity and Trust in Human-AI Collaboration. In ACM International Conference Proceeding Series (pp. 370–384). Association for Computing Machinery. https://doi.org/10.1145/3640543.3645198
