Is Gender-Neutral AI the Correct Solution to Gender Bias? Using Speech-Based Conversational Agents


Abstract

Background: AI agents learn many human behaviors and values, and in doing so they also absorb the biases of human society. Gender bias, a significant global problem, has penetrated the domain of artificial intelligence (AI). Because AI agents serve as human digital assistants, this bias is visible in the framing of several AI agents, such as speech-based conversational agents, as "female." Gender-neutral AI agents are often proposed as the solution, but there are concerns that they could backfire on human-AI interaction. We therefore investigated whether interactions with gender-neutral agents are effective compared with agents matching the expectant gender (the gender that users expect from AI agents).

Methods: We selected speech-based conversational agents as a research tool because users interact with them closely in daily life and can intuitively judge their gender. We conducted two studies. In the first, we investigated the current gender status of AI agents (speech-based conversational agents): participants who regularly used gender-biased agents reported which gender they expected from the agent's voice tone and timbre, and which gender they expected for each task and task-performance experience. In the second, we tested the usability of agents with gender-neutral voices, examining how participants evaluated four versions of a neutral voice ("G") in terms of preference, stability, and satisfaction.

Results: The first study confirmed that users perceive speech-based conversational agents as performers of simple tasks, such as playing music or retrieving weather information, and that participants consistently expected a "female" to perform this role well, both for the tasks themselves and for the experience of task performance. The second study confirmed that participants did not prefer the gender-neutral voice "G" because its identity was difficult to grasp. In addition, participants judged that some versions of "G" lacked human-like features, so they did not feel stable. Finally, because participants preferred none of the versions of "G" and found only some of them stable, they did not feel sufficiently satisfied. Overall, participants rated the usability of the gender-neutral speech-based conversational agent poorly.

Conclusions: This research shows a strong possibility that ignoring the expectant gender and applying gender-neutral voices will hinder the usability of AI agents. Moreover, a gender-neutral voice can itself become a trigger that reminds users of the expectant gender. We therefore suggest moving beyond human gender categories toward a genderless design that encompasses diversity.

Citation (APA)

Yeon, J., Park, Y., & Kim, D. (2023). Is Gender-Neutral AI the Correct Solution to Gender Bias? Using Speech-Based Conversational Agents. Archives of Design Research, 36(2), 63–91. https://doi.org/10.15187/adr.2023.05.36.2.63
