The Robot Won’t Judge Me: How AI Healthcare Benefits the Stigmatized: An Abstract


Abstract

The rise of AI healthcare applications is changing the way consumers receive treatment, diagnosis, and health advice. Despite the rapid growth of AI in healthcare contexts, prior literature suggests that consumers have reservations about AI in healthcare, driven by concerns that automated providers are less able than human providers to account for the uniqueness of consumers’ health-related characteristics (Longoni et al. 2019) and by privacy concerns (Brooks 2019). However, might there be times when consumers prefer an AI provider over a human healthcare provider? Consumers who suffer from stigmatized health issues often experience self-conscious emotions such as fear, shame, and embarrassment. These negative emotions can create barriers to communication between patients and doctors and lead to negative health outcomes. Thus, developing solutions that reduce consumers’ negative emotions related to stigmatized health issues may enhance consumer well-being. In the current research, we propose that consumers with stigmatized (versus non-stigmatized) health issues will prefer AI to human healthcare providers. In Study 1, two hundred and forty-two undergraduates were randomly assigned to a 2 (disease type: contagious or non-contagious) × 2 (healthcare provider type: human or AI) between-subjects design. In the heart disease scenario, participants were significantly more likely to schedule a screening appointment if the healthcare provider was a human physician. However, in the seasonal flu scenario, which was perceived as a more stigmatized disease, participants were significantly more likely to schedule a screening appointment if the healthcare provider was a computer. In Study 2, one hundred and fifty-four undergraduates were randomly assigned to either a human or an AI healthcare provider condition. All participants were asked to imagine that they had a close friend who was overweight/obese and experienced health issues as a result. Participants were then shown a brochure for a workout program that was either virtual or in-person. Results showed that participants expected their obese friend would feel more shame and negative judgment if he/she enrolled in the in-person program than in the virtual one. In summary, findings from two studies supported our hypothesis that consumers dealing with stigmatized health conditions prefer artificial intelligence healthcare providers to human ones. These findings have important implications for early diagnosis of and potential recovery from these health conditions, as well as for preventing the spread of contagious diseases.

Citation (APA)

An, L., & Boman, L. (2022). The Robot Won’t Judge Me: How AI Healthcare Benefits the Stigmatized: An Abstract. In Developments in Marketing Science: Proceedings of the Academy of Marketing Science (pp. 363–364). Springer Nature. https://doi.org/10.1007/978-3-030-95346-1_111
