Potential Pitfalls With Automatic Sentiment Analysis: The Example of Queerphobic Bias

Abstract

Automated sentiment analysis can help efficiently detect trends in patients’ moods, consumer preferences, political attitudes and more. Unfortunately, like many natural language processing techniques, sentiment analysis can show bias against marginalised groups. We illustrate this point by showing how six popular sentiment analysis tools respond to sentences about queer identities, expanding on existing work on gender, ethnicity and disability. We find evidence of bias against several marginalised queer identities, including in the two models from Google and Amazon that seem to have been subject to superficial debiasing. We conclude with guidance on selecting a sentiment analysis tool to minimise the risk of model bias skewing results.
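To make the probing idea concrete: a common way to surface this kind of bias is to score minimally different template sentences that vary only in the identity term and compare the resulting sentiment. The sketch below is illustrative only; the templates, the identity terms, and the choice of NLTK's open-source VADER analyser are assumptions of this summary, not the authors' protocol, and VADER is not necessarily one of the six tools evaluated in the article.

```python
# Minimal sketch of a template-based bias probe for a sentiment analysis tool.
# Assumptions: templates, identity terms, and the use of NLTK's VADER are
# illustrative choices, not the paper's exact setup.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

templates = ["I am {}.", "My friend is {}."]
identity_terms = ["straight", "gay", "bisexual", "transgender", "nonbinary"]

for template in templates:
    for term in identity_terms:
        sentence = template.format(term)
        # Compound score ranges from -1 (most negative) to +1 (most positive).
        score = sia.polarity_scores(sentence)["compound"]
        print(f"{sentence:<30} compound = {score:+.3f}")
```

In a probe like this, an unbiased tool would assign near-identical scores to sentences that differ only in the identity term; systematic gaps suggest the lexicon or model associates particular identities with negative sentiment.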

Citation (APA)
Ungless, E. L., Ross, B., & Belle, V. (2023). Potential Pitfalls With Automatic Sentiment Analysis: The Example of Queerphobic Bias. Social Science Computer Review, 41(6), 2211–2229. https://doi.org/10.1177/08944393231152946
