Generative artificial intelligence (AI) has gained considerable recent attention with the release of Chat GPT 4. It has been praised for its ability to generate human-like responses, but has faced perhaps even more criticism over concerns about biased responses, misinformation, and the generation of harmful or inappropriate content. Chat GPT draws on large sources of data to compose responses to all kinds of questions. Generative AI models are designed to be objective and to avoid bias in their output; however, in an age of misinformation, social media, user-generated content, and the 24-hour news cycle, biased information has never been more plentiful. This paper investigates whether Chat GPT 4 produces biased responses by applying Support Vector Machines to public data from biased media sources. We find that Chat GPT's responses tend to exhibit bias.
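The abstract does not specify the authors' exact pipeline, but the general approach it names (a Support Vector Machine trained on text from biased media sources, then used to classify model responses) can be sketched as below. This is a hedged illustration only: the feature representation (TF-IDF), the labels, and the toy snippets are assumptions, not the paper's data or method.

```python
# Hedged sketch of SVM-based bias classification, assuming a standard
# TF-IDF + linear SVM text pipeline (scikit-learn). The training snippets
# and "left"/"right" labels below are hypothetical stand-ins for a corpus
# of articles from media sources with known bias ratings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled corpus: each snippet is tagged with its source's leaning.
texts = [
    "the policy is a disastrous government overreach",
    "this radical agenda threatens our freedoms",
    "officials have failed working families once again",
    "corporations continue to exploit workers under this plan",
]
labels = ["right", "right", "left", "left"]

# TF-IDF features feed a linear SVM, a common choice for
# high-dimensional sparse text classification.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

# A generated response can then be classified to estimate which
# biased corpus its language most resembles.
response = "this plan lets corporations exploit workers"
predicted = clf.predict([response])[0]
print(predicted)
```

In a study like the one described, the classifier's predictions over many model responses would then be aggregated to measure whether the output skews toward one class of sources.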
Citation
Duncan, C., & McCulloh, I. (2023). Unmasking Bias in Chat GPT Responses. In Proceedings of the 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2023 (pp. 687–691). Association for Computing Machinery, Inc. https://doi.org/10.1145/3625007.3627484