Individual Deep Fake Recognition Skills are Affected by Viewer’s Political Orientation, Agreement with Content and Device Used


Abstract

AI-generated “deep fakes” are increasingly used by cybercriminals to conduct targeted and tailored social engineering attacks and to influence public opinion. To raise awareness and efficiently train individuals to recognize deep fakes, it is essential to understand individual differences in the ability to recognize them. Previous research suggests a close relationship between political attitudes and top-down perceptual and cognitive processing styles. In this study, we investigate the impact of political attitudes and of agreement with the political message content on individual deep fake recognition skills. 163 adults (72 female, 44.2%) judged a series of video clips of politicians’ statements from across the political spectrum regarding their authenticity and their agreement with the message content. Half of the presented videos were fabricated via lip-sync technology. In addition to agreement with each statement, global political attitudes towards social and economic topics were assessed with the Social and Economic Conservatism Scale (SECS). There were robust negative associations between participants’ general and social conservatism and their ability to recognize fabricated videos, especially when they agreed with the message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized than those watched on stationary computers. This is the first study to investigate and establish an association between political attitudes and interindividual differences in deep fake recognition. The findings support recently published research suggesting relationships between conservatism and the perceived credibility of conspiracy theories and fake news in general. Implications for further research are discussed.

Citation (APA)

Sütterlin, S., Ask, T. F., Mägerle, S., Glöckler, S., Wolf, L., Schray, J., … Lugo, R. G. (2023). Individual Deep Fake Recognition Skills are Affected by Viewer’s Political Orientation, Agreement with Content and Device Used. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14019 LNAI, pp. 269–284). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-35017-7_18
