It is near-impossible for casual consumers of images to authenticate digitally-altered images without a keen understanding of how to "read" the digital image. As Photoshop did for photographic alteration, so too have advances in artificial intelligence and computer graphics made seamlessly altered video appear real to the untrained eye. The colloquialism used to describe these videos is "deepfakes": a portmanteau of deep learning AI and faked imagery. The implications of these videos serving as authentic representations matter, especially in rhetorics around "fake news." Yet this alteration software, deployable through both high-end editing suites and free mobile apps, remains critically underexamined. One troubling example of deepfakes is the superimposition of women's faces onto bodies in pornographic videos. The implication here is a reification of women's bodies as things to be visually consumed, circumventing consent. This use is confounding considering that the very bodies used to perfect deepfakes were men's. This paper explores how the emergence and distribution of deepfakes continue to enforce gendered disparities within visual information. The paper, however, rejects the inevitability of deepfakes, arguing that feminist-oriented approaches to building artificial intelligence and critical approaches to visual information literacy can stifle the distribution of violently sexist deepfakes.
Wagner, T. L., & Blewer, A. (2019). "The Word Real Is No Longer Real": Deepfakes, Gender, and the Challenges of AI-Altered Video. Open Information Science. https://doi.org/10.1515/opis-2019-0003