Artificial Illusions

Deepfakes as Speech

Authors

  • Nicolas Graber-Mitchell, Amherst College

Abstract

Deepfakes, a new type of artificial media created by sophisticated machine learning algorithms, present society with a fundamental epistemological problem: How can we know the truth when seeing and hearing are not believing? This paper discusses how deepfakes fit into the category of illusory speech, what they do in society, and how to deal with them. Illusions, much like lies, present an alternate reality, but unlike lies they also contain evidence for that reality. Some illusions, like games of tag and magic tricks, are harmless and fun. Others, like counterfeit coins and deepfakes, are deeply convincing and cause real harm. The most common use of deepfake technology, for example, is to produce pornographic videos of women who never consented. After strangers attacked them in this way, women reported feeling violated and living in a state of constant “visceral fear.” Pornographic deepfakes, which are most often deployed against women, abridge their targets’ sexual agency and privacy, contributing to inequality and enabling intimate partner abuse, workplace sexual harassment, and other discrimination and hate. Deepfakes also pose a threat to politics and society more generally. Beyond allowing malicious actors to produce convincing, illusory disinformation, their increased use may erode our collective ability to discern the truth. In the face of the deep and distressing harms that deepfakers cause to women and the danger they pose to democracy, this paper argues for new civil and criminal penalties for deepfakers as well as new regulations and liabilities for the internet platforms that host their work.

Published

2021-06-23
