Deep Fakes

What are deepfakes? Maybe you’ve heard of them—or maybe, unwittingly, you’ve fallen for one and still haven’t realized that you have. But if you use facial recognition to unlock your phone or log into an app, you’re already familiar with the basic concept behind how deepfakes work.

Deepfakes are videos created by artificial intelligence, which uses “deep learning to make images of fake events.”[1] As of September 2019, around 15,000 deepfake videos were circulating on the web.[2] These days, the number is assuredly much higher. Most deepfake videos target women in revenge porn, created by mapping female faces onto the bodies of pornography performers. Naturally, this isn’t simply a matter of discomfort; it is a matter of personal privacy and safety as well. A deepfake video can jeopardize a career or ruin a life within mere seconds.

It is understandably very dangerous, then, to consider what effect such videos can have when they use celebrity and influencer faces. Companies’ reputations can be damaged, and susceptible viewers may fall prey to unhealthy or even outright vicious propaganda that the person depicted never actually endorsed. Again, this is normally not about physical harm; it is about brand, reputation, and the control (or complete lack thereof) people have over what is shown, shared, and spread about them online.

States have begun addressing the deepfake phenomenon, but progress is exceptionally slow. Virginia, for example, outlawed the use of deepfakes in revenge porn back in 2019; Texas outlawed their use in political campaigns. California, meanwhile, has banned both, with its political-deepfake ban applying within 60 days of an election, while the Department of Homeland Security has been instructed by the U.S. National Defense Authorization Act to produce “annual reports on threats posed by the technology.”[3] Unfortunately, that seems to be all thus far. No laws specifically address instances where children are harmed within this deepfake sphere.[4]

Part of the problem is the difficulty of identifying deepfakes. Detection and creation feed each other: “the very method a program uses to detect a deepfake can be used to ‘train’ new deepfake creation algorithms.”[5] Even the Deepfake Detection Challenge, which offered $1 million in prize money to whoever could develop a program capable of detecting deepfakes, produced a winning model with only a 65% accuracy rate.[6]
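This cat-and-mouse dynamic can be illustrated with a toy sketch. The example below is an assumption-laden simplification, not any real detection system: it reduces each video to a single hypothetical “artifact score,” trains a one-variable logistic-regression detector, and then shows how a generator can use the detector’s own decision signal to shift its output until detection accuracy collapses. Real deepfake detectors and generators are deep neural networks, but the adversarial feedback loop works the same way in principle.

```python
# Toy illustration of the detector-vs-generator arms race.
# ASSUMPTION: each video is summarized by one synthetic "artifact score";
# real systems operate on pixels with deep CNNs. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

real = rng.normal(0.0, 1.0, 500)   # real videos: artifact score near 0
fake_mean = 3.0                    # early fakes carry an obvious artifact

def train_detector(real, fake):
    """Fit a 1-D logistic-regression detector by gradient descent."""
    x = np.concatenate([real, fake])
    y = np.concatenate([np.zeros_like(real), np.ones_like(fake)])  # 1 = fake
    w, b = 0.0, 0.0
    for _ in range(200):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= 0.1 * np.mean((p - y) * x)
        b -= 0.1 * np.mean(p - y)
    return w, b

def detector_accuracy(w, b, real, fake):
    """Fraction of samples the detector labels correctly."""
    real_ok = 1 / (1 + np.exp(-(w * real + b))) < 0.5
    fake_ok = 1 / (1 + np.exp(-(w * fake + b))) >= 0.5
    return (real_ok.sum() + fake_ok.sum()) / (len(real) + len(fake))

# Round 1: the detector easily separates obvious fakes from real videos.
fake = rng.normal(fake_mean, 1.0, 500)
w, b = train_detector(real, fake)
acc_before = detector_accuracy(w, b, real, fake)

# Round 2: the generator exploits the detector's own signal, nudging its
# artifact score in whichever direction lowers the "fake" probability.
for _ in range(50):
    fake_mean -= 0.1 * np.sign(w)  # gradient of w*x + b w.r.t. x is w
fake = rng.normal(fake_mean, 1.0, 500)
acc_after = detector_accuracy(w, b, real, fake)

print(acc_before, acc_after)  # accuracy degrades once the fakes adapt
```

The second loop is the crux: the generator never needs to know what real videos look like, only how the detector scores its output, which is exactly why publishing a detection method hands ammunition to the next generation of fakes.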

In a political, cultural, and social climate that’s exceedingly online and susceptible to misinformation, deepfakes pose a dangerous and terrifying challenge. On one end, political deepfakes can sway votes. On the other, and relevant to anyone whose face has ever appeared in a picture online, they are a gross invasion of privacy that our laws currently do little to protect us from. Most proposed remedies involve either more AI or a more careful, considerate look at the source and content of the material, but none of these methods is foolproof, and none is easily enforceable by law.

Perhaps in time, more states will follow in the footsteps of Virginia, Texas, and California; but more than that, public awareness, attention, and demand may be what protects individuals from the dangers of deepfakes online.


[2] Id.



[5] Id.

[6] Id.
