What the heck is a deepfake?

Can you really believe what you see?

This is the question that the machine-learning-fueled phenomenon of “deepfakes” forces us to consider more seriously than ever before. What is a deepfake, and why do deepfakes matter? Read on to learn more about these new cyberthreats and their implications for cybersecurity and for society.

Introduction to Deepfakes

A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name). This article offers two overviews of how deepfakes work: one for the layperson, and one for the technically minded.

Simple Explanation: 

Deep learning, like other kinds of machine learning, works by feeding an algorithm examples until it learns to produce output that resembles the examples it was trained on. Humans learn the same way: a baby might try eating random objects, and it quickly discovers what’s edible and what isn’t.

In this analogy, the objects lying around the house correspond to real images on the internet, and the baby’s ability, after a couple of months, to recognize an object as edible or inedible without putting it in their mouth corresponds to the algorithm’s ability, after training on existing data, to produce fake images that resemble real ones.

Technical explanation (skip this section if you’re not interested in the nitty-gritty): 

Deep learning is a special kind of machine learning that involves “hidden layers.” Typically, deep learning is carried out by a special class of algorithm called a neural network, which is loosely modeled on the way a human brain learns information. A hidden layer is a set of nodes within the network that performs mathematical transformations, converting input signals to output signals (in the case of deepfakes, converting real images into convincing fake ones). The more hidden layers a neural network has, the “deeper” the network is. Neural networks, and particularly convolutional neural networks (CNNs), are known for performing quite well on image recognition tasks, so applying them to creating deepfakes is a no-brainer (no pun intended).
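To make the idea of hidden layers concrete, here is a minimal sketch of a small deep network. PyTorch is assumed as the framework, and all layer sizes are illustrative rather than taken from any real deepfake system:

```python
import torch
import torch.nn as nn

# A "deep" network: the Linear layers between input and output
# are the hidden layers; more of them means a deeper network.
model = nn.Sequential(
    nn.Linear(784, 256),  # hidden layer 1: transforms input signals
    nn.ReLU(),
    nn.Linear(256, 64),   # hidden layer 2: adding layers adds "depth"
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer (e.g., 10 image classes)
)

x = torch.randn(1, 784)   # stand-in for a flattened 28x28 image
print(model(x).shape)     # torch.Size([1, 10])
```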

The process of producing convincing deepfakes actually involves two models, an arrangement known as a generative adversarial network (GAN). One model is trained to produce the most realistic fake replicas of real images that it can. The other model is trained to detect whether an image is real or fake. The two models iterate back and forth, each getting better at its respective task. By pitting the models against each other, you end up with a generator that’s extremely adept at producing fake images; so adept, in fact, that humans often can’t tell the output is fake at all.
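Here is a hedged sketch of that adversarial back-and-forth, again assuming PyTorch; the architectures, dimensions, and hyperparameters are toy values chosen for illustration:

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784

# The faker: turns random noise into candidate fake images.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The detector: outputs a probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, image_dim)   # stand-in for real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator's turn: label real images 1 and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator's turn: make fakes the discriminator calls real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The detach() call is the key design detail here: it stops the discriminator’s update from reaching back into the generator, so each model improves only on its own turn, which is exactly the iterating back and forth described above.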

How is a deepfake different from Photoshop or a face swap? 

Fake or doctored images show up all over the internet these days, and they’re often harmless. You’re probably familiar with the amusing effects of “face swapping” on Snapchat or other photo apps, where you can put someone else’s face on your own and vice versa. Or maybe you participated in the “age yourself” trend and ran your photo through an aging app that showed what you might look like in your ripe old age. 

Aside from the fact that these photo-altering applications are designed for amusement, they’re mostly harmless because it’s easy to tell that the images are fake and don’t reflect reality. That’s precisely what makes deepfakes dangerous: applying deep learning to fake-image production creates a world where humans often can’t tell that images or videos are fake at all. 

So what’s the big deal?

In today’s society, the vast majority of people get their information about the world, and form their opinions, from content on the internet. Anyone with the capability to create deepfakes can therefore release misinformation and influence the masses to behave in ways that advance the faker’s personal agenda. Deepfake-based misinformation could wreak havoc on both a micro and a macro scale. 

On a small scale, a deepfaker could, for example, create personalized videos that appear to show a relative asking for a large sum of money to get out of an emergency, then send them to unsuspecting victims, enabling scams at an unprecedented level. 

On a large scale, fake videos of world leaders making fabricated claims could incite violence and even war. 

What can we do? 

As of now, deepfakes aren’t a huge problem, but they’ll likely increase in both prevalence and quality over the next few years. That doesn’t mean you can’t trust any image or video, but you should train yourself to be more skeptical of images and videos, especially when a video asks you to send money or personal information, or makes outrageous claims that seem out of character for the person who appears to be making them. 

Interestingly, AI may be the answer to detecting deepfakes. Models can be trained to recognize fake images using signals that the human eye can’t detect; a rough sketch of the idea follows below. Keep a watchful eye on the development of the deepfake phenomenon over the next couple of years, and, as always, remain vigilant.
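As a rough illustration of that idea, here is a minimal sketch of a detector: a small convolutional classifier trained to label images as real or fake. PyTorch is assumed, and the architecture is a toy example, not any production detection system:

```python
import torch
import torch.nn as nn

# A small CNN that maps an image to a single real-vs-fake score.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 RGB inputs
)

images = torch.randn(8, 3, 64, 64)            # stand-in image batch
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = real, 0 = fake
loss = nn.BCEWithLogitsLoss()(detector(images), labels)
```

Trained on enough labeled examples, a classifier like this can pick up on statistical artifacts of the generation process that a human viewer would never notice.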