Kim Kardashian seemed to mock her fans in this deepfake on Instagram.

When You Can’t Trust Your Eyes

Fake videos are becoming so convincing that it may be hard to tell what’s real. How can we avoid being duped?  

Last spring, a GIF making the rounds on social media showed Joe Biden, then the front-runner for the Democratic nomination for president, sticking out his tongue and wiggling his eyebrows at the camera. President Trump was among the thousands who retweeted it.

But the footage of Biden wasn’t real. High-tech tools were used to manipulate his image and make him look ridiculous. This is a “deepfake,” or an ultrarealistic video that’s been altered with artificial intelligence (A.I.).

Although it was clear to most people that this particular clip had been altered, it’s an example of how deepfakes can be used to spread misinformation. People tend to trust video footage, so the technology could be employed to smear someone, making it look like they did or said something offensive or inappropriate. That’s why California passed legislation last year making it illegal to create or distribute doctored videos, images, or audio of politicians within 60 days of an election.

“Deepfakes are a powerful and dangerous new technology that can be weaponized to sow misinformation and discord among an already hyperpartisan electorate,” California assemblymember Marc Berman, who introduced the bill, said at the time.

Until recently, making realistic-looking deepfakes was a laborious and expensive process used only by big-budget Hollywood movies or researchers working to develop A.I. There were rudimentary tools available—like the face-swapping effect on Snapchat—but nothing that could deceive most viewers.

But now more powerful online tools are springing up, including apps that make it relatively easy to create realistic face swaps and leave few traces of manipulation. Deepfakes of famous people—including former President Barack Obama, Kim Kardashian, and Facebook CEO Mark Zuckerberg—have already circulated online. The videos make it look like the person is saying something upsetting. Kardashian seemed to be admitting that she uses her fans’ data to make money, for example; but it was a voice actor imitating her while deepfake tools made her mouth and expressions line up with the voice.

It’s still not easy to produce deepfakes that completely fool people. With the Biden GIF, many people could tell his facial movements weren’t natural. But as A.I. develops, deepfake technology is getting better at an alarming pace.

“Deepfakes are eerily dystopian,” says Claire Wardle, executive director of First Draft, a nonprofit that addresses misinformation and disinformation, “and they’re only going to get more realistic and cheaper to make.”


Photoshop & Face Swaps

Although deepfakes are a fairly new concern, image manipulation is not. Back in the 1860s, for example, some photos of President Abraham Lincoln—who was often mocked for being ugly and gangly—were subtly tinkered with to make him look better, a long-ago equivalent of using Photoshop to enhance a social media photo. In one famous presidential portrait, the artist superimposed Lincoln’s head onto the body of another politician.

The invention of photo- and video-editing software has made it even easier to obscure the truth. Take the altered image of Republican House member Elise Stefanik, which made it look like she was defiantly extending a middle finger to the camera after listening to impeachment inquiry testimony. (In the original photo, Stefanik was just looking at the camera.) Or consider the clip that appeared to show House Speaker Nancy Pelosi slurring her words, when really, the video had just been slowed down.


Fake videos ‘tap into our existing anger or fear.’

These low-tech manipulations—often called “shallowfakes”—are already a big problem, in part because those who create them know how to manipulate people by playing to their biases.

“It’s less about the sophistication of the technology,” Wardle says. “It’s more about how it taps into our existing anger or fear.”



Mark Zuckerberg, co-founder of Facebook, appeared to talk about the social media platform’s “total control” of users’ data in this doctored video.

‘We All Have a Responsibility’

Even though deepfakes aren’t too common yet, just the idea of their existence is spreading confusion. When any video could be fake, it’s easier for people to lie. If, say, a news story reflects poorly on a politician, they might falsely claim that the video evidence had been fabricated.

That’s why experts want people to think critically about all videos they see before they decide whether they’re true and before they share them (see “How to Spot a Deepfake”).

“It’s important to not just assume that any video that you see or any audio that you hear is fake,” says Aviv Ovadya, founder of the Thoughtful Technology Project. “That actually puts you in a place where you know less about the world, because . . . it allows you to pick and choose whatever you want to be true.”

Social media platforms have recently begun contemplating how to address misinformation, and the House of Representatives has held a hearing on how deepfakes threaten national security. But so far, there are no clear answers.

Young people may have a unique role to play in preventing the spread of both deepfakes and shallowfakes, Wardle says, since they’re often more tech-savvy than older family members. She hopes teens will teach their parents about what to watch for.

“As a society, we should be training each other,” she says. “We all have a responsibility to not do this. If we weren’t sharing this stuff, we wouldn’t be in the mess that we’re in.”


With reporting by Kevin Roose of The Times.

How to Spot a Deepfake

If you have a strong emotional reaction to something you see online, it’s time to pause and carefully consider the material.

“If you’re really excited or really angry or really sad, at that moment when you have that reaction, that’s the point to say, ‘Am I absolutely sure that this is true?’ ” says Aviv Ovadya, founder of the Thoughtful Technology Project.

One way to check: Do a Google search to find out if trustworthy outlets are reporting the same thing. If you’re looking at a photo, you may also want to try a reverse image search to see if what you’re looking at is being taken out of context.

If you can’t find any other sources, you may be looking at disinformation.

“If you don’t know 100 percent, hand on heart, this is true, please don’t share,” says Claire Wardle, executive director of First Draft. “Because it’s not worth the risk.”

