Generated by A.I.: Tools such as Midjourney and DALL-E are trained on millions of images scraped from the open web and use A.I. to create “original” images. What clues in these images of a moon landing, a giant creature, and former British Prime Minister Boris Johnson might help you determine they’re fake? Jordan Rhone via The New York Times (moon); Handout via The New York Times (Bigfoot); Eliot Higgins via The New York Times (Johnson, hand inset)

The End of Truth?

Advances in artificial intelligence have made it much trickier to tell what’s real and what’s fake

When Jennifer DeStefano’s phone rang with a call from an unknown number, she nearly didn’t pick up. But the Arizona mom’s 15-year-old daughter, Briana, was away on a ski trip, so she decided to answer it just in case.

What DeStefano heard on the other end of the line sent her into a full-blown panic. Briana sobbed and pleaded for help—then a man came on the line and demanded $1 million in ransom for Briana’s release.

Yet within minutes of the January 2023 call, DeStefano confirmed that Briana was, in fact, perfectly safe. DeStefano had been the victim of an artificial intelligence (A.I.) kidnapping scam. Using A.I. software and a short audio clip likely pulled from Briana’s social media, scammers had cloned the teen’s voice to try to trick her family into believing it was her.

‘There will come a day when nothing you see on the internet can be believed.’

A.I. has been all the buzz since the 2022 release of OpenAI’s chatbot, ChatGPT. People everywhere are talking about the potential A.I. holds to transform society—both for better and for worse. The A.I. industry is worth more than $100 billion and is expected to grow twentyfold by 2030, according to a study by Next Move Strategy Consulting.

But as the budding technology works its way into our everyday lives, it’s raising a host of concerns about the rapid spread of misinformation. In some cases, A.I. is falling into the hands of bad actors. In others, it’s simply spewing false information with an air of confidence that can fool humans into believing that what they’re seeing and hearing is actually true.

Some experts worry the proliferation of A.I. spells the end of truth as we know it. 

“The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” says Wasim Khaled, CEO of Blackbird.AI, a company that helps clients fight disinformation.

Ukrainian Presidential Press Office via AP Images (Zelensky); Shutterstock.com (face mapping)

Deepfake Videos: Powerful online tools that rely on A.I. can be used to manipulate the faces of public figures—like President Volodymyr Zelensky of Ukraine—and create fake video footage of them.

Going Viral

Generative A.I. is a type of artificial intelligence that identifies patterns in existing content and data and creates new and often unique material from it. With a simple prompt or description, you can create new text, imagery, video, audio, and more from scratch—all in a matter of seconds.
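
To make that concrete, here is a minimal sketch of what prompting a generative A.I. system can look like in code. It assumes OpenAI’s Python library and an API key; the model name and prompt are illustrative assumptions, not details from this article.

    # A minimal sketch of prompting a generative A.I. model, assuming
    # OpenAI's Python library (pip install openai) and an API key set
    # in the OPENAI_API_KEY environment variable. The model name below
    # is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Write a four-line poem about the moon."}],
    )

    # The reply is brand-new text the model generated from patterns
    # in its training data, not something copied from the web.
    print(response.choices[0].message.content)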

People across industries have been using the technology to make their jobs easier, from helping computer programmers fix bugs in their code to assisting marketers in writing advertising scripts. But what happens when A.I. becomes so good at mimicking the voice, speech, and likeness of humans that we simply can’t tell what’s real and what’s not?

Last spring, fake A.I.-generated Kanye and Drake songs went viral on TikTok, and many believed they were listening to the real thing. Fake images of an earthquake devastating the Pacific Northwest and of Pope Francis in a puffy designer jacket were also all over the internet. A year earlier, a deepfake video of Ukrainian President Volodymyr Zelensky surrendering to Russia made the rounds on social media.

Kevin Winter/Getty Images (Drake); Mike Coppola/Getty Images (Johansson); Steve Granitz/WireImage (Jay-Z); Jon Kopaloff/Getty Images (Gomez)

Audio: Using A.I., people can clone the voices of musicians and actors to create new songs or audio in their likenesses. Drake, Scarlett Johansson, Jay-Z, and Selena Gomez have all been victims of A.I. voice cloning.

In October, Google unveiled a new phone that lets users edit images and sounds right on the device, including changing facial expressions in photos. Think you look grumpy in a beach snap? Just edit in a smile from another shot.

As A.I. manipulation gets easier and easier, political experts expect deepfakes and other disinformation to play a big role in this year’s presidential election. In fact, they’ve already made an appearance on the campaign trail. Last spring, then-Republican candidate Ron DeSantis’s campaign shared a video on X, formerly known as Twitter, that included images of former President Donald Trump hugging Anthony Fauci. Fauci, an immunologist, was a top adviser to Trump on the Covid-19 pandemic, and the two clashed publicly multiple times. Although the images may have looked real at first glance, they were A.I.-generated.

Quick Conversation

We asked ChatGPT a few questions. Here’s how the A.I. chatbot responded.  

Upfront: Can you introduce yourself?

ChatGPT: I’m like a supersmart computer program that can talk to you in writing.

Upfront: What can’t you do?

ChatGPT: I have limitations and constraints despite being a powerful language model. Here are some things I cannot do:

1. I do not have personal experience, emotions, or consciousness.
2. My knowledge is only up to date until September 2021.
3. I do not possess creativity or the ability to generate original ideas.
4. I cannot make decisions for you.

Upfront: What’s the weirdest fact you know?

ChatGPT: There are many weird and fascinating facts out there, but one that some people find particularly strange is: “Bananas are berries, but strawberries aren’t.”

Bad Bots

Generative A.I. technology can also learn unwanted and unexpected behaviors. Because it delivers information with what seems like complete confidence, it can be a struggle to separate truth from fiction while using these systems.

When two New York lawyers used ChatGPT to write a legal brief for a court hearing last June, it included citations of six totally made-up cases. Last year, another A.I. bot, under development by Meta, responded to the question “Who is a terrorist?” by giving the name of the Dutch politician Marietje Schaake, who is not a terrorist. And everyday users find that chatbots get little things wrong all the time.

Meanwhile, a wave of uncensored and loosely moderated chatbots has come online without many guardrails, setting off debate over whether chatbots should be moderated and who should decide.

“These chatbots can and will say anything if left to their own devices,” says Oren Etzioni, an emeritus professor at the University of Washington and former chief executive of the Allen Institute for AI. “They’re not going to censor themselves. So now the question becomes, what is an appropriate solution in a society that prizes free speech?”

A New Frontier

People are coming together to call for safeguards against A.I. Last March, more than 1,000 technology leaders and researchers working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

In October, President Biden issued an executive order outlining the federal government’s first regulations on A.I. systems. They include requirements that the most advanced A.I. products be tested to ensure that they can’t be used to produce biological or nuclear weapons, as well as recommendations that images, video, and audio developed by such systems be watermarked to make clear that they were created by A.I.
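
As a toy illustration of the watermarking idea, the sketch below tags a hypothetical A.I.-generated image file with a “created by A.I.” label in its metadata, assuming the Pillow imaging library. Real watermarking schemes embed the mark in the pixels themselves so it can’t simply be stripped out; this only shows the labeling concept.

    # A toy sketch of labeling an A.I.-generated image, assuming the
    # Pillow library (pip install Pillow) and a hypothetical file name.
    # Production watermarks live in the pixels, not just the metadata.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    image = Image.open("generated.png")            # hypothetical A.I. image
    label = PngInfo()
    label.add_text("Source", "Created by A.I.")    # the machine-readable tag
    image.save("generated_labeled.png", pnginfo=label)

    # Anyone (or any platform) can read the label back:
    print(Image.open("generated_labeled.png").text["Source"])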

“Deepfakes use A.I.-generated audio and video to smear reputations, spread fake news, and commit fraud,” Biden said at the signing of the order at the White House. He described his concern that fraudsters could take three seconds of a person’s voice and manipulate its content, turning an innocent comment into something more sinister that would quickly go viral.

A.I. is already making an appearance on the campaign trail.

The tech industry also supports regulations, though the companies disagree on the level of government oversight required. Microsoft, OpenAI, Google, and Meta are among 15 companies that have agreed to voluntary safety and security commitments, including having third parties stress-test their systems for vulnerabilities.

Despite the progress in securing guardrails, developments in A.I. seem to be outpacing lawmakers in Washington, with the technology, in many cases, taking on a mind of its own. Some experts say we still need to do more.

“We are now living in a time where all of the things your faculties used to understand were real . . . could be A.I. generated,” says Khaled of Blackbird.AI. “That has to be something that people always remember now—you have to question all the things you see and hear.”

With reporting by Cade Metz, Stuart A. Thompson, Tiffany Hsu, and Steven Lee Meyers of The New York Times.

Andrés Guadamuz via The New York Times

Seeing Is Believing?

Here are some ways to tell if something you see or hear is real—or whether it could be A.I. generated. 

IMAGES: Look very closely at the image. If you notice gibberish lettering on a sign in the background, extra fingers on a person’s hand, or a scene that just doesn’t seem plausible, it might have been generated by A.I.

TEXT: Do your research. Find at least two reputable sources that confirm what you’re reading is a fact.

VIDEO: Look at how things are aligned. If images look blurry or if someone’s face or neck meets their body in an unusual way, things could be amiss. If the video’s audio and images don’t line up, that could be a sign of a deepfake too.

AUDIO: If a person’s voice on the other end of the line sounds robotic or glitchy, it could be a deepfake. Listen to the background sound as well—extra static might be a sign that something’s not right.

Courtesy of Google

A.I. at Your Fingertips

A.I. has been around for a while, but it’s easier to access today than ever. The new Google Pixel 8 and Pixel 8 Pro have a “Magic Editor” feature that lets users edit photos using A.I.—from swapping facial features to moving or removing background objects.

Shutterstock.com

These images, taken from a Google ad, show how easy it is to alter reality using the new “Magic Editor” feature.
