Illustration by Christopher Short

Is YouTube Radicalizing Us?

The social media platform has been under fire for recommending extreme—and false—content to users

It’s not unusual for Breon McIntosh to spend about three hours of his day watching YouTube. The 17-year-old senior at North Lawndale College Prep in Chicago uses the website to get the news and watch Fortnite videos.

McIntosh usually begins by checking his subscriptions and looking to see what’s trending. He’ll watch one video and then videos that come up on autoplay. Yet the more videos McIntosh watches, the more “the recommendations become a whirlwind of lies, click bait, and fake news,” he says. “Mainly I feel disgusted.”

Media experts say McIntosh has good reason to feel this way. YouTube, the most popular social media platform in the world, has come under fire recently for the way it has seemed to recommend more and more extreme—and often false—content to users.

Searches for the “moon landing” lead to conspiracy videos suggesting the historic 1969 U.S. moon mission was a hoax; watching a video on the Parkland activists brings up videos falsely claiming they were hired actors. Following the 2017 Las Vegas shooting, videos surfaced wrongly suggesting that the government was behind it.

Sociologist Zeynep Tufekci of the University of North Carolina tweeted earlier this year that YouTube is “a giant radicalizing engine.”

“This situation is especially dangerous given how many people—especially young people—turn to YouTube for information,” she wrote in March in The New York Times.

Keeping You Watching

Worldwide, about 1.5 billion YouTube users watch 1 billion hours of content daily. The site is more popular among teens than Facebook, Instagram, and Snapchat, according to the Pew Research Center. Many, like McIntosh, look to YouTube for news updates, though the website itself doesn’t report any news—it only disseminates what users upload.

YouTube is designed to keep people online as long as possible. It does this by providing a steady stream of video recommendations. Artificial intelligence drives this process. An algorithm groups similar videos on YouTube together, then ranks the videos within each category based on popularity. That lets YouTube guess what you might like to see next based on your viewing history and on the behavior of other users on the site. Keeping people watching helps the company make money through the video ads shown on the site.
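
To make the idea concrete, here is a toy sketch in Python of the “group by category, rank by popularity, filter by watch history” approach described above. The video data, category labels, and scoring rule are all invented for illustration; YouTube’s actual system is proprietary and far more sophisticated.

```python
from collections import Counter

# Hypothetical catalog of videos. In this sketch, "views" stands in for
# popularity; a real system uses many more engagement signals.
videos = [
    {"id": "v1", "category": "gaming", "views": 2_500_000},
    {"id": "v2", "category": "gaming", "views": 900_000},
    {"id": "v3", "category": "news", "views": 4_000_000},
    {"id": "v4", "category": "news", "views": 1_200_000},
]

def recommend(watch_history, catalog, n=3):
    """Suggest the most popular unwatched videos from the categories
    the viewer already watches."""
    watched = {v["id"] for v in watch_history}
    category_counts = Counter(v["category"] for v in watch_history)
    candidates = [
        v for v in catalog
        if v["id"] not in watched and v["category"] in category_counts
    ]
    # Prefer categories the viewer watches most, then raw popularity.
    # Note that nothing in this ranking checks whether a video is accurate.
    candidates.sort(
        key=lambda v: (category_counts[v["category"]], v["views"]),
        reverse=True,
    )
    return candidates[:n]

history = [videos[0]]              # the viewer watched one gaming video
print(recommend(history, videos))  # suggests v2, the next most popular gaming video
```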

Why extreme content? Because it’s attention-grabbing, and if it’s related to a video you’ve already watched, it taps into what Tufekci says is our natural curiosity “to dig deeper into something that engages us.”

Artificial intelligence is limited in its ability to determine what’s true.

YouTube’s recommendations strongly influence what people watch, accounting for more than 70 percent of viewing time on the site, according to YouTube. Yet machine learning is limited in its ability to determine what is and isn’t valuable or true, says Christo Wilson, a computer science professor at Northeastern University in Boston. This means extreme, false, or propaganda-type videos might be recommended just as often as truthful content because they’re trending or have a lot of likes.
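
As a hypothetical illustration of Wilson’s point, consider a ranker driven only by engagement signals. The “accurate” field below exists only in this sketch; no such label is available to a real recommendation system, which is exactly the limitation he describes.

```python
# Two made-up videos about the same topic. The only signal the toy
# algorithm below sees is likes; the "accurate" flag is invisible to it.
videos = [
    {"title": "Apollo 11: how the moon landing happened", "likes": 8_000, "accurate": True},
    {"title": "PROOF the moon landing was staged", "likes": 55_000, "accurate": False},
]

# Rank purely by likes, just as a popularity-driven recommender would.
ranked = sorted(videos, key=lambda v: v["likes"], reverse=True)
print(ranked[0]["title"])  # the false but more-liked video comes out on top
```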

Others have noticed this problem too, like Guillaume Chaslot, a former employee of Google, which owns YouTube. The computer programmer, who has a Ph.D. in artificial intelligence, helped develop YouTube’s algorithm until 2013, when he was fired. Google says he was let go for poor performance, but Chaslot says it was because he raised concerns about the algorithm and the way it recommended extreme content. 

“I saw the problems that were coming,” he says.

You Be the Judge

YouTube has responded to these concerns by working to surface truthful content more often and false content less often. In July it announced a series of changes, including a breaking news section on its homepage with videos from legitimate news organizations. Search results also now include background information from Wikipedia and Encyclopedia Britannica to help counter misinformation about scientific achievements like the moon landing.

Yet misleading content isn’t going away. Collective user behavior determines which videos are trending, which means that extreme or false content can trend if it’s popular. And while the algorithm has improved, according to experts, it’s unclear how well the upgrades will address the problem. Ultimately, users have a responsibility to think critically about what’s showing up in their YouTube feeds (see “Finding Legit Content on YouTube,” below).

It’s not always easy when videos often blur the lines between what’s fact and what’s fiction. But Edwin Burgos, a 16-year-old sophomore at the Manhattan Center for Science and Mathematics in New York City, goes the extra mile to make sure the videos he’s watching are true. If he suspects a video isn’t entirely accurate, he looks at who uploaded the video and when, then searches on the web to find out more.

“People put fake videos on YouTube,” he says, “and I want to trust what I watch.”

Finding Legit Content on YouTube

Plenty of truthful, informative videos are available on YouTube. Make sure you’re accessing them by asking yourself these questions:

1. Why am I being shown this?
Remember that artificial intelligence is guessing what you want to see.

2. What makes it popular? 
Just because a video is trending or has a lot of likes doesn’t make it truthful.

3. Who uploaded it?
Investigate the user who posted the video. Legitimate news organizations fact-check their content. Others don’t always do the same.

4. Do reputable sources confirm it? 
Try to verify the information in the video by doing additional research on other websites.
