Thousands rally in Pittsburgh in October after the shooting at Tree of Life Synagogue.

 Aaron Jackendoff/SOPA Images/LightRocket via Getty Images

Is Social Media Fueling Hate?

The recent mass shooting at a synagogue in Pittsburgh has brought public attention to hate speech on social media

It was the deadliest attack on Jews in American history: In October, a gunman armed with an AR-15-style assault rifle and at least three handguns barged in on Saturday morning prayers at Tree of Life Synagogue in Pittsburgh, killed 11 members of the congregation, and wounded four police officers and two others.

As the nation mourned the tragedy, the hate that had spawned it continued to swell on social media, with anti-Semitic videos and images surging on Instagram. Just two days later, a search for the word Jews revealed 11,696 posts with the hashtag #jewsdid911, falsely claiming that Jews had orchestrated the September 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology.

The Instagram posts demonstrate a stark reality. Social media sites have given people from all over the world a platform to make their voices heard—and that includes white supremacists, neo-Nazis, and other extremists, who are increasingly using these networks as megaphones to spread disinformation and hate speech.

“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” says Jonathan Albright of Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”

In the wake of the Pittsburgh shooting, many are now calling on social media companies to do more to regulate hate speech. Twitter, YouTube, and Facebook (which owns Instagram) have already invested millions of dollars in trying to identify and remove such speech. But determining how to weed out hate speech and disinformation while also protecting free speech has been a challenge.


Enabling Extremists?

Many experts say that social media is helping to fuel real violence. Anti-Semitic incidents are on the rise in the U.S., according to the Anti-Defamation League (A.D.L.). In 2017, the A.D.L. reports, there were 1,986 anti-Semitic incidents, such as harassment, vandalism, and physical assaults. That’s an increase of almost 60 percent from the year before—the largest jump in the U.S. since the A.D.L. started keeping track in 1979.

Hate crimes against other minority groups are also increasing. A recent report by the Federal Bureau of Investigation (F.B.I.) found that hate crimes rose 17 percent last year. 

A separate report, by the Center for the Study of Hate and Extremism, found that hate crimes in the nation’s largest cities rose by 12.5 percent last year.

Social media has given extremists a megaphone.

“Social media companies have created, allowed, and enabled extremists to move their message from the margins to the mainstream,” says Jonathan A. Greenblatt, chief executive of the A.D.L. “In the past, they couldn’t find audiences for their poison. Now, with a click or a post or a tweet, they can spread their ideas with a velocity we’ve never seen before.”

Indeed, recent incidents have shown how hate speech on social media can turn into physical violence in the real world. Robert Bowers, the suspect in the Pittsburgh synagogue massacre, had posted about his hatred of Jews on social media. And just days before the shooting, police arrested Cesar Sayoc Jr., charging him with sending explosive devices to prominent Democrats, including Barack Obama and Hillary Clinton. Sayoc appears to have been radicalized by partisan posts on Twitter and Facebook.

The effects of hate speech on social media have also been evident globally. High-ranking members of the Myanmar military have used doctored messages on Facebook to foment anxiety about and fear of the Muslim Rohingya minority group. And in India, false stories on WhatsApp about child kidnappings led mobs to murder more than a dozen people this year.

Policing Hate Speech

In the U.S., the First Amendment to the Constitution prevents the government from punishing or censoring speech—even many forms that may be considered hate speech. But the First Amendment doesn’t prevent private companies, including social media sites, from doing so.

Facebook uses artificial intelligence (AI) and about 20,000 employees to weed out hate speech, as well as fake news. But clamping down on hate speech has proved difficult. In the first three months of 2018, Facebook took down 2.5 million pieces of hate speech, but only 38 percent of those posts had been flagged by its AI; the rest had to be reported to the company by other users.
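The two-stage process described above can be sketched in a few lines of code. This is a purely hypothetical illustration, not Facebook’s actual system: the function names, the toy word-matching “classifier,” and the threshold are all invented for the example. It shows the general shape of the pipeline — an automated score catches some posts, and anything the model misses is removed only if other users report it.

```python
# Hypothetical two-stage moderation triage. The "classifier" here is a
# toy stand-in (banned-word matching); real systems use trained models.

def classifier_score(post: str) -> float:
    """Toy scoring rule: fraction of words on a placeholder banned list,
    scaled up and capped at 1.0."""
    banned = {"slur1", "slur2"}  # placeholder terms, not a real list
    words = post.lower().split()
    hits = sum(w in banned for w in words)
    return min(1.0, hits / max(len(words), 1) * 5)

def triage(posts, user_reports, threshold=0.5):
    """Return indices of posts removed via automated flagging versus
    those caught only because users reported them."""
    ai_flagged, user_flagged = [], []
    for i, post in enumerate(posts):
        if classifier_score(post) >= threshold:
            ai_flagged.append(i)       # caught by the model
        elif i in user_reports:
            user_flagged.append(i)     # missed by the model, reported
    return ai_flagged, user_flagged
```

The gap the article describes — only 38 percent of removals flagged automatically — corresponds to the second branch doing most of the work: a majority of removed posts score below the threshold and are caught only through `user_reports`.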

A big problem social media companies face is defining what constitutes hate speech in the first place. Facebook has been criticized over the years by both conservatives and liberals who say it has discriminated against certain viewpoints.

Many say the rise in hate speech on social media is the result of a political atmosphere that’s becoming increasingly hostile. Critics of President Trump point out that he often insults his opponents on Twitter and accuse him of provoking anger by making derogatory statements about immigrants and other minorities. The president, however, says he wants to heal America’s political divide.

Meanwhile, as mainstream social media companies crack down on hate speech, many extremists are airing their views in darker corners of the web. For example, Bowers, the suspect in the synagogue shooting, posted anti-Semitic conspiracy theories on Gab, a two-year-old social network that bills itself as a “free speech” alternative to other social media sites. Experts say Gab had become a haven for conspiracy theories and hate speech. The site went offline after the shooting but vowed to return.

Many experts say more needs to be done to restrict hate speech on social media before it has more real-world consequences.

“There needs to be both better technology and better policy around issues related to hate speech,” says Joan Donovan, who studies media for the Data & Society Research Institute, “especially when it relates to the condemnation of marginalized groups.”

With reporting by Sheera Frenkel, Mike Isaac, and Kate Conger of The New York Times.

By the Numbers

Hate Speech on Social Media

4.2 million

NUMBER of anti-Semitic posts on Twitter in 2017.

Source: Anti-Defamation League

72%

PERCENTAGE of Americans who think social media companies should remove hate speech.

Source: Freedom Forum Institute

8 million

NUMBER of videos removed from YouTube for inappropriate content during the first three months of 2018.

Source: YouTube

How You Can Stop Online Hate

Consider the Consequences: Before you post on social media, ask yourself this question: Are you sure? Consider how you’d feel if the post ended up causing harm offline.

Label the Speech: If someone you know posts something offensive, let that person know that the language is hurtful or dangerous and explain why. That person might react defensively, but at least you made him or her think.

Change the Tone: Instead of fighting online hate with more hate, de-escalate by showing empathy, finding common ground, and responding with kindness.

Report It: You can report hate speech on social media. Also, tell an adult if a post makes you feel unsafe.
