It was the deadliest attack on Jews in American history: In October, a gunman armed with an AR-15-style assault rifle and at least three handguns barged in on Saturday morning prayers at Tree of Life Synagogue in Pittsburgh, killed 11 members of the congregation, and wounded four police officers and two others.
As the nation mourned the tragedy, the hate that had spawned it continued to swell on social media, with anti-Semitic videos and images surging on Instagram. Just two days later, a search for the word "Jews" revealed 11,696 posts with the hashtag #jewsdid911, falsely claiming that Jews had orchestrated the September 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology.
The Instagram posts demonstrate a stark reality. Social media sites have given people from all over the world a platform to make their voices heard—and that includes white supremacists, neo-Nazis, and other extremists, who are increasingly using these networks as megaphones to spread disinformation and hate speech.
“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” says Jonathan Albright of Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”
In the wake of the Pittsburgh shooting, many are now calling on social media companies to do more to regulate hate speech. Twitter, YouTube, and Facebook (which owns Instagram) have already invested millions of dollars in trying to identify and remove such speech. But determining how to weed out hate speech and disinformation while also protecting free speech has been a challenge.