
Fighting Fake News

Made-up stories are taking over the internet. Are tech companies doing enough to stop the spread?

The results were disturbing. Late last year, when Google users typed the phrase "Did the Holocaust happen?" into the search engine, the first hit was a story—posted on a website run by a hate group—falsely stating that the mass murder of millions of Jews in Europe during World War II (1939-45) never occurred.

That lie—and the fact that it was the top result on the world’s most popular search engine—highlighted the growing problem of fake news. Misinformation published online often spreads faster than it can be challenged. Bogus stories shared extensively on social media are then ranked high by Google and other search engines, making them easier to find—and increasing people’s sense of their credibility.

“Just because something is in the top five of your search results doesn’t mean it’s reliable,” says Jonathan Anzalone of the Center for News Literacy at Stony Brook University in New York.

The information may be made up, but it can have real-world consequences. In December 2016, a man was arrested at a pizzeria in Washington, D.C., after firing a rifle inside the eatery. He claimed he was “self-investigating” a story that the restaurant had connections to a human trafficking ring linked to the Democratic Party. The story was fake.

Since then, tech companies have come under fire for not doing enough to address the phony stories. Now Google and Facebook—among the biggest distributors of fake news—are rolling out strategies to combat it.

Google & Facebook

Google recently announced plans to minimize the reach of fake news in its searches. The tech giant has assigned more than 10,000 employees to electronically flag articles containing misleading information. This ensures they’re ranked lower in Google search results and that users can see that they contain highly suspect or false information.

For example, a recent, widely shared story claimed that 300,000 pounds of rat meat were being sold as chicken wings across the U.S. That article is now tagged as false. In addition, a link directs users to a fact-check of the story by the nonpartisan site PolitiFact.com. 

Facebook is launching its own fact-check tool, and the site is working to delete phony accounts run by “bots”—or web robots—that automatically “like” and share fake news. Facebook has also experimented with a more old-fashioned approach to stop misinformation: Before recent elections in Great Britain, the site placed ads in local newspapers about how to spot fake news.

‘Attracting Eyeballs’

Many experts, however, say the tech companies aren’t doing enough.

“[Tech companies] make money by attracting and keeping eyeballs,” says Matthew A. Baum, a public policy professor at Harvard University in Massachusetts. “They talk and talk and do very little. And the stuff they are doing is, at best, marginally effective.”

One problem, Baum says, is that fact-checking individual articles takes too much time to slow fast-spreading fake news. Instead, internet platforms should evaluate the credibility of websites as a whole. Any content from a site that consistently publishes false stories should rank low in search results—and rise only if the entire site starts producing more reliable information, he says.

Making fake news sites harder to find is key, Baum says, because labeling stories as false doesn’t necessarily stop people from believing them.

“We tend to accept something as true the more we encounter it,” he says. So if you read a story that’s been labeled false, “you might forget it was declared bogus and just remember that you saw it.” If you see that story again later, you’re more likely to fall for it, Baum explains. 

Tech companies should also be more aggressive about deleting fake accounts, according to Baum. Facebook recently deleted 30,000 such accounts, but in 2013, the company estimated that it had as many as 138 million phony accounts.

Educating Readers

For now, the best tool for stopping fake news may be educating people to be more skeptical about what they read (see "What You Can Do"). Washington state passed a law last spring that encourages public schools to offer media literacy classes. Several other states are considering similar legislation.

The classes teach students how to analyze information from websites, TV, and other forms of media. Anzalone, at the Center for News Literacy, says they also help students recognize their natural biases.

“We don’t like to receive information that may conflict with what we believe,” he says. So when we read something that affirms what we think—even if it’s wrong—we’re more likely to accept it. That’s why we need to think critically about what we’re reading, he says. The end goal is about more than being able to spot a made-up story, says Anzalone.  “It’s about valuing and seeking truth—and knowing how to find it.”
