Fake or highly distorted news is nothing new. The earliest American newspapers were instruments of political parties that often printed lies about the opposition. In the 19th century, sensationalized, exaggerated, and sometimes outright false stories were published to sell newspapers and influence elections, a practice known as “yellow journalism.”
It was only in the 20th century that professional reporting standards of objectivity and accuracy became the norm for newspapers and news programs on radio and TV. That norm began to erode in recent years with the explosion of cable news and the growing popularity of openly partisan content. And social media has made it easier to find an audience for news that’s skewed toward one political viewpoint, isn’t fact-checked, or is simply made up.
“What has changed is not our penchant for creating this material, but our ease in sharing it,” says Littau. “It’s so much easier to pass this material on, and that means fake news stories have much greater reach than before. That’s what’s new, the ability to quickly infect each other with it.”
In some cases, fake news fuels what’s known as the “echo chamber”: people with similar beliefs and biases share only what fits their vision of the world. Psychologists trace this behavior to “confirmation bias,” the tendency to embrace information that supports what we already believe and dismiss information that challenges it.
After the presidential election, many people called for social media platforms and search engines to limit or ban questionable websites. Some tech companies, like Google and Facebook, vowed to prevent these sites from advertising, but so far they’ve stopped short of banning their articles from being seen and shared.
Facebook CEO Mark Zuckerberg said in November that the company was researching ways to make it easier to detect and report fake news. But he also noted the company’s philosophy of being an open forum: “We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible.”
The question of whether Facebook, Google, or any other company should decide what’s legitimate news has ignited a debate about free speech and censorship. These companies can legally limit what we see on their platforms because the First Amendment’s free speech protections prohibit only the government from censoring speech. But some media experts warn against putting tech firms in the role of vetting what their users are able to share.
“It would be a dangerous precedent if they were deciding what’s journalism,” says Anthony Adornato, a journalism professor at Ithaca College in New York. “I don’t think it’s their job to be the ultimate gatekeepers; the burden should be on well-informed people to make that decision.”
Experts say it will take a concerted effort by the public and the media to fix the problem of misinformation and slow the spread of fake news.
“Users on social media need to call out people who are sharing this stuff, and journalists need to continue to adhere to professional standards,” Adornato says. “It’s a team effort.”