You open TikTok and are greeted by a hilarious video you can’t wait to share with your friends. By now, the app knows exactly which types of clips you enjoy and serves up a steady stream of them to keep you hooked.
Those videos, however, aren’t chosen for you by a human. TikTok employs a powerful computer algorithm to analyze user behavior. The more the technology is used, the “smarter” it gets: Every swipe, tap, and video viewed by TikTok users around the world—billions of data points a day—is fed into databases, which then help the system determine what will keep users’ attention.
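To make that feedback loop concrete, here is a minimal sketch in Python. It is a toy model, not TikTok's actual system: the topic labels, watch fractions, and helper names (update_affinity, recommend) are invented for illustration. The point is simply that the more a user watches clips of a certain type, the higher similar clips rank the next time around.

```python
# Toy illustration of a recommendation feedback loop (not TikTok's real system):
# each watch event raises a user's "affinity" for that clip's topic, and
# candidate clips are then ranked by how well they match those affinities.
from collections import defaultdict

def update_affinity(affinity, topic, watch_fraction):
    """Boost a topic's score based on how much of the clip the user watched."""
    affinity[topic] += watch_fraction

def recommend(affinity, candidates, top_n=3):
    """Return the candidate clips whose topics best match the user's history."""
    return sorted(candidates, key=lambda clip: affinity[clip["topic"]], reverse=True)[:top_n]

# Simulated viewing history: (topic, fraction of the clip actually watched)
history = [("pets", 1.0), ("cooking", 0.2), ("pets", 0.9), ("dance", 0.6)]

affinity = defaultdict(float)
for topic, watched in history:
    update_affinity(affinity, topic, watched)

candidates = [
    {"id": 1, "topic": "pets"},
    {"id": 2, "topic": "cooking"},
    {"id": 3, "topic": "dance"},
    {"id": 4, "topic": "pets"},
]

print(recommend(affinity, candidates))  # pet clips rise to the top
```

Because the viewer in this example finished the pet videos but skipped most of the cooking clip, pet clips dominate the next batch of recommendations, which is the self-reinforcing pattern the article describes.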
This is just one example of artificial intelligence (A.I.), or computer systems that perform tasks normally requiring human-like thought processes, such as making predictions, creating strategies, or recognizing what’s in an image or video.
Whether you realize it or not, you probably interact with A.I. regularly. Streaming sites such as Netflix and Spotify use A.I. to recommend content to users. Map apps use it to predict traffic. And digital voice assistants such as iPhone’s Siri and Amazon’s Alexa rely on it to understand our questions about trivia and the weather forecast.
A.I. can provide information or solutions faster than humans can, giving it the potential to help improve the world. But recently, the technology has come under fire. Facebook has faced intense criticism over revelations that it has long known the A.I. algorithms on its apps, including Instagram, can harm teenagers by feeding them content that makes them more anxious or depressed (see “The Furor Over Facebook”).
As A.I. systems are increasingly used for everything from sorting through college applications to controlling military weapons, many people are beginning to wonder: What does the future hold for A.I.? Should we be afraid that it will do more harm than good?