agsandrew/Shutterstock.com

The Rise of A.I.

Artificial intelligence is increasingly becoming a part of our everyday lives. Should we embrace the technology or fear it?

You open TikTok and are greeted by a hilarious video you can’t wait to share with your friends. By now, the app knows exactly which types of clips you enjoy and serves up a steady stream of them to keep you hooked.

Those videos, however, aren’t chosen for you by a human. TikTok employs a powerful computer algorithm to analyze user behavior. The more the technology is used, the “smarter” it gets: Every swipe, tap, and video viewed by TikTok users around the world—billions of data points a day—is fed into databases, which then help the system determine what will keep users’ attention.
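
To make that feedback loop concrete, here is a minimal sketch in Python of engagement-driven ranking: clips whose topics match what a user has already watched get pushed to the top. All names and data here are hypothetical, and TikTok's actual algorithm is proprietary and vastly more complex.

```python
# A toy, hypothetical sketch of engagement-driven ranking.
# TikTok's real system is proprietary and far more complex.
from collections import Counter

def recommend(candidate_videos, watch_history, top_n=3):
    """Rank candidate videos by how well their tags match past engagement."""
    # Count every tag from clips the user has already watched.
    interest = Counter(tag for clip in watch_history for tag in clip["tags"])
    # Score each candidate by summing the user's interest in its tags.
    def score(video):
        return sum(interest[tag] for tag in video["tags"])
    return sorted(candidate_videos, key=score, reverse=True)[:top_n]

# Example: a user who mostly watches comedy gets comedy ranked first.
history = [{"tags": ["comedy", "pets"]}, {"tags": ["comedy"]}, {"tags": ["cooking"]}]
candidates = [
    {"id": 1, "tags": ["news"]},
    {"id": 2, "tags": ["comedy"]},
    {"id": 3, "tags": ["pets", "comedy"]},
]
print([v["id"] for v in recommend(candidates, history)])  # [3, 2, 1]
```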

This is just one example of artificial intelligence (A.I.), or computer systems that perform tasks normally requiring human-like thought processes, such as making predictions, creating strategies, or recognizing what’s in an image or video.

Whether you realize it or not, you probably interact with A.I. regularly. Streaming sites such as Netflix and Spotify use A.I. to recommend content to users. Map apps use it to predict traffic. And digital voice assistants such as iPhone’s Siri and Amazon’s Alexa rely on it to understand our questions about trivia and the weather forecast.

A.I. can provide information or solutions faster than humans, and that means it has the potential to help improve the world. But recently the technology has come under fire. Facebook has faced intense criticism over revelations that it has long known that the A.I. algorithms on its apps, including Instagram, can harm teenagers by feeding them content that makes them more anxious or depressed (see “The Furor Over Facebook”).

As A.I. systems are increasingly used for everything from sorting through college applications to controlling military weapons, many people are beginning to wonder: What does the future hold for A.I.? Should we be afraid that it will do more harm than good?

‘Now would be a great time to stop and think about the progress we’re making.’

“Artificial intelligence is a young field that hasn’t really acquired wisdom yet,” says Sasha Luccioni, a researcher at Mila, an institute in Montreal studying A.I. systems. “Now would be a great time to stop and think about the progress we’re making.”

Scientists first began developing A.I. in the 1950s, when it was used for things like translating spoken language for the government. It’s come a long way since. These days, A.I. can beat humans in video games, write articles (though not ones as complex as this), control robots, pilot drones, evaluate college and job applications, drive cars (but not without some accidents), generate images of human faces indistinguishable from real ones—and much more.

Researchers say that although A.I. is useful, it’s still crude and cumbersome. Language written by computers often doesn’t make sense, for example. (Case in point: The winning entry in last year’s international A.I. Song Contest contained the lyrics “Do the cars come with push-ups?”) And with more complex tasks, such as operating self-driving cars, A.I. doesn’t yet function completely smoothly or safely.

“People don’t realize how hard it is to duplicate human reasoning and our ability to deal with uncertainty,” says Cade Metz, a New York Times reporter and author of a book about A.I. “A self-driving car can recognize what’s around it—in some ways, better than people can. But it doesn’t work well enough to drive anywhere at any time or do what you and I do, like react to something surprising on the road.”

Luke Sharrett/Bloomberg via Getty Images

Robots assemble a car at a factory in South Carolina.

Full of Potential

But that hasn’t stopped experts from dreaming up ways A.I. could make a difference. One big area where it could be of service: tackling climate change. Researchers say A.I. can be used in a variety of ways to help the planet deal with rising temperatures, from tracking animal populations and modeling how to slow biodiversity loss to predicting how wildfires will burn and designing more energy-efficient buildings.

It could improve lives in other ways too. A.I. can already detect tumors on X-rays before doctors can. And it can also be used to create new medicines and antibiotics. In fact, A.I. is assisting with Covid-19 vaccines by helping scientists understand the virus’s structure and tracking its mutations.

“The idea is to take things that can be, generally speaking, good for humankind and use A.I. to make them stronger or work better,” Luccioni says.

Unintentional Bias

There are costs, however, especially when A.I. is trying to learn patterns of human behavior. To do so, it collects massive amounts of data on consumers. That means your personal digital information—including what you’re looking at, who you’re talking to, and what you’re posting and purchasing online—could be added to databases used to develop or improve technology that predicts behavior. Privacy experts worry about those details being collected without consent.

That’s not the only aspect of A.I. that makes people nervous. Critics point out that A.I. is often biased against people of color, women, and those with disabilities. Although computers may seem objective, the humans who program the systems can unintentionally let their biases influence the technology. Consider facial recognition software, which uses A.I.: Studies by M.I.T. and the National Institute of Standards and Technology found that although facial recognition worked well on White men, the results were less accurate for everyone else, in part because the images used to train the system didn’t contain enough diversity.
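
As a rough illustration of how researchers surface this kind of bias, one common approach is to compute a system's accuracy separately for each demographic group in a test set rather than as a single overall number. The sketch below uses invented group labels and numbers; it is not the M.I.T. or NIST methodology itself.

```python
# Hypothetical sketch: checking a model's accuracy separately per group.
# Group labels and numbers are invented for illustration only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += predicted == actual
    return {group: correct[group] / total[group] for group in total}

# A system that looks accurate overall can still fail one group far more often.
results = (
    [("group_a", "match", "match")] * 98
    + [("group_a", "no_match", "match")] * 2
    + [("group_b", "match", "match")] * 70
    + [("group_b", "no_match", "match")] * 30
)
print(accuracy_by_group(results))  # {'group_a': 0.98, 'group_b': 0.7}
```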

45 million American workers could lose their jobs to automation by 2030.

That bias could lead to people being wrongly identified and punished. In 2019, a faulty facial recognition match led to the arrest of a New Jersey man for a crime he didn’t commit. He was the third person known to be wrongfully arrested based on facial recognition, and in all three cases, the people mistakenly identified were Black men.

“The problem with these A.I. systems is they are, in fact, not intelligent,” says Jon Callas, director of technology projects for the Electronic Frontier Foundation, a nonprofit digital rights group.

Perhaps one of the most universal fears about A.I. is that it will replace humans at work, as companies hire fewer workers when they rely on computers to get work done more efficiently. Some A.I. experts argue that while technology may displace some workers, it will spur economic growth and create more fulfilling jobs—but not everyone agrees. The consulting firm McKinsey & Co. predicts that 45 million U.S. workers will be displaced by automation by 2030.

New Regulations?

And then there’s the nightmare scenario: What if military weapons that utilize A.I. can decide on their own who lives and who dies? Countries including China, Russia, and the U.S. are developing autonomous weapon systems that don’t require human intervention to attack targets. Scientists say these so-called “killer robots” don’t exist yet. But a worrisome story made the news in 2020: A military drone that attacked soldiers in Libya’s civil war may have done so without human control, according to a report commissioned by the United Nations.

That doesn’t mean, however, that robots are inevitably going to make life-and-death decisions someday. Advocacy groups are pushing for international laws to ensure that humans remain in control. The global Campaign to Stop Killer Robots, for example, has proponents in more than 60 countries fighting for restrictions around the world.

Many A.I. researchers are also starting to consider the ethics of their work, Callas notes. That’s crucial, experts say, because if used thoughtfully, the technology could be a huge boon to society.

But to enjoy all the benefits of A.I., humanity will need to address the concerns about it. Experts say it would help if the teams working on new technology were more diverse. And many would also like to see laws or regulations put in place that restrict how A.I. can be used.

The U.S. has begun grappling with A.I. regulations.

Some governments have already begun contemplating how to handle the situation. In April 2021, the European Union proposed rules banning some uses of A.I. and regulating others. The U.S. has also started grappling with basic standards. Last year, the Federal Trade Commission warned against the sale of A.I. systems that use racially biased algorithms or ones that could deny people employment, housing, credit, insurance, or other benefits. Some states, including California and Washington, have also introduced bills to target algorithmic bias. Many young people are trying to make positive change too (see “Fighting for Fair A.I.,” below).

Ultimately, some A.I. researchers argue, governments will need to collaborate with tech companies to create legislation that’s truly effective.

“The people who’re making the decisions don’t necessarily understand the ins and outs of the technology,” Luccioni, the researcher, says. The current attempts to regulate A.I. are “a good idea, but there should be more effort to put scientists on the policy committees and get policy people to understand the technology. There’s a gap to be bridged.”

With additional reporting by Shira Ovide and Kevin Roose of The New York Times.

Courtesy of EncodeJustice.org

Sneha Revanur leads a virtual workshop on A.I. ethics.

Fighting For Fair A.I.

A group of youth activists wants to have a say in the future of artificial intelligence

In 2020, Sneha Revanur, a high school student in San Jose, California, found out her state was contemplating a controversial ballot measure. If it passed, judges would determine whether to hold someone in jail before their court date by using risk-assessment software to predict whether a person was likely to show up for court or get arrested again.

Sneha, now 17, was concerned that this would contribute to racial biases. She began working with other teens who opposed the legislation, raising awareness in their community through informational sessions, social media posts, and more.

The measure was ultimately defeated—and the experience encouraged the group, which became known as Encode Justice, to begin pushing for more ethical A.I. all over the world. Since then, the group has grown to more than 300 members in 25 countries, doing everything from lobbying politicians about facial recognition to teaching virtual workshops on A.I. ethics to more than 3,000 high school students. A.I. isn’t inherently evil, members argue, but it needs to be well-regulated and created by a diverse workforce.

‘We’re the generation most directly impacted.’

“If these algorithms were programmed for good, they could be used for good,” Sneha says.

She also believes it’s especially important for youth activists to get involved.

“We’re the generation most directly impacted by these technologies and also the generation that has the least say in how they’re developed or regulated,” she says. “If we don’t have a seat at the table, we don’t have input in how these technologies—that are going to shape our entire reality—are being used and developed.”
