The Hidden Biases in A.I.

Is technology helping to make society more equal or simply reinforcing our prejudices?

Could artificial intelligence (A.I.) help businesses make hiring easier? That was the goal of engineers at Amazon, who in 2014 began work on a new A.I. tool. It would bypass the biases and errors of human hiring managers by reviewing résumé data, ranking applicants, and identifying top talent.

Instead, though, the machine simply learned to make the kinds of judgments its creators sought to avoid. The tool’s algorithm was trained on data from Amazon’s hires over the prior decade—and since most of the hires had been men, the machine learned that men were preferable. It downgraded résumés that included the word “women’s,” as in “women’s basketball team captain,” and penalized graduates of all-women’s colleges.
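To see how that can happen, here is a bare-bones sketch in Python of a résumé scorer trained the same way, on past hiring outcomes. All of the keywords, outcomes, and the scoring rule below are invented for illustration; Amazon's actual system was far more sophisticated, but the failure mode is the same: a pattern in historical data quietly becomes a rule.

```python
from collections import Counter

# Invented toy data: past résumés (as keyword sets) and whether the
# candidate was hired. Like Amazon's real data, the outcomes skew male.
history = [
    ({"java", "captain", "chess"}, True),
    ({"python", "captain", "football"}, True),
    ({"java", "debate"}, True),
    ({"python", "women's", "chess"}, False),
    ({"java", "women's", "college"}, False),
]

appearances = Counter()
hires = Counter()
for keywords, was_hired in history:
    for word in keywords:
        appearances[word] += 1
        hires[word] += was_hired  # True counts as 1, False as 0

def score(keywords):
    """Score a résumé by the average historical hire rate of its keywords.

    Nothing here mentions gender explicitly. But "women's" ends up with a
    hire rate of zero purely because past hires skewed male, so the
    algorithm learns the bias in the data, not candidate ability.
    """
    rates = [hires[w] / appearances[w] for w in keywords if w in appearances]
    return sum(rates) / len(rates) if rates else 0.5

print(score({"java", "captain", "chess"}))   # higher: resembles past hires
print(score({"java", "women's", "chess"}))   # lower: "women's" drags it down
```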

Although Amazon quietly scrapped the project, it was just one of many recent examples of human biases creeping into technology. We often think of artificial intelligence as a great equalizer, free from our unconscious preferences or attitudes, but researchers say that technology often simply reinforces our prejudices.

A.I. is increasingly being adopted in a wide range of fields: Businesses are using it in hiring; police departments are turning to it to help solve crimes; and health care services are employing it to determine what kinds of treatment patients receive.

As our dependence on technology grows, many experts worry that the hidden biases in A.I. could affect everything from who gets hired for certain jobs to who gets admitted to college and even who gets arrested.

“What algorithms are doing is giving you a look in the mirror,” says Sandra Wachter, an associate professor in law and A.I. ethics at Oxford University in England. “They reflect the inequalities of our society.”


Siri, Google & Facial Recognition

An algorithm is a set of instructions that tells a computer what to do. Biases in technology exist, experts say, because all algorithms are written by humans, who have their own prejudices—whether they’re conscious or unconscious.

Take voice recognition software, like Siri and Alexa, for example. In a study released last year, researchers from Stanford University used these tools to transcribe interviews with people in the U.S. The systems identified words correctly about 80 percent of the time during interviews with white people. But they were far less accurate at understanding Black people, identifying words correctly only about 65 percent of the time.

The researchers say that’s probably because these programs were trained using many more voices of white people than Black people, so they weren’t able to pick up on the nuances in the ways people from different backgrounds speak.
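The gap itself is straightforward to measure: transcribe the same kinds of sentences for each group of speakers and count how many words come back correct. The sketch below does this with invented one-line transcripts; studies like the Stanford one average error rates over thousands of real recordings.

```python
# Rough sketch of measuring a transcription accuracy gap between groups.
# The transcripts are invented; real studies average word error rates
# over thousands of recorded snippets per group.

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Fraction of reference words the transcription got right, in order."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    correct = sum(r == h for r, h in zip(ref_words, hyp_words))
    return correct / len(ref_words)

samples = {
    "speakers the system was trained on": [
        ("the meeting starts at noon", "the meeting starts at noon"),
        ("please call me back today", "please call me back today"),
    ],
    "speakers it rarely heard in training": [
        ("the meeting starts at noon", "the meetings start at new"),
        ("please call me back today", "please fall me back to day"),
    ],
}

for group, pairs in samples.items():
    scores = [word_accuracy(ref, hyp) for ref, hyp in pairs]
    print(f"{group}: {sum(scores) / len(scores):.0%} of words correct")
```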

One A.I. tool civil liberties advocates are particularly concerned about is facial recognition. Police are increasingly using it to solve crimes they say would have gone unsolved in the past. These systems help identify perpetrators by matching a suspect’s photo to images gathered from a range of sources, including Facebook, Instagram, and driver’s license databases.
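Under the hood, systems like these typically reduce each face to a list of numbers, called an embedding, and flag the database entry most similar to the suspect's photo, provided the similarity clears a threshold. The sketch below uses made-up three-number embeddings; real systems derive much longer vectors from deep neural networks.

```python
import math

# Minimal sketch of a facial recognition match. Each face is reduced to a
# list of numbers (an "embedding"); the suspect's embedding is compared
# against everyone in the database. All vectors here are made up.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

database = {            # embeddings computed from photos on file
    "person_1": [0.9, 0.1, 0.3],
    "person_2": [0.2, 0.8, 0.5],
    "person_3": [0.4, 0.4, 0.9],
}

def best_match(suspect, threshold=0.95):
    """Return the closest database entry, or None if nothing clears the bar.

    The threshold is the crucial knob: set it too low, or train the
    embeddings on too narrow a set of faces, and the system starts
    returning confident-looking false matches.
    """
    name, sim = max(
        ((n, cosine_similarity(suspect, emb)) for n, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return (name, round(sim, 3)) if sim >= threshold else None

print(best_match([0.85, 0.15, 0.35]))  # close to person_1: a match
print(best_match([0.5, 0.5, 0.5]))     # ambiguous face: no match returned
```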



But many facial recognition systems are racially biased, according to a report released in 2019 by the National Institute of Standards and Technology. The federal agency found that these systems falsely identified Black and Asian faces up to 100 times more than white faces—probably the result of being trained mostly on images of white people. Bad facial recognition matches have already led to the documented arrests of three innocent people in the U.S.—all of them Black men.

Skewed algorithms aren’t the only reason for biased tech, though. Some A.I. tools learn to mirror the prejudices of the people using them. For instance, a 2015 study found that in a Google Images search for “CEO,” just 11 percent of the people displayed were women, even though 27 percent of the chief executives of U.S. businesses are female. Tech experts say that’s probably because people searching for images of CEOs click on images of men more often than on images of women. So the computer learns to show more male photos, reinforcing old stereotypes about women’s roles in the workplace.
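That feedback loop is easy to simulate. In the sketch below, every number is invented: search results start out skewed toward men, users click male photos slightly more often, and the ranker keeps promoting whatever gets clicked, so the skew compounds round after round.

```python
import random

random.seed(0)

# Illustrative simulation of the click feedback loop. All numbers are
# invented. Results start out skewed toward male photos; users click male
# photos slightly more often; the ranker promotes whatever gets clicked.

shown_share = {"male CEO photo": 0.73, "female CEO photo": 0.27}
clicks = {"male CEO photo": 1, "female CEO photo": 1}  # start at 1 to avoid 0/0

def simulate_round(n_users=1000, male_click_boost=1.2):
    """One round: users see photos in the current proportions and click;
    the ranker then resets the proportions to match accumulated clicks."""
    for _ in range(n_users):
        photo = random.choices(list(shown_share),
                               weights=list(shown_share.values()))[0]
        base_rate = 0.10  # 10 percent of shown photos get clicked
        rate = base_rate * (male_click_boost if photo == "male CEO photo" else 1.0)
        if random.random() < rate:
            clicks[photo] += 1
    total = sum(clicks.values())
    for photo in shown_share:
        shown_share[photo] = clicks[photo] / total

for round_number in range(1, 6):
    simulate_round()
    share = shown_share["female CEO photo"]
    print(f"round {round_number}: female photos are {share:.0%} of results")
```

Run it and the female share of results drifts steadily downward, even though the underlying click bias is small: the ranker amplifies whatever imbalance it starts with.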


Fixing A.I.

Experts say one way to improve A.I. is to make the teams working on new technology more diverse. Women currently hold fewer than 25 percent of the technical roles at major tech companies, such as Facebook. Black and Hispanic people are also very underrepresented in these jobs (see graphs, below).

Some advocates are also calling for more government regulation to force companies to be more transparent about how they create their algorithms.

“It’s unrealistic to assume that we’ll ever have a neutral system,” says Wachter, the Oxford professor. “But with the right systems in place, we can mitigate some of the biases.”


With reporting by Corinne Purtill and Jamie Condliffe of the Times.
