Could artificial intelligence (A.I.) help businesses make hiring easier? That was the goal of engineers at Amazon, who in 2014 began work on a new A.I. tool. It would bypass the biases and errors of human hiring managers by reviewing résumé data, ranking applicants, and identifying top talent.
Instead, though, the machine simply learned to make the kinds of judgments its creators sought to avoid. The tool’s algorithm was trained on data from Amazon’s hires over the prior decade—and since most of the hires had been men, the machine learned that men were preferable. It downgraded résumés that included the word “women’s,” as in “women’s basketball team captain,” and penalized graduates of all-women’s colleges.
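To see how that can happen mechanically, consider a toy sketch, written in Python with scikit-learn. The data here is entirely made up and this is not Amazon's actual system; it simply shows how a text classifier trained on skewed hiring outcomes can learn a negative weight for a word like "women's" just because it rarely appears among past hires.

```python
# Toy illustration (hypothetical data, not Amazon's system) of how a model
# trained on skewed historical hiring outcomes can learn to penalize a word.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past résumés and whether the applicant was hired (1 = hired).
# Because most past hires were men, résumés mentioning "women's" rarely
# show up among the positive examples.
resumes = [
    "software engineer chess club captain",                # hired
    "software engineer debate team captain",               # hired
    "software engineer robotics team",                     # hired
    "software engineer women's basketball team captain",   # not hired
    "software engineer women's college graduate",          # not hired
    "software engineer hiking club",                       # hired
]
hired = [1, 1, 1, 0, 0, 1]

# Bag-of-words features: the model sees only word counts, not intent.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)

model = LogisticRegression()
model.fit(X, hired)

# Inspect the learned weight for the token "women" (the default tokenizer
# splits "women's" on the apostrophe). A negative weight means the model
# downgrades any résumé containing that word.
idx = vectorizer.vocabulary_["women"]
print(f"weight for 'women': {model.coef_[0][idx]:+.2f}")
```

The model was never told anything about gender; it inferred the penalty purely from the pattern in its training data, which is exactly the trap the Amazon tool fell into.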
Although Amazon quietly scrapped the project, it was just one of many recent examples of human biases creeping into technology. We often think of artificial intelligence as a great equalizer, free from our unconscious preferences or attitudes, but researchers say that technology often simply reinforces our prejudices.
A.I. is increasingly being used in a wide range of fields: Businesses are relying on it in hiring; police departments are turning to it to help solve crimes; and health care providers are using it to determine what kind of treatment patients receive.
As our dependence on technology grows, many experts worry that the hidden biases in A.I. could affect everything from who gets hired for certain jobs to who gets admitted to college and even who gets arrested.
“What algorithms are doing is giving you a look in the mirror,” says Sandra Wachter, an associate professor in law and A.I. ethics at Oxford University in England. “They reflect the inequalities of our society.”