Several tech giants recently banned or limited police use of their facial recognition products in response to nationwide protests calling for an end to police brutality and discrimination against Black Americans.
On June 8, IBM announced that it would stop producing facial recognition products and condemned their use in a letter to Congress. In the days following, Amazon barred police from using its facial recognition technology for one year, and Microsoft announced that it would not sell facial recognition products to police in the U.S. until a national law regulating their use was put in place.
Facial recognition products and similar technologies have come under fire because their algorithms can be biased against women and racial minorities. For example, a prominent software developer claimed the Apple Card’s credit limit algorithm was biased against women applying for credit. Researchers have also documented flaws in facial recognition itself, including inaccuracies and misidentifications, particularly for people of color.
In particular, the dangers of using facial recognition in the criminal justice system call into question whether tech companies like Amazon have done enough by banning police use of facial recognition for only one year.
“We know that the use of artificial intelligence in policing, and in the criminal justice system overall, has been highly problematic in the U.S.,” said Marie-Michelle Strah ’99, a visiting fellow in artificial intelligence and human rights at the Center for International Human Rights at John Jay College of Criminal Justice in New York.
Parts of the U.S. criminal justice system rely on the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, a software tool intended to assess the likelihood that a defendant will commit a crime in the future.
A study by ProPublica found that the algorithm falsely flagged Black defendants as likely to commit future crimes at almost twice the rate of white defendants.
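To make that kind of disparity concrete, here is a minimal sketch in Python of how a false positive rate can be compared across groups. The records, group names and numbers are invented for illustration; this is not ProPublica's data or methodology.

```python
# Hypothetical illustration of comparing false positive rates across groups.
# A "false positive" here is someone labeled high risk who did not reoffend.
# All records below are invented; this is not ProPublica's data or analysis.

def false_positive_rate(records):
    """Share of people who did not reoffend but were still labeled high risk."""
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    if not did_not_reoffend:
        return 0.0
    falsely_flagged = [r for r in did_not_reoffend if r["high_risk"]]
    return len(falsely_flagged) / len(did_not_reoffend)

records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"Group {group} false positive rate: {false_positive_rate(subset):.0%}")
```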
Biases in facial recognition systems also led to the wrongful arrest of Robert Julian-Borchak Williams, a case that drew national attention in June. The algorithm matched him to a still image of a shoplifter taken from surveillance video captured in October 2018. After falsely arresting him, police held Williams in custody for 30 hours before releasing him on a $1,000 personal bond.
Such misuse raises the question of who is at fault for the negative consequences of algorithmic bias. Companies like Amazon, IBM and Microsoft aren’t necessarily selling a software solution, but rather access to a cloud platform, according to Strah.
This distinction helps large tech companies claim plausible deniability: they can argue that they were unaware that other companies were using their platforms in biased ways.
For example, an investigative report from The Wall Street Journal claimed Amazon had altered its search algorithms to boost its own profits. In response, Amazon stated that profitability was only one of many factors it considers when displaying products to customers, and that it does not explicitly consider whether a product is an Amazon brand.
Strah said that because facial recognition is still in its infancy, it may seem beneficial right now, since its shortcomings have not yet been fully recognized. Because of this concern, she supported Microsoft’s position that standards and regulations are necessary before the benefits of facial recognition can be fully explored.
But to find solutions to these problems, tech developers must first identify the sources of bias in the technology.
“The possibility for bias in terms of race, gender, and socioeconomic status is built into any technology because technologies are just tools that are built by humans,” Strah said. “Any piece of software has the potential to exclude certain groups or be biased against certain groups.”
Strah said this ultimately means algorithms will be just as biased as the humans who designed them, contradicting the notion that artificial intelligence can eliminate human bias. She added that involving humans in the hiring process could actually reduce bias, because humans can stop, reflect and correct biases, whereas artificial intelligence cannot.
A study by the MIT Media Lab found that a widely used facial recognition data set was estimated to be more than 75 percent male and more than 80 percent white, leading to inaccuracies in distinguishing women and people of color, especially when it came to correctly determining the gender of darker-skinned women.
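For readers wondering how such an imbalance is measured, the short Python sketch below tallies the demographic composition of a face data set. The records and attribute names are made up for illustration; real audits such as the MIT study rely on carefully labeled benchmark data sets.

```python
from collections import Counter

# Made-up audit of a face data set's demographic balance. The records and
# attribute names are invented for illustration only.
faces = [
    {"gender": "male",   "skin_tone": "lighter"},
    {"gender": "male",   "skin_tone": "lighter"},
    {"gender": "male",   "skin_tone": "darker"},
    {"gender": "female", "skin_tone": "lighter"},
]

for attribute in ("gender", "skin_tone"):
    counts = Counter(face[attribute] for face in faces)
    total = sum(counts.values())
    for value, count in counts.most_common():
        print(f"{attribute} = {value}: {count / total:.0%}")
```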
“The ‘science of phrenology’ from 150 years ago has been completely debunked as being racist, biased and inaccurate,” Strah said.
Faulty behavioral science could also explain some of the inaccuracies in facial recognition. Part of the problem, Strah said, is that most facial recognition software relies on static images, which aren’t realistic. As people move about the world, factors like their posture, emotions and expressions vary.
“You’re making assumptions about people, about their expression, which is not how we interact and operate in the normal world,” Strah said.
Embedding these assumptions into technology introduces the possibility of misinterpretation on cultural grounds. For instance, Strah said that she waves her hands quite frequently while speaking and certain cultures could perceive such actions as aggressive. Similar to aggression, anxiety and ill intent could also present themselves differently depending on the race and culture of the individual.
“What does nervous look like? What does nervous look like in an African American man versus an Asian American woman?” Strah said.
Chinasa Okolo, a Ph.D. student in computer science, said there’s a phrase in computer science, “garbage in, garbage out,” meaning that low-quality input produces low-quality output. “We see that a lot of data sets that are being used to train these algorithms are not diverse themselves,” Okolo said.
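As a rough illustration of “garbage in, garbage out,” the hypothetical Python sketch below shows how a model’s overall accuracy can mask much worse performance on a group underrepresented in its training data. All predictions and labels are invented.

```python
# "Garbage in, garbage out," illustrated with invented numbers: overall
# accuracy can look acceptable while hiding poor performance on a group
# underrepresented in the training data.

def accuracy(pairs):
    """Fraction of (prediction, truth) pairs that match."""
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

results_by_group = {
    "well represented": [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1), (0, 0)],
    "underrepresented": [(1, 0), (0, 0)],
}

all_results = [pair for pairs in results_by_group.values() for pair in pairs]
print(f"overall accuracy: {accuracy(all_results):.0%}")
for group, pairs in results_by_group.items():
    print(f"{group}: {accuracy(pairs):.0%}")
```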
Beyond the quality of their data sets, algorithms are also susceptible to the prejudices of their developers.
“A lot of these algorithms that have been developed, whether they are in healthcare or policing or visual recognition, are basically including the biases of the people who develop them,” Okolo said.
Okolo said that these biases are not new, citing times when women were not allowed to get credit cards or to open bank accounts without their husbands’ permission. These biases are still being perpetuated, just through a different medium, according to Okolo.
While large technology companies like Amazon and Microsoft received some positive press for limiting or ending facial recognition sales, others argued that a complete ban on facial recognition tools is the only ethical solution.
“Essentially, I think it’s a great start,” Okolo said. “Maybe it’s not even necessary for these technologies to be developed in the first place. They’re exacerbating biases and also putting lives of marginalized people at risk.”