October 7, 2019

DELGADO | Artificial Intelligence’s Exclusivity Issue


Humans have created systems to simplify global problem-solving and expedite learning for almost a century. Some industry leaders cite artificial intelligence as the next big breakthrough in human technological evolution; detractors claim that AI poses a unique range of challenges. Tesla CEO Elon Musk has warned of the potential dangers of AI and of how future overreliance on it could lead to the downfall of human creativity, referring to humanity as the “biological boot loader” for computer programming. Sure, the prospect that AI could outpace human minds and eventually destroy our world is a concern, but there is one problem with AI that everyone is beginning to see right now: mass discrimination.

AI reflects systemic social biases when it is trained on biased samples and developed by biased researchers. In the 1980s, a medical school in the U.K. received an overwhelming number of applicants. In response, it built its own computer algorithm to select students based on the characteristics it believed would make them the most qualified attendees. After the program had been run and students admitted, researchers who examined it found that the system discriminated against women and people of color.

In 2015, Google’s Photos application, which automatically labels pictures, categorized several African Americans as gorillas. Users reported that Nikon’s cameras flagged Asian Americans as blinking in their photos. A 2015 Carnegie Mellon study determined that Google showed women fewer ads for higher-paying jobs than it showed men. Many companies that encounter these issues claim they are unintentional, yet their leaders expect their AI systems to remain proprietary. Many of the people these systems discriminate against are not even aware of it because of the opacity of the decision-making.

What Can Organizations Do?

Organizations can increase the transparency of the development process by documenting, from the beginning of a system’s lifecycle, how they mitigate the bias fed into it. Linking humans and machines to learn alongside each other may reveal the ways we are “partial, parochial and cognitively biased, leading us to adopt more impartial or egalitarian views.” Systemic bias can be minimized, if not eliminated, by allowing the machine to learn from a diverse sample and by making the necessary corrections in our own thinking.
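In practice, “learning from a diverse sample” can begin with something as simple as controlling who is represented in the training data. The following is a minimal sketch in Python, with hypothetical field names, of drawing an equal number of training examples from each demographic group rather than letting the largest group dominate; it illustrates the idea, not any particular company’s pipeline.

```python
import random
from collections import defaultdict

def balanced_sample(records, group_key, per_group, seed=0):
    """Draw the same number of examples from every group so that no
    single population dominates the training set."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for record in records:
        by_group[record[group_key]].append(record)
    sample = []
    for group, members in sorted(by_group.items()):
        if len(members) < per_group:
            # Too few examples is a data-collection problem; surface it
            # rather than silently training around the gap.
            raise ValueError(f"only {len(members)} examples for group {group!r}")
        sample.extend(rng.sample(members, per_group))
    rng.shuffle(sample)
    return sample

# Hypothetical usage: "group" could encode gender, ethnicity, etc.
applicants = [{"group": "A", "score": 91}, {"group": "B", "score": 88},
              {"group": "A", "score": 75}, {"group": "B", "score": 95}]
training_set = balanced_sample(applicants, group_key="group", per_group=2)
```

Raising an error on an under-represented group, rather than quietly sampling what is available, is the documentation step in miniature: the gap in the data becomes visible instead of being baked into the model.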

University of Amsterdam Prof. Frederik Zuiderveen Borgesius, law, argued that using artificial intelligence in fields like policing and criminal justice, where bias is already heavily present, poses unique challenges. A system that categorizes individuals along racial lines, for example, while ignoring socioeconomic realities such as poverty, upbringing and home environment, could make the same mistakes that lead to police officers taking innocent lives.

IBM holds that avoiding bias, and thereby preventing any form of discrimination, is a crucial principle for humanity. To that end, it has developed transparent systems that allow current and future AI networks to be graded for bias. IBM also aims to teach machines to apply certain human virtues, such as empathy, when making judgments. This idea of a humane artificial intelligence could address the concerns raised by Prof. Borgesius and others.
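The column does not name these systems, but IBM’s open-source AI Fairness 360 toolkit, released in 2018, is one example of this kind of bias “grading.” Here is a minimal sketch of how a dataset might be checked with it, assuming a pandas DataFrame with a binary outcome and a binary protected attribute, both hypothetical here:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical admissions data: admitted 1 = favorable outcome,
# gender 1 = the privileged group in this toy example.
df = pd.DataFrame({
    "admitted": [1, 1, 1, 0, 1, 0, 0, 0],
    "gender":   [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["admitted"],
    protected_attribute_names=["gender"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates between the
# groups; 1.0 means parity, and values below roughly 0.8 are a common
# red flag borrowed from U.S. employment law.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```

A score like this does not fix a biased system, but it makes the bias measurable and auditable, which is the transparency the paragraph above calls for.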

AI, in the limited capacity in which we use it every day, has already proven ineffective at filtering out the results of human bias. As a society, we have to decide what moral price we are willing to pay for innovations in technology. Past decisions to seek technical progress at any cost have led to the destruction of ecosystems through pollution and the exploitation of marginalized groups through aggressive, unregulated capitalism.

Future AI development requires a clear-eyed analysis of our society. We are not ready to create a form of pseudo-life if we cannot be wise stewards of it. The idea is especially problematic if we will be expected to lean on artificial intelligence to make decisions with a potent impact on society. Our own leaders sometimes fail to meet our expectations and are often unafraid to ignore or offend larger groups for the presumed benefit of smaller, more privileged ones. I believe a massive societal overhaul may be required to produce a new form of intelligence that does not perpetuate existing power structures such as sexism, racism, homophobia and economic discrimination.

In the short term, increased diversity in STEM will help to slowly mitigate the problems present in AI. Exposing face-finder technology to more people of color during development will prevent particularly offensive errors. Limiting the extent to which AI systems make decisions for law enforcement will help avoid assumptions about recidivism and criminality that lead to deaths. However, these fixes ultimately slap a band-aid on larger human issues, ones that are only becoming visible to the most privileged members of society as they attempt to create almost-human minds of their own.

Canaan Delgado is a junior in the College of Agriculture and Life Sciences. He can be reached at [email protected]. No Church in the Wild appears every other Tuesday this semester.