February 28, 2021

CHEN | Could AI Really Take Over the World?


When we hear the buzzword ‘Artificial Intelligence,’ we often think of Ultron infiltrating the Avengers or targeted ads brainwashing us into unwittingly spending hundreds of dollars. AI seems like a cold, calculating network, able to simulate the human brain without any of the emotion, making it a smarter, unstoppable version of us. But after taking CS 4700: Foundations of Artificial Intelligence, I can confidently tell you that these world-overtaking robots are far from the present.

Before taking the course, I shivered at the idea of my kitchen appliances transforming into robots and taking over the world, or of Sophia, the human-like android who sits right in the middle of the uncanny valley. What I actually uncovered in CS 4700, however, was a ton of numbers and equations. AI is essentially a translation of human decision-making into numbers: positive values for rewards and goals, negative values for penalties and failures. The only question left to solve is which choices maximize the reward. It’s all purely quantitative. To me now, artificial intelligence is just math.
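
To make that concrete, here is the whole “decision-making” idea boiled down to a few lines of Python (a toy sketch of my own, not code from the course): give each outcome a number, then pick whichever action scores highest.

```python
# A toy sketch (my own example, not course material): "deciding" by assigning
# numbers to outcomes and choosing the highest-scoring action.

# Hypothetical rewards for a trivial breakfast-making agent.
rewards = {
    "make coffee": 5,      # positive number: a goal we want
    "burn the toast": -3,  # negative number: an outcome we want to avoid
    "do nothing": 0,
}

def choose_action(rewards):
    """Return whichever action has the highest numeric reward."""
    return max(rewards, key=rewards.get)

print(choose_action(rewards))  # prints "make coffee"
```

Real systems juggle far more numbers than this, but the principle is the same: optimize a score.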

These calculations attempt to slap positive numbers on actions that yield positive outcomes (and vice versa), but many decisions can’t be reduced to black and white, to positive and negative values. Our sixth sense for emotion, morality and personality introduces a whole other dimension, one that computer scientists are still attempting to simulate today. In our day-to-day lives, there are thousands of nuances shaping our choices that computers just aren’t able to see yet.

For example, Facebook has been trying to filter hate speech for the past few years, but some posts undoubtedly fall through the cracks. In the time the company has spent training machines to detect harmful messages, humans have been filtering through posts one by one. Whether the programs lack the right data or the algorithms for detecting hateful content are simply wrong, accurately flagging hate speech with AI still needs a human hand.
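
To see why nuance is so hard, consider a deliberately naive filter (a toy example of my own, nothing like the systems Facebook actually runs). It flags any post containing a blocklisted word, so it trips over a harmless movie review while waving through hostility that never uses a forbidden word.

```python
# A deliberately naive hate-speech "detector" (my own toy, not Facebook's):
# it only checks for blocklisted words, so it misses context entirely.

BLOCKLIST = {"idiot", "trash"}

def flags_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted word."""
    return bool(set(text.lower().split()) & BLOCKLIST)

print(flags_post("That movie was trash"))               # True: a harmless opinion gets flagged
print(flags_post("People like you don't belong here"))  # False: a hostile post slips through
```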

Some applications of AI even take on the biases of the people and data behind them, making them flat-out harmful to use in practice. Take Amazon’s recently ditched recruiting tool. Designed to streamline the resume screening process, this AI ended up reflecting the male dominance of the tech industry: it penalized resumes that included words alluding to the candidate being female. While Amazon attempted to edit the program to remove the bias, there were still doubts about how fairly it treated women applicants. As John Jersin, vice president of LinkedIn Talent Solutions, stated, “I certainly would not trust any AI system today to make a hiring decision on its own … the technology is just not ready yet.”
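
Here is roughly how that kind of bias creeps in (a toy illustration of my own, not Amazon’s actual tool): if a scorer weighs words by how often they appeared in past hires versus past rejections, it inherits whatever skew those past decisions contained.

```python
# Toy illustration (my own, not Amazon's tool): a word-weight scorer trained on
# skewed historical decisions learns the skew, not the candidate's merit.
from collections import Counter

# Hypothetical past decisions: (words on the resume, whether the person was hired)
history = [
    (["software", "chess", "captain"], True),
    (["software", "women's", "chess"], False),
    (["engineering", "women's", "club"], False),
    (["engineering", "robotics"], True),
]

hired, rejected = Counter(), Counter()
for words, was_hired in history:
    (hired if was_hired else rejected).update(words)

def score(resume_words):
    """Weigh each word by how often it appeared in hired vs. rejected resumes."""
    return sum(hired[w] - rejected[w] for w in resume_words)

print(score(["software", "chess"]))             # 0
print(score(["software", "women's", "chess"]))  # -2, purely because of one word
```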

There are clearly some failures, but what about the successes of artificial intelligence that we hear about every once in a while? Sometimes the up-and-coming applications of AI described in news articles can be jarring, like when Target exposed a girl’s pregnancy before even her own father knew. However, that potential pregnancy was inferred from her purchase history, things like unscented lotion and supplements, which could just as easily be spotted on a stack of receipts. This isn’t a creepy algorithm or some mystical foresight; any OB-GYN could have looked at her list of purchases and come to the same conclusion. AI’s number-crunching view of human decision-making is still a long way from capturing our emotions, or the other aspects of our world that just can’t be translated into pure numbers.
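
In fact, the “prediction” is little more than counting. A back-of-the-envelope sketch (my own, not Target’s model) makes the point:

```python
# Counting purchases that correlate with pregnancy (a hypothetical sketch of
# my own, not Target's model) is arithmetic anyone could do by hand.

PREGNANCY_SIGNALS = {"unscented lotion", "calcium supplements", "zinc supplements"}

def signal_count(purchases):
    """Count how many purchased items appear on the signal list."""
    return sum(1 for item in purchases if item in PREGNANCY_SIGNALS)

receipt = ["unscented lotion", "calcium supplements", "shampoo"]
print(signal_count(receipt))  # 2 hits: enough to trigger a coupon mailer
```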

AI radically changing our day-to-day lives is still a long way off, and any program that is truly autonomous and omniscient is even further away. Films like Smart House and Chappie explore exaggerated hypotheticals of technology becoming self-aware and truly emulating humans, but we’re far from overprotective maternal houses and raising robots from baby minds. Today, we’re only in the era of crunching data and making predictions. We probably won’t have trustworthy automated resume scanners or Ultron anytime soon, and for that, I’m thankful.

Jonna Chen is a sophomore in the College of Engineering. She can be reached at [email protected]. jonna.write() runs every other Wednesday this semester.