What A.I. Is
Aided by the popularity of modern sci-fi, the specter of evil, autonomous robots dominating humanity is an oft-touted concern about artificial intelligence. At the very least — even if T-1000s don’t end up hunting us — intelligent machines are expected to displace millions of workers into low-wage menial work or idleness. But what is A.I., really, and does it warrant these concerns?
Broadly, A.I. describes “machines that can perform tasks that we typically think of as cerebral tasks, as human tasks,” said Prof. Christopher De Sa, computing and information science. With such a broad definition, A.I. has a tendency to excite, conjuring a future of automated hospitals and robot police. The reality can be more mundane: a simple program trained to distinguish between cat and dog photos may be considered A.I.
Prof. Fei Wang, who specializes in the development of data mining algorithms at Weill Cornell Medical College, described an important distinction: “There are two types of artificial intelligence. One, people call general A.I. — that’s the robot A.I. that can do everything humans can do. The other is narrow A.I., which describes a machine being good at a specific task: a robot doing surgery, or image analysis.”
Narrow A.I. is ubiquitous, Wang said. It cranks out Google search results, makes medical diagnoses and produces Amazon’s crafty ad recommendations. While Wang did not discount the possibility of general A.I., he said that “there is a long, long way to go” for an omnipotent machine.
What A.I. Can Do
In terms of the gap between the hype around A.I. and the reality, “the difference is in generalization, the ability to take some knowledge in one setting and apply it in another,” said Prof. Kavita Bala, computing and information science.
For example, an algorithm trained only on rottweilers and chihuahuas might not identify a golden retriever as a dog — a task humans can easily accomplish with minimal training, explained De Sa. For this reason, current A.I. cannot be considered truly intelligent and will not be for a while, according to Bala.
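The failure De Sa describes can be sketched in a few lines of code. The toy classifier below labels animals by their single closest training example; the breeds, feature choices (normalized weight, coat fluffiness) and numbers are all invented for illustration, not drawn from any real model.

```python
# Toy sketch of the generalization gap: a one-nearest-neighbor classifier
# trained only on two dog breeds. All feature values are hypothetical,
# chosen purely to illustrate the failure mode.

training = [
    ((1.00, 0.2), "dog"),  # rottweiler: heavy, short coat
    ((0.05, 0.3), "dog"),  # chihuahua: tiny, short coat
    ((0.09, 0.9), "cat"),  # persian: small, very fluffy
    ((0.16, 0.9), "cat"),  # maine coon: small, very fluffy
]

def classify(features):
    """Label a new animal by its single closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda example: sq_dist(example[0], features))[1]

# A golden retriever (mid-weight, fluffy coat) lies far from both training
# breeds, and its fluffy coat drags it toward the cat examples instead.
print(classify((0.60, 0.8)))  # -> cat
```

Because nothing in the training data covers that region of feature space, the model confidently mislabels an animal any human would recognize instantly — the gap between pattern matching and generalization.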
“Everybody’s allegedly doing A.I.,” Bala said. “But right now, a lot of things that are called A.I. are really data-driven machine learning techniques.”
These data-driven machine learning methods already inform many aspects of our lives: They chuck emails from Nigerian princes into our spam folders and suggest horror movies for us to watch on Netflix; moreover, they have worked their way into healthcare, finance, marketing and almost every other industry that produces value from data.
At Cornell, Bala researches computer vision and graphics to recreate material and shape recognition in machines; Wang is harnessing A.I. to analyze health and genomic data to improve patient outcomes and healthcare efficiency. At the same time, A.I. even has a place in the arts: Prof. Jenny Sabin, architecture, recently created an art installation at Microsoft Research that combines A.I. and art in a stunning interactive fiber pavilion.
Just because current A.I. has not yet lived up to our fantasies does not mean the technology is not stunning and far-reaching, Bala said.
“There’s an amazing ecosystem of tools that are going to be available to us as consumers, that will make our lives better, more productive; it’s going to be a blast,” Bala said. “But it’s not what we traditionally think of as A.I.”
What’s Next for A.I.
All three A.I. researchers interviewed by The Sun cited data privacy and bias as the biggest challenges facing the future of the technology. As robust data-driven A.I. depends on troves of personal data, private information is a more valuable commodity than ever before. Companies not only collect swaths of user information, but also have large-scale markets in which to sell it.
According to De Sa and Wang, while data collection is often harmless (and often useful, such as in the case of shopping recommendations), it can be dangerous if not treated with due diligence.
De Sa gave a hypothetical example of privacy risks in medical data: In a small-scale, anonymized medical data set, “if someone wanted to find the medical history of an individual, they might just have to search for some number of unique characteristics until only one person comes up in the query. From there, personal medical data may be available from this ‘anonymous’ data set.”
In criminal justice, machine learning techniques are only as meaningful as the data they feed on, and any biases that already exist in our data will be amplified in its application.
“An algorithm made to predict recidivism rates ended up making racially biased predictions, perhaps because of such existing bias in our criminal justice system,” De Sa said, pointing to research showing overprediction of recidivism for black defendants.
According to Bala, modern A.I. is capable of amazing feats, but is unlikely to produce the kinds of general A.I. seen in movies like The Terminator anytime soon. That kind of A.I. is “very, very far out, if at all it will ever be achieved,” Bala said.
Concerns about dangerous, all-capable general A.I. may be unfounded for now, but what about narrow A.I. that may replace repetitive or unintellectual work?
De Sa recognized A.I.’s capacity to replace jobs in sectors especially vulnerable to automation. “Autonomous vehicles will likely replace truck driving, one of the most common high-paying jobs in the U.S.,” De Sa said.
However, Wang is more optimistic about A.I. in his own field, medicine. “Doctors who are on hectic schedules will have more time to talk to their patients,” Wang said. “A.I. will likely alleviate many inefficiencies in the field and give patients and physicians more time and options.”
“I see [A.I.] as incredibly valuable as an assistant to make us more productive and help us achieve our goals; I don’t see it as taking over, where we put our faith in it rather than our interactions with people,” Bala said.