May 9, 2018

GUEST ROOM | Unlearning Machine Bias


It’s July 17, 2014, and as Eric Garner is killed by the police, his final words are, “I can’t breathe.”

It’s April 12, 2018, and a Starbucks manager calls the cops on two black men waiting patiently for a friend.

It’s August 4, 2025, and the Chicago Police Department, now relying heavily on facial recognition artificial intelligence software, wrongly identifies and arrests Barack Obama.

While that last example is hypothetical, we’ve already seen the damaging ramifications of biased A.I. technology. Courts in Broward County, Florida, currently use risk-assessment A.I. to predict whether a defendant charged with a petty crime is likely to commit more serious crimes in the future. This software wrongly flags black defendants as future criminals almost twice as often as it does white defendants. From facial recognition to algorithms that map out which neighborhoods are likely to be “dangerous,” bias in A.I. is overwhelming and under-regulated.

Just ask Joy Buolamwini, a graduate student at the MIT Media Lab who studies bias in facial recognition software. Her work quantifies how often these programs fail to identify gender correctly. And these programs turn out to be most accurate for one particular group: white males, with a 99 percent success rate. Lighter-skinned women, however, are misidentified up to 7 percent of the time. Darker-skinned men are misidentified up to 12 percent of the time, while darker-skinned women face error rates of up to 35 percent.
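To make the arithmetic behind numbers like these concrete, here is a minimal Python sketch of a per-group audit. The classifier outputs and group labels below are invented placeholders, not data from Buolamwini’s study; only the style of measurement — error rates broken out by demographic group — mirrors her approach.

```python
# Minimal sketch of a per-group error-rate audit. The sample below is made up
# for illustration; a real audit, like the Gender Shades study, would use a
# large labeled benchmark of faces.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy outputs from a hypothetical gender classifier.
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "female"),      # misclassified
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
]

for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.0%} error")
```

Run against a real benchmark of labeled faces, the same few lines would expose exactly the kind of gaps described above.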

Can’t we just reprogram the code? It’s not as easy as it sounds. A.I. is trained on large sets of input data. If the data going in is biased, however subtly, the A.I. will learn that bias.
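As a toy illustration of that point, the Python sketch below fits a one-number model to synthetic data in which a made-up “group A” supplies 90 percent of the training examples. Nothing here comes from any real facial recognition system; the point is only that a rule chosen to minimize overall training error ends up fitting the majority group and failing far more often on the minority one.

```python
# A dependency-free sketch of how a skewed training set yields a skewed model.
# Everything is synthetic: "group A" dominates the data, and its true decision
# boundary (0.5) differs slightly from "group B"'s (0.7).

def make_group(name, boundary, n):
    # n evenly spaced points in [0, 1); the true label depends on the group's boundary.
    return [(i / n, (i / n) > boundary, name) for i in range(n)]

def best_threshold(data):
    # Pick the single cutoff that minimizes total errors on the training data.
    candidates = sorted({x for x, _, _ in data})
    return min(candidates, key=lambda t: sum((x > t) != label for x, label, _ in data))

# 90 percent of the training examples come from group A, 10 percent from group B.
train = make_group("A", 0.5, 90) + make_group("B", 0.7, 10)
t = best_threshold(train)

for name, boundary in [("A", 0.5), ("B", 0.7)]:
    test = make_group(name, boundary, 200)
    errors = sum((x > t) != label for x, label, _ in test)
    print(f"group {name}: {errors / len(test):.1%} error at learned threshold {t:.2f}")
```

In this toy, retraining on a balanced set closes the gap entirely — which is exactly why who assembles the training data matters.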

But why are these data sets biased? Blame us, the humans. Whether we’re aware of it or not, prejudices and biases affect our society. Our views of the world are shaped by the people around us and what we see in the media. When a predominantly white male workforce trains facial recognition algorithms, it will naturally use the images it knows and unintentionally overlook the images it doesn’t. Our biases never escape us.

We could rely on the large tech corporations to fix these issues. And indeed, Facebook, Google and Microsoft have all made public statements…when their software failed. But these companies are under-regulated and have a strong incentive to keep their algorithms secret, which prevents meaningful dialogue. Treating A.I. as a black box and hoping for the best has already proven naive at best and grossly negligent at worst.

So, let’s get ahead of the problem. Let’s bolster educational efforts right here at Cornell. Let’s expand the recent initiative, A.I. Policy and Practice, with workshops, conferences and even course requirements. Let’s partner faculty in an array of fields — from sociology to computer engineering, from business to law — to understand this complex problem from various points of view.

We call on faculty members and students alike to make this issue a priority. Put in the work now to avoid life-altering consequences later: in any course, seminar or workshop that remotely touches on the development of A.I. technology, consider the biases that technology may carry and what their impacts might be.

It’s May 9, 2018, and it’s time to build a better generation of coders, a generation that values diversity. The future is now; let’s make sure the present is ready.

Alicia Cintora, Kevin McDermott and Andy Sanchez are graduate students at Cornell University. Guest Room runs periodically. Comments can be sent to [email protected].