The Faculty Senate discussed a recent University report on the future of generative artificial intelligence models and their impact on students and faculty.
In the 45-minute, one-on-one interview filmed at Cornell in Washington, Bill Nye and Levin touched on everything under the sun, from GOP climate change deniers such as Vivek Ramaswamy, to Carl Sagan’s role in planning the Voyager 2 mission, to ultimate frisbee, to the intriguing mechanics of clockwork.
The reviews are in. Professors really hate ChatGPT.
During syllabus week, I asked my friends what their professors were saying about ChatGPT, and the vibes were decidedly bad. “A word to the wise: DO NOT BE TEMPTED by Open AI platforms such as ChatGPT,” reads the syllabus of Archaeology and the Bible. “Do not rely on ChatGPT to complete this assignment,” said an Introduction to Global Health assignment description.
ChatGPT, released in November 2022, is a large language model that provides highly detailed responses to almost any prompt you can imagine (provided it’s appropriate). Once I got past my initial disbelief at ChatGPT’s capabilities, I quickly became fascinated. A couple of minutes of playing with it made me realize that ChatGPT is no Siri.
The SciFi Lab at Cornell has made a breakthrough with BodyTrak, a miniature wristband camera that could change how self-tracking devices such as smartwatches are built in the future.
Hollywood’s artificial intelligence has gotten everyone worried about a future run by robots. However, there are many aspects of our day-to-day interactions that AI is, as of yet, incapable of mirroring. Emotion, morality and personality are not easily reduced to the ones and zeros of code. We’re safe for now.
“A lot of these algorithms that have been developed, whether they are in healthcare or policing or visual recognition, are basically including the biases of the people who develop them,” Okolo said.
It’s July 17, 2014, and as Eric Garner is killed by the police, his final words are, “I can’t breathe.”
It’s April 12, 2018, and a barista calls the cops on two black men waiting patiently for a friend in a Starbucks. It’s August 4, 2025, and the Chicago Police Department, now relying heavily on facial recognition artificial intelligence software, wrongly identifies and arrests Barack Obama. While that last example is hypothetical, we’ve already seen the damaging ramifications of biased A.I. technology. Courts in Broward County, Florida, currently use risk assessment A.I. to predict whether a defendant charged with a petty crime is likely to commit more serious crimes in the future. This software wrongly labels black defendants as high-risk almost twice as often as it does white defendants.