Bard will be a streamlined version of the Google search engine, using video, audio and text to answer user questions. On March 21, Google opened a waitlist that grants access to users in the United States and the United Kingdom on a rolling basis.
However helpful the technology may be, some educational institutions have felt compelled to address the use of AI over concerns about academic integrity. Despite those concerns, some Cornell professors take an optimistic approach to artificial intelligence and believe it can be leveraged ethically to maximize its benefits.
Prof. Sahara Byrne, communication, the senior associate dean of the department, reflected on ways faculty members have already begun incorporating artificial intelligence into their curricula.
“They want to help students learn how to both ethically integrate AI services into their work and outperform these services,” Byrne said. “Some instructors have asked students to complete a project through the use of AI and then attempt to outperform it.”
Byrne has even observed artificial intelligence being implemented in faculty work, as some instructors have experimented with using the software to grade papers and then compared the results to their own grading.
Byrne said she does not see faculty adjustments to artificial intelligence as a potential issue and said she relies on students to use artificial intelligence responsibly.
“Our faculty will figure out how best to integrate AI within a short period of time,” Byrne said. “It will always be up to the student to get the most out of their educational experience by engaging with honesty and with a willingness to authentically learn and grow.”
Prof. Joe Halpern, computer science, shares Byrne’s positive mindset regarding the use of artificial intelligence for educational purposes.
“I don’t have an intrinsic problem with it,” Halpern said. “I can also imagine that [students] would learn by interacting with a large language model like ChatGPT or Bard.”
However, Halpern believes faculty will have to address the use of artificial intelligence and its continued advancement.
“This is an issue that will take us a few years to work out, and may in some cases involve major changes to how we do teaching and evaluation,” Halpern said. “LLMs are very much a moving target.”
Cornell students have also been weighing the implications of artificial intelligence, but they hold conflicting opinions about its effects on their education.
Carleigh Roche ’23, an information science major, seems to share the optimism that Byrne and Halpern exhibited. She believes that artificial intelligence can have benefits and drawbacks that are important for the University to acknowledge.
Roche compares Bard to past technological advancements that seemed revolutionary at the time but are now commonplace, hoping to ease the fear that Bard may provoke.
“I feel like the role of search engines instead of libraries, a place from 20 years ago, might have been seen as this vast technological jump that might have gone too far, but in the end, it just made information more accessible to people,” Roche said. “So I think overall it is a beneficial thing.”
For academic integrity purposes, Roche feels the University should consider using online tools to deter students from misusing Bard.
“I’m not sure how those services work, but being able to detect that in students’ work in the future seems like something that the University might want to adapt,” Roche said.
On the other hand, Varsha Gande ’26, a government major, finds ChatGPT helpful for summarizing assigned readings but is fearful of rapid advances in artificial intelligence.
“ChatGPT [for some activities] is ethical, but AI for other things might not be,” Gande said. “Next thing you know our president is going to be an AI.”
Gande also expressed concern that Bard could be a gateway to harmful artificial intelligence advances.
“AI is scary — the fact that people won’t even have to learn anything and just get their answers from AI is terrible,” Gande said. “It makes me think that people like doctors in the future will be less credible.”
While the future of artificial intelligence usage at the University remains unknown, the technology is likely to continue sparking debate among students and faculty.
“Students should always be pushing the boundaries of knowledge,” Byrne said. “Ethical and creative use of AI may be essential for their success in life after college.”
Sofia Principe is a Sun contributor and can be reached at [email protected].