The proper use of artificial intelligence in an academic setting continues to be a contentious topic among students and faculty. The University’s official stance on A.I. was outlined in an email sent to the student body on Wednesday, Sept. 27, which provided a series of official guidelines on the use of generative A.I. for purposes ranging from research to pedagogy.
These guidelines come after the University’s administration and faculty voiced concerns over the use of A.I. following the controversy generated by the November 2022 release of the large language model ChatGPT. The email claims to provide the opportunity to explore the advancements that artificial intelligence has to offer, while also limiting its influence on curriculum.
“Cornell’s preliminary guidelines seek to balance the exciting new possibilities offered by these tools with awareness of their limitations and the need for rigorous attention to accuracy, intellectual property, security, privacy and ethical issues,” Curtis L. Cole, Cornell’s vice president and chief global information officer, said in the email.
The email said Cornell had developed the new guidelines to comply with existing University policies, and that they range from guidance on accountability to confidentiality and privacy, education and pedagogy, and research.
New policies for accountability when using A.I. tools hold users responsible for erroneous A.I.-generated information that is published without verification, with the email asserting that A.I.-generated content can be misleading, outdated or false. Users are encouraged to verify information for errors and biases, in addition to checking for copyright infringement.
The guidelines also prohibit entering University information that may be confidential, proprietary, subject to federal or state regulations or otherwise considered sensitive or restricted into public generative A.I. tools. These guidelines are cited as consistent with the University privacy statement.
“Any information you provide to public generative AI tools is considered public and may be stored and used by anyone else,” the email read.
Education guidelines are described as flexible, leaving it to an instructor’s discretion to prohibit, allow with attribution or encourage generative A.I. use. The guidelines also offer additional resources to students, drawn from the “CU Committee Report: Generative Artificial Intelligence for Education and Pedagogy” and from the Center for Teaching Innovation.
The committee report, released in July 2023, provides recommendations for University policy regarding generative A.I. use in the classroom, including assistance for faculty in adapting assignments to the new technology. The report also recommends that instructors guide students to learn the value and limitations of generative A.I., since they are likely to encounter it in their future careers.
“Consequently, instructors now have the duty to instruct and guide students on ethical and productive uses of [generative A.I.] tools that will become increasingly common in their post-Cornell careers,” the report said.
The emailed guidelines also announced plans from the University to offer or recommend a set of generative A.I. tools by the end of 2023. The email claimed that the administration is evaluating tools that meet the needs of students, faculty, staff and researchers, while providing sufficient risk, security and privacy protections.
Additionally, research and administrative uses for generative A.I. must comply with guidelines from forthcoming reports from the University committees for research and administration. The reports are set to publish by the end of 2023, according to the email.