While OpenAI charges $200 per month for a ChatGPT Pro subscription, DeepSeek’s new generative AI model performs similar services for free. This may further increase AI usage in academic settings, where concerns about academic integrity at Cornell have already grown.
DeepSeek, a start-up owned by High-Flyer, a Chinese stock trading firm, launched a new generative AI model called DeepSeek-V3 on Jan. 10. The system performs functions similar to Google and ChatGPT but uses a fraction of the computer chips, demonstrating heightened technological capability. It also costs a fraction of the price: DeepSeek reportedly spent only $6 million on raw computing power, around one-tenth of the cost of Meta’s latest AI technology.
Cornell professors, such as Prof. Michael Clarkson ’04 Ph.D. ’10, computer science, have noted these differences in cost and design. Clarkson is a member of the dean of faculty’s working group on academic integrity and the vice provost for academic innovation’s generative AI in education working group.
“The big difference [between ChatGPT and DeepSeek] is really that DeepSeek was implemented using techniques that greatly reduced the cost of training and fine-tuning the knowledge built into the model,” Clarkson wrote in an email to The Sun.
These cost reductions stem from a departure from AI companies’ typical processes. Chatbots like ChatGPT usually rely on a single large neural network that learns patterns from internet data, with its computations spread across many graphics processing unit chips. Because the network’s connections span different chips, large amounts of data must constantly travel between them.
DeepSeek instead split the system into multiple neural networks: specialized “expert” systems and “generalist” systems that make connections between experts only when needed. Because far less data must be transmitted between chips, this design reduces chip usage and lowers costs.
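The split between “generalist” routing and specialized “experts” can be illustrated with a toy Python sketch. This is not DeepSeek’s actual code: the expert functions and the keyword-based gate are invented for illustration, and real mixture-of-experts models learn the routing over neural sub-networks rather than matching keywords.

```python
# Toy illustration of mixture-of-experts routing (hypothetical, not DeepSeek's code).
# A cheap gating step picks one specialist, so only that expert does the heavy work.

def expert_math(prompt):
    return f"[math expert] handling: {prompt}"

def expert_code(prompt):
    return f"[code expert] handling: {prompt}"

def expert_general(prompt):
    return f"[generalist] handling: {prompt}"

# Hypothetical keyword gate; a real model learns which expert to activate.
EXPERTS = {
    "math": expert_math,
    "code": expert_code,
}

def route(prompt):
    for keyword, expert in EXPERTS.items():
        if keyword in prompt.lower():
            return expert(prompt)   # only the matching expert runs
    return expert_general(prompt)   # fall back to the generalist

print(route("Solve this math problem"))
print(route("Write some code"))
print(route("Tell me a story"))
```

The design choice the sketch mirrors is that activating one small specialist per query is cheaper than running one giant network for everything, which is the source of the chip and cost savings described above.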
Another difference between DeepSeek and other generative AI models is that DeepSeek operates under direct Chinese government oversight.
Some Cornellians find this concerning, including Clarkson, who worried about the implications for Chinese international students.
“DeepSeek could find itself under pressure to gather information about the political leanings of Chinese students worldwide, or as a tool to spy on people working at major companies,” Clarkson wrote to The Sun.
He suggests avoiding DeepSeek for personal computing for this reason, although students are already noticing such implications permeating the academic landscape.
Ziga Kovacic ’26, co-president of the Artificial Intelligence Undergraduate Club at Cornell, foresees a future where DeepSeek is highly prevalent in student academic settings.
“Both ChatGPT and DeepSeek have their own strengths, but I think the fact that DeepSeek offers all of its features for free is very attractive to a lot of students,” Kovacic wrote in an email to The Sun.
These updates and concerns parallel the University's active responses to the changing landscape following the introduction of ChatGPT, released in November 2022.
Cornell has expanded its University policies regarding confidentiality and privacy, research and accountability. The updated policies hold users accountable for erroneous information produced with generative AI and prohibit entering University information that is confidential, proprietary, sensitive or subject to federal or state regulations into public generative AI tools.
Additionally, in Spring 2023, the Cornell administration assembled a committee to develop guidelines and recommendations for generative AI use in education at Cornell.
The committee produced a Generative Artificial Intelligence for Education and Pedagogy report that acknowledges risks such as students using GAI tools to circumvent learning, along with biases, inaccuracies and ethical problems.
Following the introduction of DeepSeek, Prof. Ken Birman, computer science, envisions the University applying its existing University-wide policies to DeepSeek.
“I don't see us restricting DeepSeek in any special way compared to other generative AIs, but I think we do need to make sure our academic integrity policies are crystal clear, and if a course specifies that people shouldn't get AI help, violations should be prosecuted under our campus academic integrity code,” Birman wrote in an email to The Sun.
Some Cornell professors allow any use of GAI, some allow it on a case-by-case basis and some completely disallow it for the class.
“Even within a single course, the allowed usage may vary across assignments and assessments. It will take time for norms to become stable,” Clarkson wrote to The Sun.
In applying these generative AI models in his own teaching, Clarkson allows his students to use GAI in Computer Science 3110: Data Structures and Functional Programming, a popular class he teaches, under the condition that students cite their use of it.
“I believe this better prepares students for real-world programming tasks, in which GAI is becoming a helpful productivity booster,” Clarkson wrote to The Sun.
Amidst GAI concerns, “we have to remind ourselves that people said the very same things about the Internet back in 2000, and then about GitHub, Wikipedia, Reddit and Quora, and beyond that there was the panic about Chegg and CourseHero,” Birman wrote to The Sun.