Cornell Professors Share Insight into AI in Veterinary Medicine

April 24, 2024

Artificial intelligence technologies have diverse applications in veterinary medicine. (Clark Hodgin / New York Times)


Since the introduction of ChatGPT in November 2022, the term “artificial intelligence” has been thrown around with increasing frequency in everyday speech, particularly on university campuses. AI is a technology that allows machines to simulate human intelligence. When many people think of AI, they imagine movies like Ex Machina or The Terminator. However, beyond providing existential storylines to sci-fi movies, AI also has the potential to drive innovation in veterinary medicine.

According to Prof. Parminder Basran, clinical sciences, a radiation oncology physicist and director of the Veterinary AI in Diagnostic Imaging and Radiotherapy Lab, AI is already being used in veterinary medicine for specific purposes.

“When it comes to artificial intelligence in veterinary medicine, we’re talking about mostly narrow-use AI, or what’s referred to as ‘weak AI’ where AI solutions are being developed to tackle very specific problems,” Basran said.

An example of practical narrow-use AI is automated diagnosis and disease classification. A diagnostic x-ray image is input to the system, and a disease classification is output. While this form of AI can be useful in clinical settings, it is narrow in the sense that it deals with only one specialty and has only one output. 
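For a concrete picture of what such a narrow-use system looks like in code, the sketch below shows the basic pattern: one radiograph goes in, one disease label comes out. This is a hypothetical illustration in Python using PyTorch; the model, label set and image path are placeholders, not an actual Cornell tool.

```python
# Minimal sketch of a narrow-use diagnostic classifier: one x-ray in,
# one disease label out. All names below are illustrative placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical label set for a thoracic-radiograph classifier.
LABELS = ["normal", "pneumonia", "cardiomegaly", "pleural_effusion"]

# A standard convolutional network, assumed to have been fine-tuned
# elsewhere on labeled veterinary x-rays (trained weights not included).
model = models.resnet18(num_classes=len(LABELS))
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # x-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify(image_path: str) -> str:
    """Return the single most likely disease label for one radiograph."""
    image = Image.open(image_path)
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
    return LABELS[int(logits.argmax(dim=1))]

# Example call with a placeholder file name.
print(classify("canine_thorax.png"))
```

The narrowness is visible in the code itself: the system accepts exactly one kind of input and can only ever answer with one of a fixed set of labels.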

Conversely, working at a larger scale with multiple kinds of inputs falls under the scope of general AI.

“There’s this scope of general AI or strong AI,” Basran said. “You may be thinking about different kinds of inputs that are associated with a diagnostic or decision support, things that might be merging [lots of] data [with] image data to develop diagnoses or care plans.”

Because diagnoses may require considering a variety of data, the applications of AI in medicine are currently limited. While AI chatbots handle only language data, AI used for medicine needs to take in text, medical imaging and genomic data. Additionally, experts in veterinary medicine tend to have domain-specific knowledge from years of experience, information that is not always available online. Consequently, it is unclear whether these veterinary AI models can work reliably and at the same level as human professionals.

Jennifer Sun, an incoming assistant professor in the Computer Science Department, believes that benchmarks are important to avoid issues of missing information or bias.

“To ensure these models benefit veterinary medicine, I think we need to design benchmarks together with people in the veterinary community that have domain knowledge to make sure that whatever performance is on the benchmark actually reflects the real-world performance in those tasks, and also the benchmark to be diverse enough to cover all cases, not just cats and dogs,” Sun said.

Over the past few years, AI in healthcare has advanced at a rapid pace. For example, machine learning, a subset of AI in which systems learn patterns from data rather than following explicitly programmed rules, has become a fundamental tool for biological exploration, especially for drug discovery. This technology can be applied to develop better drugs in both human and veterinary medicine.

Additionally, the emerging field of personalized medicine may make its way to veterinary medicine soon. In personalized medicine, multiple datasets are integrated through multimodal machine learning to characterize an individual patient's disease. Targeted treatment strategies can then be built around that patient's genetic profile.
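As a rough illustration of what integrating different datasets can mean in practice, the sketch below fuses imaging, genomic and clinical-text features into a single prediction by encoding each modality separately and concatenating the results. It is hypothetical Python/PyTorch; the feature sizes and class count are made-up placeholders.

```python
# Toy late-fusion model: each data modality gets its own encoder, and the
# resulting feature vectors are concatenated before a shared classifier.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, image_dim=512, genomic_dim=1000, text_dim=768, n_classes=5):
        super().__init__()
        self.image_enc = nn.Linear(image_dim, 128)
        self.genomic_enc = nn.Linear(genomic_dim, 128)
        self.text_enc = nn.Linear(text_dim, 128)
        self.classifier = nn.Linear(128 * 3, n_classes)

    def forward(self, image_feat, genomic_feat, text_feat):
        fused = torch.cat([
            torch.relu(self.image_enc(image_feat)),
            torch.relu(self.genomic_enc(genomic_feat)),
            torch.relu(self.text_enc(text_feat)),
        ], dim=-1)
        return self.classifier(fused)  # per-class scores for one patient

# Usage with random stand-in features; real inputs would come from imaging,
# sequencing and clinical-note pipelines.
model = MultimodalFusion()
scores = model(torch.randn(1, 512), torch.randn(1, 1000), torch.randn(1, 768))
print(scores.shape)  # torch.Size([1, 5])
```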

“There’s a wonderful opportunity here to increase the diagnostic sensitivity and specificity [in veterinary medicine] with the goal of also improving patient outcomes. That whole field [of personalized medicine] will be emerging,” Basran said.

The rapid advancement of AI has also raised questions about whether human roles in healthcare will ever be replaced. Sun believes that while AI may shift healthcare by complementing providers, it will mainly improve efficiency and support decision-making, streamlining the healthcare system without replacing humans.

“Humans have agency, the desire to improve the wellbeing of other humans and animals,” Sun said. “Ultimately, I see AI as a tool for humans to accelerate how we do diagnosis and treatment.”

Moreover, trust must be built before AI systems can take a greater role in healthcare.

“You’re not going to blindly use Tesla autopilot without ever having to test it,” Basran said. “Every tool has a range of usefulness. Until one really understands the range of usefulness of any tool, it’s hard to have trust in that tool. It’s going to take time for us to be in a position to understand what the range of usefulness is for a lot of these artificial intelligence tools.”

While the increased efficiency and clearer decision-making that AI can provide are exciting, AI can still make errors, introducing a host of ethical issues.

If a veterinarian uses an AI system to tell an owner that their pet has two months to live, the owner has a range of options that include euthanasia, the practice of ending a patient's life to reduce suffering. Who would be liable if the AI system made an error? Unlike in human medicine, the software and AI technologies used in veterinary medicine are not required to go through a regulatory process.

“The idea of using AI in veterinary medicine is more of a wild west in the sense that there’s no clear regulatory framework,” Basran said. “We as a community need to come together and establish what we think are good, safe technologies.”

To provide regulations for responsible AI use, Sun believes that lawmakers should get involved.

“Ultimately, we’re scientists, and I’m sure lawmakers or other policymakers have had previous experience and expertise in this area. It’d be good to have their input,” Sun said.

Shaan Mehta can be reached at [email protected].