First in A Series on Goodnewsforpets.com: AI in Veterinary Medicine
Editor’s Note: Jonathan Lustgarten, MS, PhD, VMD, CSPO, is the Director of AI and Machine Learning for Mars Veterinary Health and a past president of the Association for Veterinary Informatics. He lectures frequently at veterinary meetings and is a leading expert in the use of AI in veterinary medicine. Goodnewsforpets.com Editor and Publisher Lea-Ann Germinder sat down with him at VMX 2025 to discuss AI’s impact, challenges, and future in veterinary medicine. With a focus on responsible AI implementation, Dr. Lustgarten provides insights into where the technology is headed and what veterinarians should consider as AI tools continue to evolve to improve pet healthcare and veterinary medicine.
Dr. Lustgarten, AI has been a major topic in veterinary medicine. What do you think is the biggest challenge in properly implementing AI?
The biggest challenge isn’t necessarily just the technology—it’s the operational aspect of integrating AI correctly. Veterinarians are already juggling a lot, and while AI can make tasks faster in theory, it often requires an upfront investment of time and effort. There’s also a misconception that AI always improves efficiency, but when generative AI misses the mark, it can take longer to correct errors than if a veterinarian had done it manually.
For example, voice-to-text applications are gaining popularity, but they still pose accuracy issues. The real test of efficiency isn’t just whether AI saves time, but whether it consistently produces accurate, reliable results. AI also needs to be transparent, so veterinarians can understand how it produced a given output.
Is generative AI the only AI used in the veterinary clinic or are there other types of AI used?
Generative AI tools such as ChatGPT are not the only tools in the veterinary clinic, but they often overshadow other forms of AI. There’s a lot of valuable non-generative AI technology in the veterinary clinic, used in diagnostics, for example.
Non-generative AI, that is, AI used to predict discrete states or outcomes, has the distinct advantage that it is easy to identify (and correct) when it is wrong. If it predicts a pet will develop renal disease and the pet does not, you know the AI was incorrect. Generative AI is more nuanced: an output can be close yet still fail to capture the full story or complexity of what was said or done. Whether that output is then “good” or “correct” is much harder to evaluate and improve on, except by the tincture of time.
Where do you see AI making the biggest impact right now in veterinary medicine?
Right now, we’re seeing the most progress in administrative AI—things like appointment scheduling, answering basic pet health questions, and prescription management. These areas benefit from automation because they’re routine and structured. However, when it comes to direct medical decision-making, we’re further away from AI playing a dominant role. One of the biggest barriers is that veterinary medicine lacks both the volume of structured data that human medicine has and the funding to structure the data that isn’t, making AI training more challenging.
AI hallucinations, or incorrect outputs, have been a big concern. How do they affect veterinary applications?
This is a critical issue. AI systems sometimes fabricate information with great confidence, which can be dangerous in a medical setting. We’ve seen cases where AI systems in human medicine made up patient histories, and that kind of risk extends to veterinary AI. If veterinarians don’t double-check AI-generated content, errors could easily make their way into medical records, potentially leading to incorrect treatments. That’s why education and responsible AI practices are so important.
What about data privacy? How secure are these AI systems for veterinary practices?
Security is a big concern, especially when it comes to how AI companies handle data. Many AI providers claim to be HIPAA-compliant, but that mainly refers to access control, not necessarily to how they use the data internally. Some agreements allow AI companies to use uploaded data to improve their models. Veterinarians need to be aware of what they’re agreeing to when they use these systems and avoid entering personally identifiable information (PII) into free AI platforms.
What advice would you give veterinarians looking to evaluate AI tools for their practice?
First, always test AI tools yourself before committing to them. Don’t just rely on a demo video—request a trial period and use the AI on complex cases, not just simple ones. If an AI company cannot provide examples of mistakes their system has made and how they corrected them, that’s a red flag. Every AI system will make errors, and transparency about those errors is key.
AI regulation in veterinary medicine is still evolving. How does this compare to human medicine?
Human medicine has far more established frameworks for AI oversight, with professionals trained in biomedical informatics and regulatory pathways for AI-powered medical devices. Veterinary medicine doesn’t yet have equivalent structures, so we’re seeing a lot of AI tools being introduced without formal vetting. That can lead to problems down the line, as regulations will eventually catch up. My concern is that if AI adoption happens too quickly without proper oversight, we could see significant failures that impact patient care, and it can detract from or stop future development, which is akin to what happened to machine learning in human medicine in the early ‘00s.
Given the rapid changes in AI, where do you think we’ll be in veterinary medicine in five to ten years?
I believe we’ll see a shift toward AI as an assistive technology rather than a replacement for human expertise. Think of AI in cars—people trust lane-keeping assist, but they don’t want to give up control completely. The same will be true in veterinary medicine. AI will be an invaluable assistant, helping veterinarians recall medical histories, organize data, and improve workflow, but it won’t replace human decision-making.
We must ask your view on the debate over when we will see artificial general intelligence (AGI).
In terms of artificial general intelligence (AGI), I’m skeptical that we’ll see anything close to human-like intelligence soon. AI excels at pattern recognition, but true intelligence requires reasoning, emotion, and creativity—things that are incredibly difficult to program. What we will see is highly specialized AI that’s exceptionally good at specific tasks but not an all-knowing system that can replace veterinarians. I do think we will have to differentiate between AI intelligence and human intelligence. I believe those two will divide as the technology progresses.
Finally, what should veterinarians do now to prepare for AI’s role in their profession?
Stay informed and engage with AI thoughtfully. Veterinarians should educate themselves on AI’s capabilities and limitations, advocate for responsible AI policies, and demand transparency from AI providers. As AI becomes more prevalent, it will be crucial to balance innovation with ethical considerations to ensure these technologies truly benefit both veterinarians and their patients.
Thank you, Dr. Lustgarten, for your expert insights on AI in veterinary medicine. Stay tuned for more interviews with leading experts on AI and veterinary medicine to improve pet healthcare!
Disclosure: Lea-Ann Germinder conducted this interview in person with Dr. Lustgarten. AI tools were used to record, transcribe, and edit, with Lea-Ann Germinder performing the final oversight and Dr. Lustgarten reviewing for accuracy.