Why We Need Stringent Evaluation of AI Systems in Veterinary Medicine
The Transparency Crisis That's Putting Practices and Patients at Risk
Here's an uncomfortable truth about veterinary AI: we're flying blind.
The veterinary AI market operates in a transparency vacuum. Companies routinely make bold performance claims—"95% accuracy!" "Clinically validated!"—while providing zero published evidence to support these assertions.
I've spent 29 years in veterinary diagnostics, 26 of them at IDEXX, and I can tell you this: the absence of published validation data in veterinary AI isn't an oversight—it's become the industry standard. And it's putting both practices and patients at risk.
This isn't about being anti-innovation. I am very much pro-innovation and pride myself on being on the leading edge. Many veterinary AI tools are genuinely useful and well-designed. But without transparent evaluation data, we have no way to distinguish legitimate breakthroughs from sophisticated marketing campaigns. Even worse, we're deploying AI systems in clinical settings without understanding their true capabilities, limitations, or appropriate use cases.
The solution isn't to avoid AI—it's to demand the same evidence-based standards that guide every other aspect of veterinary medicine.
The Veterinary AI Transparency Crisis
Let me illustrate the problem with a thought experiment. Imagine if pharmaceutical companies operated the same way as current AI vendors:
"Our new antibiotic is 95% effective! Veterinarians love it! FDA approval pending, but don't worry—we've done internal testing. Sorry, we can't share the study details due to proprietary concerns. Trust us, it works great!"
You'd never accept this for a new drug. Yet this is exactly how most veterinary AI tools enter the market.
The Evidence Desert
A recent comprehensive analysis of the veterinary AI validation landscape reveals a stark transparency crisis. While some companies have conducted validation studies, the vast majority provide no public information about their evaluation methods—whether in peer-reviewed publications, white papers, conference presentations, or any other format.
The Rare Exceptions: A handful of companies do provide validation transparency. SignalPET has published methodology and performance data for processing 50,000 radiographs weekly across 2,300 clinics, achieving 94.4% specificity versus 88.3% for human radiologists. Zoetis openly shares validation approaches for their Vetscan Imagyst platform, reporting performance comparisons with expert pathologists. ImpriMed provides clinical outcome data showing 3x longer survival and 4x higher drug response rates for dogs with relapsed B-cell lymphoma. Mars Petcare's RenalTech shares validation methodology for predicting chronic kidney disease up to two years early.
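When a vendor quotes a figure like "94.4% specificity," it helps to be precise about what that number actually measures. Here is a minimal sketch of the two core diagnostic-accuracy metrics—the counts below are invented for illustration and are not taken from any of the studies mentioned above:

```python
def sensitivity(tp, fn):
    """Fraction of truly abnormal cases the tool flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of truly normal cases the tool correctly clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical review of 1,000 radiographs against a reference standard:
tp, fn = 180, 20   # abnormal studies: correctly flagged vs. missed
tn, fp = 760, 40   # normal studies: correctly cleared vs. falsely flagged

print(f"Sensitivity: {sensitivity(tp, fn):.1%}")  # 90.0%
print(f"Specificity: {specificity(tn, fp):.1%}")  # 95.0%
```

Note that the two numbers move independently: a tool can post excellent specificity while quietly missing a meaningful share of true disease, which is exactly why a single headline percentage is never enough.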
The Overwhelming Majority: Most veterinary AI tools provide zero public validation information:
No description of how they evaluated their systems
No performance metrics beyond marketing claims
No information about study design, datasets, or methodology
No discussion of limitations or failure modes
No post-market performance monitoring data
When Marketing Claims Replace Evidence
Without published validation data, veterinary AI marketing has become a creative writing exercise. Companies routinely make assertions without evidence:
"95% accuracy" (compared to what? measured how? on which cases?)
"Clinically validated" (by whom? using what criteria? where is the data?)
"Trusted by veterinarians" (how many? for how long? with what outcomes?)
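The "compared to what? on which cases?" questions are not pedantry. A toy example with made-up numbers shows why a bare accuracy figure can be meaningless when the case mix is imbalanced:

```python
# Suppose 950 of 1,000 cases are truly normal and 50 are abnormal.
# A "tool" that labels everything normal never finds a single lesion...
predictions = ["normal"] * 1000
truth = ["normal"] * 950 + ["abnormal"] * 50

correct = sum(p == t for p, t in zip(predictions, truth))
accuracy = correct / len(truth)        # 0.95 -> "95% accuracy!"

abnormal_found = sum(
    p == "abnormal" and t == "abnormal" for p, t in zip(predictions, truth)
)
sensitivity = abnormal_found / 50      # 0.0 -> it catches nothing

print(f"Accuracy: {accuracy:.0%}, Sensitivity: {sensitivity:.0%}")
```

...yet it can still truthfully claim "95% accuracy." Without knowing the study population and the metrics behind a claim, practitioners cannot tell this degenerate case apart from a genuinely useful tool.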
The American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging's 2024 position statement declared "no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety." This professional assessment highlights that even when companies have conducted internal validation, they're not sharing enough information for practitioners to assess the quality or applicability of that validation.
Unlike other veterinary technologies where companies routinely share technical specifications and performance data, AI tools are marketed primarily on promise rather than evidence. Practitioners are expected to make purchasing and implementation decisions based on demonstrations, testimonials, and marketing claims rather than transparent validation data.
Why Evidence-Based AI Evaluation Is Critical
The current approach isn't just bad for veterinarians—it's ultimately bad for the AI companies themselves and dangerous for patients.
The Professional Standard We're Abandoning
Evidence-based medicine is the cornerstone of veterinary practice. When we accept AI tools without validation data, we're abandoning the same standards we apply to every other clinical decision.
We demand evidence for:
New pharmaceuticals before prescribing
Diagnostic tests before interpreting results
Surgical techniques before implementation
Nutritional recommendations before counseling clients
We should demand evidence for AI tools before:
Incorporating them into diagnostic workflows
Basing clinical decisions on their outputs
Billing clients for AI-assisted services
Training staff to rely on their recommendations
The Patient Safety Imperative
Unvalidated AI tools pose real risks to patient care:
Diagnostic Errors: AI systems might miss subtle findings or generate false positives that lead to inappropriate treatment.
False Confidence: Practitioners might over-rely on AI recommendations without appropriate clinical skepticism.
Workflow Disruption: Poorly performing AI can slow down rather than accelerate clinical processes.
Resource Misallocation: Investing in ineffective AI tools diverts resources from proven diagnostic approaches.
The Economic Reality
AI tools represent significant practice investments—often requiring substantial upfront licensing fees, ongoing subscription costs, staff training time, workflow modification, and technical support. Without validation data, practices are making these investments blind. This isn't just poor financial stewardship—it's incompatible with responsible practice management.
The Expanding Transparency Gap
The validation crisis extends beyond traditional diagnostic AI to rapidly proliferating practice management tools powered by large language models. Veterinary practices are increasingly adopting AI systems for documentation, client communication, and administrative tasks—all without published evaluation data.
These tools may not directly impact patient diagnosis, but they affect medical records, client communications, and practice workflows. Without validation studies, practices don't know the accuracy rates, error patterns, or appropriate use cases for these systems. We're implementing tools that handle sensitive medical information and client interactions based entirely on vendor promises.
Whether we're discussing diagnostic AI or practice management tools, the fundamental principle remains the same: veterinary practices deserve evidence-based information about the tools they're implementing.
Why Transparency Benefits Everyone
Companies that publish rigorous validation studies gain significant competitive advantages:
Market Differentiation: In a sea of unsubstantiated claims, published evidence makes products stand out immediately.
Professional Credibility: Evidence-based practitioners adopt validated tools more quickly than unproven alternatives.
Premium Pricing: Practitioners will pay more for tools with demonstrated effectiveness versus those with only marketing claims.
Industry Standards: Transparency leaders set the standards that competitors must eventually match.
The responsibility isn't solely on companies—customers must actively demand validation data. When practitioners consistently ask "Where can I read the validation study?" vendors will respond with evidence rather than marketing materials.
Addressing the Challenges of Veterinary AI Validation
Acknowledging the need for transparency doesn't ignore the real challenges of veterinary AI validation. These tools face unique obstacles that human medical AI often doesn't encounter:
Multi-Species Performance: AI tools must work across dogs, cats, and exotic species with different anatomy and disease patterns.
Data Scarcity: Smaller patient populations and fragmented practice data make large-scale studies challenging.
Economic Constraints: The veterinary market may not support the same validation investment levels as human medicine.
Ground Truth Complexity: Veterinary diagnosis often lacks the definitive outcomes data that human medical AI can access.
These challenges are real but not insurmountable. They explain why veterinary AI validation is difficult—they don't excuse the absence of any validation data. Companies serious about veterinary medicine find ways to conduct rigorous studies within these constraints, as demonstrated by the transparent leaders in the field.
What's Coming Next
Understanding that we need validation data is only the first step. In upcoming posts, I'll dive deep into how AI tools should be evaluated—the methodologies, metrics, and study designs that separate rigorous validation from sophisticated marketing. Whether companies are reporting sensitivity and specificity for diagnostic tools or accuracy rates for documentation systems, you'll know what questions to ask and what standards to expect. The goal is to arm you with the knowledge to evaluate AI validation studies just as critically as you would evaluate any other clinical research.
Key Insights for Veterinary Practice
🚫 Reject unsubstantiated marketing claims: No matter how impressive the promises, don't deploy AI tools without published validation evidence from independent sources.
📊 Demand transparency from vendors: Before purchasing AI tools, require detailed validation data including methodology, performance metrics, limitations, and failure modes.
🔍 Look for independent evidence: Studies performed by independent groups provide more reliable information than company white papers or marketing materials.
⚖️ Understand validation limitations: Even published studies may have limitations—assess whether study populations and settings match your practice reality.
🎯 Start with pilot programs: When validation data is limited, implement AI tools on a trial basis with careful monitoring of real-world performance.
🔄 Monitor ongoing performance: Track AI tool performance in your practice to detect degradation or inappropriate use patterns.
👥 Share experiences professionally: Contribute to the professional knowledge base by sharing both positive and negative experiences with AI tools.
📚 Support industry standards: Advocate for professional organizations to establish validation requirements and accreditation programs.
💼 Calculate true ROI: Factor validation quality into purchasing decisions—well-validated tools are more likely to deliver promised benefits.
🔬 Ask the critical question: When vendors demo their AI tools, ask: "Where can I read the validation study?" Their response will tell you everything you need to know.
Conclusion
The veterinary AI transparency crisis isn't sustainable. As these tools become integral to practice workflows, the absence of validation data becomes increasingly dangerous for patients, practitioners, and the profession.
But this crisis also represents an opportunity. Companies that embrace transparency and rigorous validation will gain competitive advantages in an increasingly crowded market. Practitioners who demand evidence will make better purchasing decisions and achieve better patient outcomes.
The path forward requires collaboration between companies, practitioners, academic institutions, and professional organizations. We need validation standards appropriate for veterinary medicine's unique challenges and a professional culture that demands evidence-based AI adoption.
This isn't about creating barriers to innovation—it's about ensuring that innovation actually improves veterinary care. The same evidence-based principles that have advanced veterinary medicine for decades must guide our adoption of AI technologies.
We have a choice: continue accepting unvalidated AI tools and hope for the best, or demand the transparency and evidence that will ensure AI truly serves veterinary medicine's mission. The decision is ours, but our patients and clients deserve better than hope and marketing promises.
They deserve evidence.
What validation questions have you asked AI vendors? What responses did you get? Reply to this post and share your experiences—building a database of vendor transparency (or lack thereof) helps the entire profession make better decisions.

Really interesting and I totally agree.
You should submit for a talk at AVMA next year and if you want we could work on putting a panel together.
Next... I use an Eko to listen (but honestly... mostly because the sound amplifier is awesome), CoVet for scribing, and an Imagyst, and I have a REALLY dumb question...
Are people not using their eyeballs and verifying themselves?
To wit...
My Eko picks up a murmur in an 8yo dog
I send them home on PVPs and have them come back and I listen in the back where it is quiet
If I hear it again and my Eko picks it up I let them know that if we have anesthesia in the future then we need to do a cardio work up (3v chest rads, BP, echo and BW if we don't already have it).
No clinical signs? No anesthesia coming up? Note it in the record and register q6m.
I cannot imagine a scenario in my doctoring where the AI makes the final call.