The Three Layers of Veterinary Software Interoperability: Why Your AI Tools Can't Talk to Each Other
Understanding the Connection, Structure, and Semantic Barriers That Fragment Veterinary Practice Technology
After 29 years in the veterinary industry, I've witnessed countless attempts to make veterinary software systems work together. The results are mixed: we've achieved impressive integration in some areas while completely failing in others. But as AI tools proliferate across veterinary medicine—from diagnostic imaging systems to practice management software to laboratory analyzers—understanding what works, what doesn't, and why has never been more critical.
Here's why: Every AI system in your practice is only as good as the data it can access. When your diagnostic imaging AI can't share findings with your practice management system, when your laboratory results require manual re-entry into patient records, when client communications exist in isolated silos, you're not just losing efficiency—you're limiting the potential of every intelligent system you adopt.
The challenge isn't technological sophistication. We've proven we can build complex AI systems that analyze radiographs, interpret bloodwork, and even generate clinical notes. We've also proven we can connect different veterinary software systems—your practice management system receives lab results automatically, imaging systems integrate with PACS, and various third-party tools can access patient data.
The problem is more fundamental: while we've solved the technical challenges of making systems communicate, we've done it through expensive, proprietary solutions that don't scale, and we've completely failed to agree on what veterinary data actually means.
This fragmentation has broader implications beyond just technical inconvenience. As Jason DeFrancesco of VistaVet recently argued in "Veterinary Medicine Needs a Nervous System," the lack of interoperability prevents veterinary medicine from functioning as a coordinated system capable of learning from collective experience and responding to emerging challenges. When our practice management systems, diagnostic labs, and clinical tools can't share information meaningfully, we lose the ability to identify trends, improve outcomes systematically, and leverage the full potential of our collective clinical knowledge.
But there's hope. By understanding how interoperability actually works—and learning from both the partial successes and remaining failures in veterinary medicine—we can build the foundation for truly integrated veterinary practice management that leverages AI across all systems seamlessly.
The Three Layers of Interoperability: From Connections to Meaning
When most people think about making software systems work together, they imagine it's primarily a technical challenge—getting System A to send data to System B. But successful interoperability requires coordination across three distinct layers, each building on the previous one.
Layer 1: The Connection Layer - How Systems Talk
At the foundation, systems need to establish basic communication. This is like deciding whether to send information by email, fax, or carrier pigeon—the method matters, but it's just the transport mechanism.
In veterinary practice, you see this variety constantly:
File dumps: Your lab analyzer exports results to a CSV file that someone manually imports into your practice management system
Direct database access: Client communication system writes directly to your PIMS database
Push APIs: Your diagnostic lab pushes results to your practice management system when they're ready
Pull APIs: Your practice management system periodically checks for new results from various sources
Each approach has trade-offs. File dumps are simple but require manual intervention. APIs are elegant but require both systems to support them. Direct database access is fast but risky.
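To make the pull-API pattern concrete, here is a minimal Python sketch of the practice side periodically polling a hypothetical laboratory endpoint for newly finalized results. The URL, the bearer-token authentication, and the response fields are illustrative assumptions, not any vendor's actual interface.

```python
# Minimal sketch of a "pull API" connection: the practice side polls a
# hypothetical lab endpoint for results finalized since the last check.
import time
from datetime import datetime, timezone
import requests

LAB_RESULTS_URL = "https://lab.example.com/api/v1/results"  # hypothetical endpoint
API_KEY = "replace-with-real-credentials"                   # hypothetical auth

def fetch_new_results(since: str) -> list:
    """Ask the lab for results finalized after the given ISO-8601 timestamp."""
    resp = requests.get(
        LAB_RESULTS_URL,
        params={"finalized_after": since},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def poll(interval_seconds: int = 300) -> None:
    """Pull-style integration: check every few minutes, then write to the PIMS."""
    last_check = datetime.now(timezone.utc).isoformat()
    while True:
        for result in fetch_new_results(last_check):
            print("would write to PIMS:", result.get("accession_id"))  # placeholder for the PIMS write step
        last_check = datetime.now(timezone.utc).isoformat()
        time.sleep(interval_seconds)
```

A push API inverts the arrow: the lab calls a webhook your system exposes, which avoids polling but requires your practice software to be reachable and to understand the lab's payload format.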
The key insight: the connection method isn't what determines success or failure. Plenty of veterinary integration projects have failed even with sophisticated API connections. The real challenges lie in the upper layers.
Layer 2: The Structural Layer - Agreeing on Data Format
Once systems can exchange information, they need to agree on how that information is organized. This is like deciding whether to write a letter in paragraph form, bullet points, or a formal business letter template—everyone needs to understand the structure to interpret the content correctly.
Veterinary software uses numerous structural formats:
CSV files: Simple but limited—difficult to represent complex hierarchical data
JSON: Flexible and widely supported, but requires careful schema design
XML: Powerful for complex data structures but verbose and harder to work with
HL7: The healthcare standard for clinical data exchange (rarely used in veterinary medicine)
DICOM: The imaging standard that actually works across veterinary systems
The structural layer is where many veterinary interoperability projects stumble. Two systems might both export "patient data," but if one uses separate fields for "patient_name_first" and "patient_name_last" while the other uses "patient_full_name," automated integration requires a hand-written field mapping for every pairing of systems.
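A small Python sketch, with invented field names, shows what that structural mismatch costs: both records describe the same patient, but each source needs its own hand-maintained mapping before the data can be merged.

```python
# Two systems both export "patient data", but with different structures.
# These records and field names are made up for illustration.
record_from_system_a = {"patient_name_first": "Bella", "patient_name_last": "Smith", "species": "Canine"}
record_from_system_b = {"patient_full_name": "Bella Smith", "species_code": "DOG"}

def normalize_a(rec: dict) -> dict:
    """Map System A's structure onto a common internal schema."""
    return {
        "patient_name": f"{rec['patient_name_first']} {rec['patient_name_last']}",
        "species": rec["species"].lower(),
    }

def normalize_b(rec: dict) -> dict:
    """System B needs its own, separately maintained mapping."""
    species_lookup = {"DOG": "canine", "CAT": "feline"}  # illustrative lookup
    return {
        "patient_name": rec["patient_full_name"],
        "species": species_lookup.get(rec["species_code"], "unknown"),
    }

# Every new source format means another hand-written mapping like these.
print(normalize_a(record_from_system_a) == normalize_b(record_from_system_b))  # True
```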
Layer 3: The Semantic Layer - What Things Actually Mean
The most complex layer involves agreeing on what things are called and how they're coded. Even if two systems can exchange data in the same format, they need to agree on terminology and meaning.
This is where veterinary medicine faces its biggest challenge. Consider how many ways practices might record the same condition:
"Vomiting" vs. "Emesis" vs. "V+" vs. "Gastric emptying disorder"
"IMHA" vs. "Immune-mediated hemolytic anemia" vs. "Autoimmune hemolytic anemia"
"Heartworm positive" vs. "HW+" vs. "Dirofilaria immitis infection"
Human medicine solved this through standard terminologies like SNOMED-CT and ICD-10. When a human hospital system records a diagnosis, it uses standardized codes that any other system can interpret correctly. Veterinary medicine has no widely adopted equivalent. [I explored how SNOMED-CT's veterinary extension could solve this problem in detail in a previous post about standardized terminology.]
This semantic chaos means that even when systems can exchange data successfully, that data often can't be meaningfully integrated or analyzed across systems.
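To see why this matters for software, here is a deliberately tiny Python sketch of semantic normalization: a lookup table that maps several free-text spellings onto one shared concept. The concept identifiers are placeholders, not real SNOMED-CT codes, and a real mapping would need thousands of entries plus NLP for anything it has never seen.

```python
# Illustrative synonym table: the concept IDs are placeholders, not real
# SNOMED-CT codes. A production system would maintain this mapping against
# a standard terminology and handle unseen phrasings with NLP.
SYNONYMS = {
    "vomiting": "CONCEPT:EMESIS",
    "emesis": "CONCEPT:EMESIS",
    "v+": "CONCEPT:EMESIS",
    "imha": "CONCEPT:IMHA",
    "immune-mediated hemolytic anemia": "CONCEPT:IMHA",
    "autoimmune hemolytic anemia": "CONCEPT:IMHA",
    "heartworm positive": "CONCEPT:DIROFILARIASIS",
    "hw+": "CONCEPT:DIROFILARIASIS",
    "dirofilaria immitis infection": "CONCEPT:DIROFILARIASIS",
}

def to_concept(free_text: str) -> str:
    """Map a free-text entry to a shared concept, or flag it for review."""
    return SYNONYMS.get(free_text.strip().lower(), "UNMAPPED")

print(to_concept("Emesis"))         # CONCEPT:EMESIS
print(to_concept("HW+"))            # CONCEPT:DIROFILARIASIS
print(to_concept("Gastric upset"))  # UNMAPPED -> needs human or NLP review
```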
The Partial Success Stories: Connection and Structure Without Standards
Before examining the complete failures, let's understand what partial success looks like. Veterinary medicine has actually achieved limited interoperability in several areas—but always through expensive, proprietary solutions that create new problems even as they solve old ones.
DICOM: The One True Standard
In veterinary diagnostic imaging, we have genuine interoperability success: DICOM (Digital Imaging and Communications in Medicine). Walk into almost any veterinary practice with digital radiography, and you'll find something remarkable: the X-ray machine from Vendor A talks seamlessly to the PACS system from Vendor B, which displays images perfectly in the practice management system from Vendor C. This just works, across species, practice types, and vendor combinations.
DICOM succeeds because it addresses all three interoperability layers comprehensively:
Connection Layer: DICOM defines exactly how imaging devices connect and authenticate with receiving systems.
Structural Layer: DICOM specifies precise data formats for images, metadata, and associated clinical information.
Semantic Layer: DICOM includes standardized terminology for anatomical regions, imaging procedures, and equipment specifications.
The result? True plug-and-play interoperability that just works.
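As a concrete illustration of what a real standard buys you, here is a minimal Python sketch using the open-source pydicom library (the file path is a placeholder): the same few lines read patient, modality, and anatomy metadata regardless of which vendor produced the image, because DICOM fixes both the file structure and the metadata vocabulary.

```python
# Minimal sketch using the pydicom library: because DICOM standardizes the
# file structure and the metadata vocabulary, the same code works on a
# study from essentially any vendor's imaging equipment.
import pydicom

ds = pydicom.dcmread("study/IM000001.dcm")  # path is illustrative

print("Patient:   ", ds.get("PatientName", ""))
print("Modality:  ", ds.get("Modality", ""))          # e.g. "CR", "DX", "US"
print("Body part: ", ds.get("BodyPartExamined", ""))  # standardized terms
print("Study date:", ds.get("StudyDate", ""))
```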
PIMS-LIMS Integration: Working, But at What Cost?
The integration between Practice Information Management Systems (PIMS) and Laboratory Information Management Systems (LIMS) represents veterinary medicine's other partial interoperability success. Your practice management system automatically receives results from IDEXX, Antech, Zoetis, and possibly other diagnostic laboratories without manual data entry.
But here's the critical insight: this interoperability comes at enormous hidden costs.
Every laboratory connection is completely proprietary:
Connecting to IDEXX lab data works differently than connecting to Antech
Antech integration differs from Zoetis integration
Each requires separate development efforts, different APIs, distinct data formats
When labs upgrade systems, integrations often break and require redevelopment
Some PIMS-to-PIMS integrations exist as well, but they suffer from the same challenges. The pattern even holds within individual PIMS vendors, where a single company uses different integration methods across its own products:
IDEXX ezyVet integration differs from IDEXX Neo integration
Both differ from IDEXX Cornerstone integration
Each requires separate development and maintenance efforts
These integrations work at the connection and structural layers—labs successfully push results to practice management systems. But the absence of standards means that every integration is a custom engineering project.
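A short Python sketch of the pattern this forces on developers: one adapter per laboratory, each hand-built against that lab's proprietary interface. Every field name here is invented for illustration rather than taken from any vendor's real API.

```python
# Sketch of why every proprietary connection is its own engineering project:
# each lab needs a separately built and maintained adapter. Field names are
# invented for illustration.
from abc import ABC, abstractmethod

class LabAdapter(ABC):
    """One adapter per laboratory, each developed and maintained separately."""
    @abstractmethod
    def to_common_result(self, payload: dict) -> dict: ...

class LabOneAdapter(LabAdapter):
    def to_common_result(self, payload: dict) -> dict:
        return {"test": payload["testName"], "value": payload["resultValue"], "units": payload["units"]}

class LabTwoAdapter(LabAdapter):
    def to_common_result(self, payload: dict) -> dict:
        return {"test": payload["analyte"], "value": payload["numeric_result"], "units": payload["uom"]}

# Adding a third lab, or surviving one lab's format change, means writing or
# rewriting another adapter; nothing learned from the first two carries over.
ADAPTERS = {"lab_one": LabOneAdapter(), "lab_two": LabTwoAdapter()}
```

Multiply this by every lab, every PIMS, and every version change, and the maintenance burden becomes clear.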
The Hidden Costs of Proprietary Integration
Consider what this means for a third-party developer trying to build an AI clinical decision support tool:
To integrate with the top 5 PIMS systems, they need to build 5 completely different integrations
To connect with the top 3 diagnostic labs, that's 3 more unique integrations
Each integration requires different authentication, data formats, and update mechanisms
When any vendor updates their system, integrations may break
Supporting 8 systems means maintaining 8 separate codebases
The development effort doesn't just add up with each new system: every point-to-point connection is another codebase to build, test, and maintain, and the burden compounds each time any vendor changes something. This is why most veterinary AI tools remain isolated applications rather than integrated practice solutions.
The Integration-as-a-Service Band-Aid
Recognizing these challenges, several companies have emerged to act as integration intermediaries: VetData, Datapoint (acquired by IDEXX in 2017), and Bitwerx are examples of this approach. These services promise to solve the integration complexity by providing a single API that connects to multiple PIMS systems.
The value proposition is appealing: Instead of building separate integrations to ezyVet, Cornerstone, Neo, AVImark, and others, developers can integrate once with the intermediary service and gain access to all connected PIMS.
But these solutions face the same fundamental challenges, just centralized:
They still must build and maintain separate proprietary connections to each PIMS
Each PIMS system change still requires custom adaptation and testing
They create their own proprietary API layer, adding another integration dependency
When they add new PIMS support, existing integrations may need updates
They each only integrate with a small subset of PIMS systems
If the service provider goes out of business or changes focus, all dependent applications break
Most critically, these services operate only at the connection and structural layers. They can deliver patient demographic data and basic clinical information in a consistent format, but they don't solve the semantic layer problems. A diagnosis of "DM" in one practice still comes through as "DM" while "diabetes mellitus" in another practice remains "diabetes mellitus"—the terminology chaos persists.
The New Dependencies and Risks
Integration-as-a-service creates new categories of risk:
Single Point of Failure: Your application's connection to dozens of practices now depends on one intermediary service's uptime and performance.
Vendor Lock-In: Switching integration providers requires rebuilding connections, similar to switching PIMS vendors.
Cost Scaling: As these services grow and gain market power, they can increase pricing or change terms, affecting all dependent applications.
Feature Limitations: You're constrained by whatever data fields and capabilities the intermediary chooses to support across all connected PIMS.
The development effort doesn't disappear—it just gets centralized and creates new dependencies. While this approach can accelerate initial development, it doesn't solve the underlying interoperability problems and may actually make long-term solutions more difficult by entrenching proprietary approaches.
The Complete Failure: The Semantic Layer Problem
Even when we successfully connect systems and exchange data, we face veterinary medicine's greatest interoperability challenge: no two practices use the same terminology for anything.
The Tower of Babel Reality
Imagine you're building an AI system that needs to analyze treatment outcomes across practices. You successfully integrate with five different PIMS systems and can extract diagnostic and treatment data from all of them. But when you try to analyze the data, you discover that the same medical condition appears as:
Practice A: "DM" or "ENDO-01" (custom code)
Practice B: "Diabetes" or "Type 1 diabetes"
Practice C: "Endocrine disorder" or whatever the veterinarian feels like typing that day
All three practices are recording the same medical condition, but your AI system sees them as completely different diseases. Even sophisticated natural language processing struggles with this variability, especially when veterinarians use abbreviations, clinical shorthand, or practice-specific terminology.
The Multi-Practice Data Challenge
This semantic inconsistency makes several critical applications impossible:
Population Health Analytics: You can't track disease prevalence across practices when the same disease is recorded differently in each system.
AI Training Data: Machine learning models trained on Practice A's "DM" data won't recognize Practice B's "diabetes mellitus" as the same condition.
Quality Improvement: Comparing treatment outcomes requires first solving a massive terminology translation problem.
Research Collaboration: Multi-practice studies spend enormous effort harmonizing terminology before any actual analysis can begin.
The False Hope of Point-of-Care Coding
The obvious solution seems simple: just make everyone use standard codes like SNOMED-CT when entering data. This approach has failed everywhere it's been tried.
Veterinarians don't want to become medical coders. They're focused on patient care, not database management. Forcing structured data entry at the point of care slows down clinical workflows and increases cognitive burden. Even human medicine, with massive regulatory incentives and dedicated coding staff, struggles with coding accuracy and consistency.
But there's a deeper problem: forcing coding at the point of care fundamentally constrains clinical expressiveness.
Consider a complex case: a 12-year-old Golden Retriever with lethargy, mild azotemia, and a heart murmur that wasn't present six months ago. The veterinarian suspects early kidney disease but can't rule out cardiac involvement, and the breed predisposition for both conditions makes the diagnostic picture unclear.
Standard coding systems force this nuanced clinical picture into rigid categories. Is this "chronic kidney disease" or "heart murmur" or "lethargy"? The coding system demands a choice, but the clinical reality is uncertainty and interconnected possibilities. The veterinarian ends up either oversimplifying the case to fit the codes or spending excessive time trying to find codes that capture the full clinical complexity.
This loss of expressiveness isn't just inconvenient—it's clinically dangerous. When systems force artificial precision where uncertainty exists, they lose critical information about the veterinarian's clinical reasoning, differential considerations, and diagnostic confidence levels. Rich clinical narratives that capture the complexity and uncertainty of real cases get reduced to simplistic code combinations that miss the subtleties crucial for patient care.
The solution has to happen on the backend, not at the point of care. We need systems that preserve the full richness of clinical expression while providing standardized coding for data sharing and analysis.
Why This Matters More Than Ever: The AI Integration Imperative
The interoperability challenges that seemed merely inconvenient in the era of standalone software become critical as AI proliferates across veterinary practice.
AI Systems Need Comprehensive Data
Modern AI tools perform best when they can access complete patient information. An AI system analyzing radiographs benefits from knowing the patient's clinical history, laboratory results, and previous imaging studies. But when that information exists in incompatible formats with inconsistent terminology across multiple systems, the AI operates with incomplete understanding.
The Training Data Crisis
Large language models and machine learning systems require vast amounts of structured, consistent data for training. When veterinary data exists in semantic chaos across thousands of isolated systems, it becomes extremely difficult to aggregate for AI training purposes. This fragmentation may be limiting the development of powerful veterinary-specific AI tools.
Clinical Decision Support Failures
The most valuable AI applications provide real-time clinical decision support—suggesting differential diagnoses, flagging drug interactions, recommending diagnostic tests. These systems only work when they can access comprehensive, consistently coded patient data from all relevant sources.
Without semantic consistency, an AI system might miss that "DM" in the PIMS, "diabetes" in the lab results, and "high glucose" in the clinical notes all refer to the same condition requiring coordinated treatment.
The Path Forward: Backend Coding and Translation
The solution to veterinary interoperability lies not in forcing point-of-care standardization, but in intelligent backend processing that preserves clinical workflow while enabling data integration.
The Translation Layer Approach
Instead of making veterinarians code their entries, we need systems that do the following (see the sketch after this list):
Capture rich clinical narratives in whatever terminology veterinarians naturally use
Apply intelligent coding using natural language processing and veterinary-specific models
Map to standard terminologies like SNOMED-CT Veterinary Extension for data sharing
Maintain bidirectional translation so data can be shared in standard formats but displayed in familiar terminology
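Here is a minimal Python sketch of that backend pipeline, assuming a hypothetical extract_concepts NLP step and placeholder concept codes: the narrative is stored exactly as written, and standardized codes are attached alongside it rather than replacing it.

```python
# Minimal sketch of a backend translation layer: the original narrative is
# always preserved, and standardized codes are attached alongside it. The
# extract_concepts step stands in for a veterinary NLP model, and the codes
# are placeholders, not real terminology identifiers.
from dataclasses import dataclass, field

@dataclass
class CodedRecord:
    narrative: str                                 # exactly what the veterinarian wrote
    concepts: list = field(default_factory=list)   # standardized codes attached on the backend

def extract_concepts(narrative: str) -> list:
    """Stand-in for a veterinary NLP model; here, a trivial keyword lookup."""
    lookup = {"vomiting": "VET-CONCEPT-001", "diabetes": "VET-CONCEPT-002"}  # placeholder codes
    return [code for term, code in lookup.items() if term in narrative.lower()]

def ingest(narrative: str) -> CodedRecord:
    """Capture the note as written, then attach codes without altering it."""
    return CodedRecord(narrative=narrative, concepts=extract_concepts(narrative))

record = ingest("3d hx of vomiting; r/o diabetes vs pancreatitis")
print(record.narrative)  # unchanged clinical note
print(record.concepts)   # ['VET-CONCEPT-001', 'VET-CONCEPT-002']
```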
Learning from Human Healthcare
Human medicine is moving toward this approach with systems that:
Use natural language processing to extract coded concepts from clinical notes
Apply standard terminologies for data sharing while preserving original documentation
Leverage large language models to improve coding accuracy and consistency
Enable semantic interoperability without disrupting clinical workflows
Veterinary medicine needs equivalent systems adapted for multi-species complexity and veterinary-specific terminology.
The Technology Foundation Exists
The tools for solving veterinary semantic interoperability are available:
SNOMED-CT Veterinary Extension: Provides standardized codes for veterinary diagnoses, procedures, and clinical findings across species. [For a detailed exploration of how SNOMED-CT can be implemented in veterinary practice without disrupting clinical workflows, see my previous article on veterinary terminology standardization.]
Veterinary Natural Language Processing: Emerging models trained specifically on veterinary clinical text can identify and code medical concepts automatically.
Translation Mapping Services: Systems that can map between different terminology systems and learn from usage patterns across practices.
Modern APIs and Data Standards: HL7 FHIR provides the structural foundation for healthcare data exchange and can be adapted for veterinary use.
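As one example of what a structural standard provides, here is a sketch of a single glucose result expressed in the shape of an HL7 FHIR Observation resource, written as a Python dict. The specific codes and identifiers are illustrative; a veterinary implementation would need its own agreed terminology bindings.

```python
# Sketch of a lab finding in the shape of an HL7 FHIR Observation resource.
# Code values and identifiers are placeholders chosen for illustration.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",   # or a veterinary code system
            "code": "2345-7",               # illustrative serum glucose code
            "display": "Glucose [Mass/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/bella-smith"},   # hypothetical patient id
    "valueQuantity": {"value": 412, "unit": "mg/dL"},
}
```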
Key Insights for Veterinary Practice
🔍 Understand the Hidden Integration Costs: When evaluating software, ask about integration capabilities and ongoing maintenance costs. Proprietary integrations may work initially but create long-term dependency and upgrade risks.
📋 Document Your Terminology Patterns: Start cataloging how your practice names conditions, procedures, and findings. This prepares you for backend coding systems and reveals opportunities for internal consistency improvements.
🔄 Evaluate Integration Scalability: Choose software vendors that are moving toward standards-based approaches rather than purely proprietary solutions. Ask specifically about FHIR support and standard terminology adoption plans.
🤝 Support Backend Coding Initiatives: Look for AI tools and practice management systems that offer intelligent coding services rather than forcing manual standardization at data entry.
📊 Prepare for Semantic Integration: Understand that the most powerful AI applications will require consistent terminology across systems. Practices that invest in backend translation capabilities will have significant advantages.
🏗️ Think Long-Term: Make technology decisions with semantic interoperability in mind. Systems that can export and import standard-coded data provide more flexibility as translation services become available.
💡 Demand Transparency: Ask vendors specifically about their semantic layer capabilities. How do they handle terminology variation? What standard codes do they support? How do they plan to enable data sharing across practices?
Conclusion
Veterinary medicine has achieved partial interoperability through enormous investment in proprietary solutions. We can exchange data between PIMS and LIMS, integrate third-party applications, and build functional software ecosystems. But we've done it the expensive, non-scalable way.
The proliferation of AI tools makes this approach unsustainable. Every new integration requires custom development. Every vendor change breaks existing connections. Most critically, the semantic chaos prevents us from building the intelligent, data-driven practice management systems that could transform patient care.
The solution isn't forcing veterinarians to become medical coders. It's building intelligent backend systems that can translate between veterinary terminology and standard codes, enabling semantic interoperability without disrupting clinical workflows.
The economic incentives are aligning as AI companies need consistent training data, practice management vendors need competitive differentiation, and veterinary practices need systems that actually work together. The foundation exists through DICOM's success, emerging veterinary terminology standards, and advancing natural language processing capabilities.
The choice is clear: continue with expensive proprietary solutions that don't scale, or coordinate industry-wide on comprehensive standards that enable the AI-powered, integrated veterinary practices of the future.
What integration challenges frustrate you most in daily practice? Have you experienced the hidden costs of proprietary integrations when systems get upgraded or vendors change? Share your experiences—understanding real-world integration pain points helps identify where standardization efforts should focus first.