I have sat through more compliance conversations in healthcare than I can count.
The pattern goes something like this: a vendor says they are HIPAA compliant. The buyer nods. Someone asks about encryption. The vendor says yes. Someone asks about the Business Associate Agreement. The vendor says yes. And the conversation moves on to features.
That is how most healthcare organizations end up with HIPAA-compliant AI agents that are not actually HIPAA-compliant in any meaningful operational sense. They checked the boxes. They did not check the architecture.
I am writing this because the marketplace we are building at DRUID sits in the middle of this problem. We onboard publishers who build healthcare solutions. We work with buyers who deploy them in regulated environments. And we see, repeatedly, the gap between what compliance means on a vendor's website and what it means in a production deployment.
What HIPAA actually requires from a conversational AI system
Let me be direct about this because the confusion is widespread.
HIPAA is not a product certification. There is no HIPAA seal. No government agency tests your chatbot and stamps it approved. What HIPAA requires is a set of administrative, physical, and technical safeguards for any system that creates, receives, maintains, or transmits protected health information.
For HIPAA-compliant AI agents, that means the following has to be true, in production, not just in a security questionnaire.
- All patient data must be encrypted in transit and at rest. That includes conversation logs, transcripts, session metadata, and any data extracted during intake or scheduling. TLS 1.2+ for transit. AES-256 for storage. Non-negotiable. (The sketch after this list shows what encryption, role checks, and audit logging look like in code.)
- Access controls must enforce least privilege. Not everyone who touches the platform should see patient data. Role-based access, audit trails, and session timeouts. The system has to know who accessed what, when, and why.
- A Business Associate Agreement must be in place between the healthcare organization and the AI vendor. If the vendor handles PHI, they are a business associate under HIPAA. If they tell you a BAA is not necessary, that is a red flag, not a simplification.
- Audit logging must be immutable and retained. Every interaction that involves PHI needs a tamper-proof log. This is not optional. It is how you demonstrate compliance during an audit or a breach investigation.
- Breach notification procedures must exist and be tested. If patient data is exposed, HIPAA requires notification without unreasonable delay, and no later than 60 days after discovery. The vendor needs to have a documented process, not a promise.
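To make a few of those bullets concrete, here is a minimal sketch in Python of encryption at rest, a deny-by-default role check with a session timeout, and a tamper-evident audit record. It assumes the open-source cryptography package; the roles and file paths are invented for illustration, and key management, identity, and log shipping, which are the genuinely hard parts, are omitted. This is not any vendor's implementation, just the shape of the requirement.

```python
# Illustrative sketch: AES-256 at rest, least-privilege checks, hash-chained audit log.
import hashlib
import json
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

ROLE_PERMISSIONS = {                       # least privilege: deny by default
    "scheduler": {"write_appointment"},
    "care_team": {"read_transcript"},
}
SESSION_TIMEOUT = 15 * 60                  # seconds of allowed idle time

def authorize(role: str, permission: str, last_activity: float) -> bool:
    if time.time() - last_activity > SESSION_TIMEOUT:
        return False                       # stale session: force re-authentication
    return permission in ROLE_PERMISSIONS.get(role, set())

def encrypt_transcript(key: bytes, transcript: str, session_id: str) -> bytes:
    """AES-256-GCM: confidentiality plus integrity for PHI at rest."""
    nonce = os.urandom(12)                 # unique per encryption, never reused
    ct = AESGCM(key).encrypt(nonce, transcript.encode(), session_id.encode())
    return nonce + ct                      # store the nonce alongside the ciphertext

def append_audit_record(path: str, actor: str, action: str, resource: str) -> None:
    """Each record commits to the previous one, so silent edits break the chain."""
    prev = "0" * 64
    try:
        with open(path, "rb") as f:
            prev = hashlib.sha256(f.readlines()[-1]).hexdigest()
    except (FileNotFoundError, IndexError):
        pass                               # first record in a new log
    rec = {"ts": time.time(), "actor": actor, "action": action,
           "resource": resource, "prev": prev}
    with open(path, "a") as f:
        f.write(json.dumps(rec, sort_keys=True) + "\n")

key = AESGCM.generate_key(bit_length=256)  # in production: a KMS or HSM, never source code
if authorize("scheduler", "write_appointment", last_activity=time.time()):
    blob = encrypt_transcript(key, "Patient: I need to reschedule...", "session-1234")
    append_audit_record("audit.log", "scheduler-7", "store_transcript", "session-1234")
```

The detail worth noticing is the prev hash: because each audit record commits to the one before it, editing or deleting a record breaks the chain. That is what "tamper-proof" means in practice.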
The questions most buyers do not ask (but should)
Here is where the compliance conversation usually falls apart.
Buyers ask: “Are you HIPAA compliant?” That is the wrong question. The right questions are:
- Where does patient data reside during a conversation? Is it processed in memory only, or is it stored? If stored, where? On what infrastructure? In what jurisdiction?
- What happens to conversation data after the session ends? Is it retained? For how long? Who can access it? Can it be purged on demand?
- Does the LLM powering the agent have access to PHI? If the system uses a third-party model, does patient data leave the environment? Is the model fine-tuned on patient data? If so, where and how is that data governed? (The redaction sketch after this list makes this concrete.)
- Can the system be deployed on-premises? For organizations that cannot allow PHI to leave their network, cloud-only solutions are a non-starter regardless of what certifications they hold.
- What happens during a model update? When the vendor pushes a new version, does that change the data flow? Does it introduce new third-party dependencies? Does the BAA cover the updated architecture?
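On the third question, the data flow is easier to reason about with a concrete picture. Here is a deliberately simplified sketch of stripping identifier patterns before anything is sent to a third-party model. A handful of regexes is nowhere near sufficient de-identification; real deployments need purpose-built tooling or an in-network model. The sketch only shows where the boundary has to sit.

```python
# Illustrative only: what the external model is allowed to see.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]

def redact(text: str) -> str:
    """Replace obvious identifier patterns before any out-of-network call."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

message = "Hi, this is 555-867-5309, MRN: 0042, rescheduling my appointment."
outbound = redact(message)   # only this leaves the environment
print(outbound)              # "Hi, this is [PHONE], [MRN], rescheduling my appointment."
```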
I am not listing these to be adversarial. I am listing them because I have seen deployments delayed by months when a CIO or a CISO discovers mid-implementation that the architecture does not match what was presented during the sales process.
If you are building in-house rather than evaluating a vendor, we have a separate technical walkthrough on how to build a HIPAA-compliant chatbot that covers the implementation step by step.
Why certifications matter but are not sufficient
SOC 2 Type II, ISO 27001, HITRUST, GDPR. These are all important. The certifications represent independent verification that a vendor has implemented security controls and follows them consistently; GDPR is a regulation rather than a certification, but demonstrating compliance with it takes the same discipline.
But a certification tells you what controls exist at the platform level. It does not tell you how a specific solution, built by a specific publisher, handles data in a specific workflow.
This is the gap that matters in a marketplace. The platform can be SOC 2 certified. The individual healthcare solution running on that platform still needs to be designed with PHI handling in mind. The data flow from the patient's message to the EHR record needs to be mapped and governed.
At DRUID, we handle this at both layers. The platform holds SOC 2 Type II and ISO 27001 certifications and is GDPR compliant. DRUID has been security-vetted by the U.S. Air Force and the Israeli Air Force. On-premise deployment is available. But we also require that healthcare solutions on the marketplace follow specific data handling standards before they go live.
That distinction between platform compliance and solution compliance is something buyers should ask about with any vendor. If the vendor cannot explain both layers clearly, that is a problem.
On-premise is not a legacy requirement. It is a compliance tool.
There is a narrative in the market that on-premise deployment is old-fashioned. That everything should be in the cloud. That modern architectures do not need local infrastructure.
In healthcare, that narrative hits a wall fast.
Some health systems cannot, by policy or by regulation, allow PHI to leave their network. Period. It is not a preference. It is a constraint. And for those organizations, any AI agent that processes patient conversations must run entirely within their infrastructure.
That is why on-premise deployment is not a nice-to-have on the DRUID platform. It is a standard option. The same agent that runs in the cloud for one customer runs on-premise for another. Same functionality, same integrations, same EHR connectors. The deployment model changes. The capabilities do not.
One of the largest children's hospitals in the U.S. deployed DRUID for patient scheduling and intake automation. 4.3 million patient encounters per year. 15,000 medical record updates per week. The agent runs 24/7 in English and Spanish. Compliance was a prerequisite, not a feature. It had to be in place before a single patient interaction occurred.
EHR integration is where compliance gets tested in production
I want to flag this separately because it is the place where theoretical compliance meets operational reality.
When an AI agent books an appointment, it writes to Epic or Cerner. When it collects insurance information during intake, it pushes data to the EHR. When it verifies eligibility, it pulls data from a payer system.
Each of those data movements is a PHI transaction. Each one needs to be encrypted, logged, authorized, and recoverable.
DRUID integrates through standard APIs and FHIR connectors. The EHR stays the system of record. The AI agent is the conversational layer. But the integration itself has to be built with compliance in mind: credential management, token handling, error logging, and retry logic that does not inadvertently cache PHI.
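Here is a sketch of what that looks like for a single write, assuming a generic FHIR R4 endpoint. The base URL and token handling are placeholders, not DRUID's connector; real Epic or Cerner integrations add OAuth scopes and vendor-specific review. The point is the logging discipline: the transaction is recorded, the payload never is.

```python
# Sketch: FHIR Appointment write with retries that never log or cache the PHI payload.
import logging
import time

import requests

log = logging.getLogger("ehr")
FHIR_BASE = "https://ehr.example.org/fhir"       # hypothetical endpoint

def write_appointment(resource: dict, token: str, retries: int = 3) -> str:
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/fhir+json"}
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(f"{FHIR_BASE}/Appointment", json=resource,
                                 headers=headers, timeout=10)
            if resp.status_code in (200, 201):
                # Log the transaction, never the payload: resource type, status,
                # and the server-assigned id are enough for the audit trail.
                created_id = resp.json().get("id", "")
                log.info("Appointment write ok id=%s attempt=%d", created_id, attempt)
                return created_id
            log.warning("Appointment write failed status=%d attempt=%d",
                        resp.status_code, attempt)
        except requests.RequestException as exc:
            # Log the exception class only; exception messages can include full
            # URLs, which in FHIR often embed identifiers.
            log.warning("Appointment write error type=%s attempt=%d",
                        type(exc).__name__, attempt)
        time.sleep(2 ** attempt)                  # exponential backoff between attempts
    raise RuntimeError("Appointment write failed after retries")
```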
This is not glamorous work. It is the kind of engineering that never appears in a product demo. But it is the difference between a system that passes an audit and one that creates a liability.
What a compliant deployment actually looks like
Let me bring this together with what a real deployment path looks like, not the theoretical version.
A healthcare organization evaluates prebuilt AI agents for enterprises on the DRUID Marketplace. Before any patient data touches the system, four things happen.
- The BAA is executed. DRUID signs as a business associate. The terms cover the specific data types, the specific workflows, and the specific infrastructure.
- The deployment model is confirmed. Cloud or on-premise. If on-premise, the infrastructure is provisioned within the customer's network.
- The EHR integration is configured. FHIR connectors, API credentials, and data mapping. The integration is tested with synthetic data before any PHI flows. (The sketch after this list shows what such a test can look like.)
- Access controls and audit logging are configured. Roles, permissions, retention policies. The system is ready for an audit before it goes live.
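The third step, testing with synthetic data, can be as simple as a smoke test that round-trips an obviously fake patient through a sandbox before go-live. The sandbox URL below is a placeholder; the discipline is that tests like this gate production, and they never touch real records.

```python
# Sketch of a pre-go-live smoke test against a non-production FHIR sandbox.
import requests

SANDBOX_BASE = "https://sandbox.example.org/fhir"   # hypothetical sandbox, never production

SYNTHETIC_PATIENT = {
    "resourceType": "Patient",
    "name": [{"family": "Zzztest", "given": ["Synthetic"]}],  # obviously fake
    "birthDate": "1900-01-01",
    "identifier": [{"system": "urn:test", "value": "SYNTH-0001"}],
}

def test_patient_round_trip():
    # Write the synthetic record...
    created = requests.post(f"{SANDBOX_BASE}/Patient", json=SYNTHETIC_PATIENT,
                            headers={"Content-Type": "application/fhir+json"},
                            timeout=10)
    assert created.status_code in (200, 201)
    patient_id = created.json()["id"]

    # ...then read it back and confirm the mapping survived the round trip.
    fetched = requests.get(f"{SANDBOX_BASE}/Patient/{patient_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"][0]["family"] == "Zzztest"
```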
Then it goes live. The agent starts handling patient interactions. Scheduling, intake, insurance verification, and reminders. Every interaction is logged, encrypted, and governed.
That is what HIPAA-compliant AI agents look like in practice. Not a badge on a website. A set of operational controls that are in place before the first patient message.
Compliance is a prerequisite, not a differentiator
I will end with a perspective that might be unpopular with some vendors.
Compliance should not be a selling point. It should be a given. The moment a vendor positions HIPAA compliance as a feature rather than a baseline, they are telling you something about the maturity of their platform.
The DRUID Marketplace exists so that healthcare buyers can focus on which workflow to automate first, not on whether the system can be trusted with patient data. That question should already be answered before the conversation starts.
If you are evaluating conversational AI for healthcare, start with compliance. Verify it independently. Ask the hard questions. And then move on to the work that actually matters: reducing the operational drag in patient intake, scheduling, and access workflows that your teams are still running manually.
Browse HIPAA-compliant healthcare solutions on the DRUID Marketplace →