HIMSS26 is happening this week in Las Vegas, and the conference programming tells you exactly where healthcare IT leadership anxiety is concentrated. The two dominant themes across 600+ sessions aren't new technology categories or shiny AI demos. They're AI governance and cybersecurity resilience. And increasingly, the industry is treating them as the same conversation.
This isn't a surprise to anyone who's been watching the regulatory trajectory, but it is a milestone. When the largest healthcare IT conference in the world organizes its entire programming slate around governance and security, it signals that the industry has moved past "should we govern AI?" and into "how do we govern AI before something goes wrong?"
The Maturity Gap
The core tension at HIMSS26 is a familiar one, but it's getting worse. Healthcare organizations are deploying AI into clinical workflows, administrative operations, and patient-facing systems faster than they're building the structures to govern it. The enthusiasm is ahead of the infrastructure.
CereCore's Phil Sobol put it directly: the speed of AI deployment has significantly outpaced the organizational frameworks designed to govern it. Health systems that have established AI governance councils with clear ownership and formal oversight structures are more resilient. The ones that haven't are accumulating risk with every new AI tool they adopt.
This maturity gap shows up in predictable ways. Health systems don't know which AI tools are being used across their network. Clinical teams adopt AI assistants without formal approval processes. AI-generated outputs flow through EHR systems without policy checks. PHI ends up in contexts it shouldn't, not because of malicious intent, but because there's no automated enforcement layer preventing it.
The gap isn't about awareness. Healthcare IT leaders know governance matters. The gap is about operational capability. Knowing you need governance and having enforceable governance running across every surface where AI touches patient data are two very different things.
Why Governance and Cybersecurity Are Converging
HIMSS26 isn't treating AI governance and cybersecurity as separate tracks that happen to share a conference. They're converging because the threat model is converging.
Traditional healthcare cybersecurity focused on perimeter defense, endpoint protection, and data encryption. The threats were external: ransomware, phishing, unauthorized access. The controls were infrastructure-level: firewalls, identity management, network segmentation.
AI introduces a different attack surface. The threats aren't just external actors trying to break in. They're internal systems generating, processing, and distributing content that may violate policies, expose PHI, or make clinical recommendations without appropriate safeguards. An AI chatbot that surfaces a patient's medication history in a non-secure channel isn't a cybersecurity breach in the traditional sense. It's a governance failure. But the impact on patient privacy is identical.
This is why HIMSS26's cybersecurity sessions include topics like adversarial attacks on diagnostic algorithms, data poisoning in predictive models, and AI-driven phishing. The attack surface now includes the AI systems themselves, not just the infrastructure they run on.
For health IT leaders, this convergence means the cybersecurity team and the AI governance team can't operate independently. The CISO needs visibility into what AI systems are doing with patient data. The AI governance council needs cybersecurity controls embedded into AI workflows. The compliance team needs evidence that covers both domains.
What "Formal Oversight Structures and Tools" Actually Means
The most telling phrase from the HIMSS26 coverage is "formal oversight structures and tools." Health systems that have them are more resilient. Health systems that don't are exposed.
But what does this look like in practice? A governance council is a start, but a council is a committee. Committees meet monthly. AI systems generate content continuously. The gap between "the committee approved this AI tool" and "this AI tool is being governed in real time" is where the risk lives.
Formal oversight structures need three operational components.
First, policy definition. The organization's rules about how AI can interact with patient data, what clinical recommendations require human review, what content can be shared externally, and what disclosures are required need to exist as enforceable rules, not as PDFs in a SharePoint folder. These rules need to be specific enough to evaluate against (not "protect patient privacy" but "block external transmission of content when PHI has been accessed in the current workflow"), and they need to be versioned so auditors can see what was enforced at any point in time.
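To make that concrete, here's a minimal sketch of what such a rule might look like as data rather than prose. The schema is hypothetical (the field names, trigger format, and rule ID are illustrative, not drawn from any particular policy engine), but it shows the two properties that matter: the rule is specific enough for a machine to evaluate, and it carries a version.

```python
from dataclasses import dataclass

# Hypothetical schema for an enforceable, versioned policy rule.
# Field names are illustrative, not from any specific product or standard.

@dataclass(frozen=True)
class PolicyRule:
    rule_id: str
    version: str                 # versioned, so auditors can see what was enforced when
    applies_to: tuple[str, ...]  # which AI surfaces this rule governs
    triggers: dict               # machine-evaluable conditions, not prose
    action: str                  # "block", "redact", or "require_human_review"

NO_EXTERNAL_PHI = PolicyRule(
    rule_id="hipaa-min-necessary-001",
    version="2026.02.1",
    applies_to=("patient_chatbot", "ehr_assistant", "referral_agent"),
    triggers={"phi_accessed": True, "destination": "external"},
    action="block",
)
```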
Second, continuous enforcement. Policies that aren't enforced are just documentation. Enforcement means that every AI-generated output, every document share, every agent action is evaluated against the organization's policies before it reaches the patient, the provider, or the external recipient. This evaluation has to be automated because the volume of AI-generated content in a health system makes manual review physically impossible. A single AI chatbot handling patient intake might generate thousands of interactions per day. No compliance team can manually review that volume.
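Continuing the sketch above, an enforcement gate is conceptually simple: every output is checked against the active rules before release, and every decision, allow or block, is recorded. The helper names below are hypothetical, and a real gate would also handle redaction and human-review queues, not just blocking.

```python
from dataclasses import dataclass
from typing import Optional

def log_audit_event(rule_id, rule_version, surface, context, action) -> None:
    # Placeholder; the audit-trail sketch in the next section replaces it.
    pass

@dataclass
class Decision:
    allowed: bool
    reason: Optional[str] = None

def rule_matches(rule: PolicyRule, context: dict) -> bool:
    # A rule fires when every trigger matches the current workflow context.
    return all(context.get(key) == value for key, value in rule.triggers.items())

def enforce(surface: str, context: dict, active_rules: list[PolicyRule]) -> Decision:
    """Gate a single AI-generated output before it reaches its recipient."""
    for rule in active_rules:
        if surface in rule.applies_to and rule_matches(rule, context):
            log_audit_event(rule.rule_id, rule.version, surface, context, rule.action)
            return Decision(allowed=False, reason=rule.rule_id)
    log_audit_event(None, None, surface, context, "allow")
    return Decision(allowed=True)

# A chatbot reply drafted after PHI was accessed in the same workflow:
decision = enforce(
    surface="patient_chatbot",
    context={"phi_accessed": True, "destination": "external"},
    active_rules=[NO_EXTERNAL_PHI],
)
assert not decision.allowed  # blocked before it leaves the system
```

The important property is placement: the gate sits between the AI system and the recipient, so nothing reaches a patient, provider, or external party unevaluated.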
Third, audit trail generation. When CMS, OCR, or a state attorney general asks "how did you govern your AI systems during this period?", the answer needs to be a report, not a retrospective investigation. Every policy evaluation, every violation detected, every enforcement action taken should produce an immutable record. The audit trail isn't something you assemble before an audit. It's something your governance system generates automatically as a byproduct of doing its job.
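One common way to make those records tamper-evident is to hash-chain them, so altering any past entry breaks every hash after it. The sketch below shows the idea; this is an assumed design, not a description of any specific system, and a production version would persist to append-only storage rather than an in-memory list.

```python
import hashlib
import json
import time

_audit_log: list[dict] = []

def log_audit_event(rule_id, rule_version, surface, context, action) -> dict:
    """Append one tamper-evident record per policy evaluation.

    Each record embeds the hash of the previous record, so rewriting
    history invalidates the rest of the chain."""
    prev_hash = _audit_log[-1]["hash"] if _audit_log else "genesis"
    record = {
        "timestamp": time.time(),
        "rule_id": rule_id,
        "rule_version": rule_version,  # which version of the policy was enforced
        "surface": surface,
        "context": context,
        "action": action,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True, default=str).encode()
    ).hexdigest()
    _audit_log.append(record)
    return record
```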
The Four Questions Every Health System Should Answer
Based on what HIMSS26 is surfacing, here are the questions that health IT leaders should be able to answer by the end of 2026.
What AI systems are operating in our environment, and what data do they access? Shadow AI is the healthcare equivalent of shadow IT, except the stakes involve PHI instead of marketing spreadsheets. If you don't have an inventory of every AI tool, agent, and integration running in your environment, you can't govern what you can't see.
Are our HIPAA policies enforced automatically across every surface where AI generates or processes content? Not "do we have policies" but "are they enforced." An AI assistant in the EHR, a chatbot on the patient portal, an AI agent handling referral coordination, a clinical documentation tool generating summaries. Every one of these surfaces needs policy enforcement. If your governance only covers one of them, the others are ungoverned.
Can we demonstrate to an auditor what policies were active, what violations were detected, and what actions were taken during any given time period? The HIPAA Privacy Rule updates taking effect this year are the most significant in over a decade. The minimum necessary standard is getting teeth. If your AI systems are processing PHI without documented governance controls, the question isn't whether you'll face scrutiny. It's when.
Do our AI governance controls extend to AI agents that take multi-step actions? This is the emerging frontier. AI agents that read patient records, draft communications, and send referrals are operating across multiple systems in a single workflow. Individual actions might be compliant. The sequence might violate minimum necessary standards, disclosure requirements, or consent policies. Session-aware governance that tracks what data the agent accessed across its entire workflow is becoming a requirement, not a nice-to-have.
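To make "session-aware" concrete, here's a minimal sketch with hypothetical names and the same caveats as the earlier examples: the session accumulates what the agent has touched across its workflow, and policy checks run against the accumulated state rather than the single action in front of you.

```python
class AgentSession:
    """Track data access across an agent's multi-step workflow so policies
    can be evaluated against the whole sequence, not individual actions."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.accessed: set[str] = set()  # data categories touched so far

    def record_access(self, category: str) -> None:
        self.accessed.add(category)      # e.g. "phi", "medication_history"

    def check_action(self, action: str, destination: str) -> bool:
        # Sequence-level rule: once PHI has been read anywhere in the
        # session, nothing may be sent externally without review.
        if action == "send" and destination == "external" and "phi" in self.accessed:
            return False
        return True

session = AgentSession("referral-workflow-42")
session.record_access("phi")             # step 1: agent reads the patient record
session.record_access("referral_form")   # step 2: agent drafts the referral
ok = session.check_action("send", "external")  # step 3: agent tries to send it
assert not ok  # each step looked compliant; the sequence is not
```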
The HIMSS26 Signal
The signal from HIMSS26 is clear: AI governance in healthcare has moved from "emerging best practice" to "operational requirement." The organizations that built governance infrastructure early are more resilient, more confident in their AI deployments, and better positioned for regulatory scrutiny. The organizations that haven't are running production AI systems on hope and manual review.
The convergence with cybersecurity makes this more urgent, not less. AI governance isn't a compliance exercise you handle separately from your security posture. It's an extension of your security posture into a new surface: the content and decisions your AI systems generate.
Health systems that invest in automated policy enforcement, continuous monitoring across all AI surfaces, and audit-ready evidence generation aren't just reducing compliance risk. They're building the operational foundation that makes AI deployment sustainable. Without that foundation, every new AI tool adopted is another ungoverned surface accumulating risk.
HIMSS26 is telling the industry what practitioners already know: AI without governance isn't innovation. It's liability.
We're building Aguardic to help healthcare organizations enforce their policies across AI outputs, documents, code, and agents, with real-time evaluation and audit-ready evidence. If your health system is building AI governance infrastructure, we'd like to help.