Why AI Adoption Stalls in the Middle, and What Actually Moves It

Most large healthcare organizations have solved the C-suite buy-in problem. The harder problem, the one that actually determines whether an AI program scales or plateaus, is the layer directly below.


Many large healthcare organizations no longer have a CEO enthusiasm problem around AI. They have an adoption problem in the layer below the C-suite.


Budgets have been allocated, pilots have been run, and most large payer and provider organizations can point to at least one AI initiative that produced a measurable outcome. What has not been solved is the harder question: how do you get vice presidents and senior directors to change staffing, prioritization, workflow design, and day-to-day operating habits?

That gap, which I see consistently when advising organizations on AI operating models, sits in the layer directly below. The vice presidents and senior directors who control day-to-day budget decisions, staffing, and prioritization are where AI programs quietly lose momentum. They were not part of the original mandate. They are not personally invested in the tooling. They have competing operational pressures. And most AI enablement programs are not designed with them in mind.

The organizations I have seen scale AI beyond pilots tend to do three things well. They tie adoption to operating accountability. They convert senior leaders from passive supporters into active models. And they clear governance before people build habits around a capability. CEO enthusiasm helps, but it is not the mechanism that drives adoption at scale.

AI adoption does not scale because the CEO is excited. It scales when budgets, leader behavior, and governance timing reinforce the same workflows.

- Structural accountability: AI sits inside the operating plan, not beside it.
- Leadership conversion: Senior leaders model real use and make wins visible.
- Governance first: Capabilities are cleared before habits form.

The Mandate Has to Be Structural, Not Aspirational

Some of the most effective AI adoption programs I have seen did not start with a change-management campaign. They started with a budget decision. Business unit leaders were given explicit productivity targets and expected to show where AI would help close the gap. Progress was reviewed monthly.

That structure sounds blunt. In practice, it is clarifying. Once AI efficiency is part of the operating plan, the question stops being whether to prioritize it and becomes where it can credibly change cost, throughput, or service levels. A fundamentally more productive conversation.

The alternative is the pattern many organizations are still living with now: AI sits beside the budget rather than inside it. That usually produces a few motivated teams with real gains and a much larger middle of the organization where adoption stays neutral.

At this stage, this is less a tooling problem than a decision-rights problem. Until AI efficiency is tied to a number a business leader is accountable for, it will keep losing to other operational priorities.

The Real Adoption Work Happens in the VP Layer

Once the CEO-level mandate exists, the hardest population is usually not the top team. It is the vice presidents and senior directors making the actual day-to-day trade-offs about staffing, planning, workflow ownership, and process change.

What moves that layer most consistently is not another dashboard or another business case. It is repeated exposure to concrete wins, combined with leadership behavior that makes AI feel normal rather than optional.

When measurable examples show up consistently in town halls, operating reviews, leadership notes, and team meetings, the dynamic changes. Not because any single example is overwhelming on its own, but because AI becomes a recurring feature of how leaders talk about performance. Over time, the leaders who are not referencing AI outcomes begin to stand out.

That is only part of the mechanism. The other part is Executive AI Maturity. The strongest programs I have seen gave senior leaders recurring access to a dedicated AI practitioner who worked through that executive’s actual workflows, not a generic demo. The goal was not abstract awareness. It was personal fluency: a firsthand sense of what good looks like.

When that shift happens, the adoption dynamic inverts. Instead of the AI program pushing capability into the organization, leaders begin pulling it. Executives ask better questions. They reference their own use cases. They notice where AI is absent from workflows where it should plausibly be present. This investment is not trivial and requires sustained operational commitment, but the downstream adoption effect can be larger than what training or tooling access alone produces.

Curated user communities reinforce this. A well-run channel with practical, specific, peer-visible examples will usually move behavior faster than broad encouragement or training alone. Abstract support does not change habits. Specific examples do.

When senior leaders have personal fluency with AI tools, they begin to expect AI-assisted work from their teams. That expectation is a more reliable adoption driver than mandates or metrics.

Governance Is Where Adoption Programs Get Surprised

One of the most predictable failure modes in healthcare AI adoption is governance arriving too late. A capability gets enabled, users build workflows around it, and only then does a broader governance review restrict or roll it back.

Healthcare organizations have AI review boards, risk policies, and data governance requirements that are, appropriately, more stringent than in many other industries. The rollback usually happens because the capability was enabled for an initial population before the broader review finished, and that review reveals it was not cleared for the wider deployment context.

The operational and cultural damage from a rollback is disproportionate to the technical change. Users who built habits around a capability lose confidence in the program. The communication burden falls back on the same leaders who spent months building momentum. And the explanation — which usually involves AI review board policies and data usage rights that are genuinely necessary — is difficult to communicate in a way that feels coherent rather than arbitrary.

The organizations that handled this best treated governance as an intake requirement, not a late-stage checkpoint. That added time to the initial rollout and in some cases constrained which capabilities could be offered, but it materially reduced the rollback problem later.

The practical implication for healthcare leaders: before your adoption program builds organizational habits around a capability, confirm that capability's status with your governance structure. What your AI vendor enables by default is not necessarily what your risk and compliance function will approve for production at scale. In healthcare specifically, that gap tends to be larger than expected.

Where ROI Gets Easier to Defend

The clearest AI returns in payer and provider organizations usually come from workflow-integrated use cases where the baseline is measurable and the gain can be captured inside an operating budget.

Contact center call summarization is a good example. When after-call work time drops and the reduction is visible inside the CRM, the gain is measurable, attributable, and visible to the budget owner. In those cases, ROI stops being a broad productivity claim and starts looking more like an operating result.
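
To show what "measurable and attributable" means in budget terms, here is a minimal arithmetic sketch. Every number in it is hypothetical and purely illustrative; the real inputs would come from your own CRM baseline and your finance team, not from any benchmark.

```python
# Hypothetical workflow-level ROI sketch for contact center call summarization.
# All inputs are illustrative placeholders; replace them with your own CRM baseline data.

calls_per_month = 120_000        # monthly call volume for the business unit (hypothetical)
baseline_acw_min = 4.0           # average after-call work, minutes per call, before AI summarization
assisted_acw_min = 2.5           # average after-call work, minutes per call, with AI summarization
loaded_cost_per_hour = 38.00     # fully loaded agent cost in dollars per hour (hypothetical)

# Savings are computed directly from the metric the budget owner already tracks.
minutes_saved = calls_per_month * (baseline_acw_min - assisted_acw_min)
hours_saved = minutes_saved / 60
monthly_value = hours_saved * loaded_cost_per_hour

print(f"Hours of after-call work removed per month: {hours_saved:,.0f}")
print(f"Capturable monthly value: ${monthly_value:,.0f}")
```

The point is not the specific numbers. It is that every input is owned and already measured by the budget holder, which is what makes the result defensible as an operating outcome rather than a productivity claim.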

The same pattern shows up in areas like specialty pharmacy and clinical documentation support. When AI shortens turnaround times on denial letters, letters of medical necessity, or chart synthesis, the value is not just less effort per task. It can change service levels, backlog dynamics, and capacity.

That is different from broad personal productivity tools. General-purpose chat access and enterprise copilots can create real value, but the value is usually more diffuse and harder to defend at the workflow level. That does not make them less useful. It means they need a different accountability model: aggregate attribution at the business unit level, worked out with the CFO, rather than workflow-level measurement.

Both types of investment have a role. But conflating them — expecting workflow-level accountability from a personal productivity deployment, or treating a workflow-integrated tool as if it just needs broader access to produce returns — is where a lot of AI investment rationales go wrong.

Not all AI value should be measured the same way.

|  | Personal productivity tools | Workflow-integrated AI |
| --- | --- | --- |
| Examples | Copilot, ChatGPT Enterprise, general chat | Call summarization, LMN drafting, claims or appeals support |
| Value pattern | Real but diffuse | Direct and capturable |
| Best accountability model | Business-unit or enterprise-level attribution | Workflow-level measurement and budget capture |
| Main adoption driver | Executive modeling + peer examples | Workflow fit + process redesign |
| Primary mistake | Overstating ROI | Underinvesting in deployment |

The Organizational Pattern That Works

The organizations getting measurable value from AI in healthcare are not the ones with the loudest CEO enthusiasm or the highest license counts. They are the ones that built an adoption operating model.

In practice, that operating model has three parts: structural accountability, leadership behavior that turns AI into a normal management expectation, and governance that clears the path before habits form.

That is the distinction I would emphasize most strongly. AI adoption does not scale because the C-suite is excited. It scales when budgets, leader behavior, and governance timing reinforce the same set of workflows.

That framing shift is what separates programs that produce durable outcomes from programs that produce compelling demos.