SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Healthcare AI pilots outpace sector's regulatory readiness

Mon, 9th Mar 2026

Kyndryl has published research highlighting a growing mismatch in healthcare between interest in artificial intelligence and the operational readiness needed to deploy it at scale under regulatory and compliance constraints.

Its Healthcare Readiness Report found that 76% of healthcare organisations have more AI pilots than they can scale. Only 30% feel prepared to keep pace with evolving regulation, while 55% said they were concerned about keeping up with policy and regulatory change.

The findings reflect a common pattern across health systems and providers: AI projects often begin in narrow settings but struggle to move into routine clinical and administrative use. As projects expand beyond small trials, governance, audit requirements, privacy rules, cyber risk, and procurement controls often become bigger constraints.

Pilots and pressure

Healthcare organisations face operational strain alongside rising expectations around service quality. Many also operate across multiple jurisdictions and regulatory regimes. The report links this environment to growing interest in AI for clinical and operational work, while noting the organisational barriers that can limit expansion.

Regulatory and compliance concerns were cited by 31% of respondents as a key barrier to scaling AI beyond pilots, reinforcing the report's broader finding that readiness for regulatory change remains low across the sector.

Kyndryl framed the results as a call for healthcare leaders to modernise governance and compliance as they increase AI use, warning that without stronger controls many organisations will remain stuck in experimentation.

Policy as code

Alongside the report, Kyndryl highlighted a product update focused on compliance automation. It recently launched what it calls a "policy as code" capability, which converts regulatory requirements and operational controls into machine-readable policies.

These policies are intended to govern how agentic AI workflows operate, providing consistent enforcement and auditability when AI is embedded into clinical and operational environments.

Christine Landry, Global Vice President for Healthcare at Kyndryl Consult, said compliance needs to be designed into AI programmes from the start.

"Healthcare organisations are operating in one of the most complex regulatory environments in the world, and as AI becomes embedded into clinical and operational workflows, compliance can't be an afterthought. Our policy as code capability allows healthcare providers to translate regulatory, security, and organisational policies directly into their digital and AI systems. Enabling these guardrails not only helps ensure consistent compliance but also strengthens resilience by reducing operational risk and helping systems withstand evolving cyber and privacy threats. This gives clinicians and administrators greater confidence that compliance, safety, governance, and protection are built in from the start, not bolted on later."

Early deployments

Kyndryl pointed to healthcare work in Spain as an example of how it is applying this approach. Servei de Salut de les Illes Balears (the Balearic Islands Health Service) is collaborating with Kyndryl on an AI-enabled platform to support advanced clinical-genomic analysis. The project is positioned as improving diagnostic timelines while maintaining security and data protection requirements.

It also referenced a collaboration with the University of Liverpool's Civic Health Innovation Labs. Under the arrangement, Kyndryl is applying its Agentic AI Framework alongside academic research expertise on next-generation healthcare technologies.

Kyndryl said the initiative aims to move AI beyond pilots through practical blueprints linked to patient engagement and scalable healthcare outcomes. The work reflects a wider trend of vendors partnering with universities and clinical organisations to test AI tools in operational settings against real-world constraints.

Governance focus

The report's headline figures underline a central tension for healthcare IT and digital leaders. Many organisations have a backlog of pilots and proofs of concept but lack repeatable governance models for broad deployment. That typically includes consistent risk assessment, model monitoring, data quality controls, access management, and clear accountability for how systems are used and updated.

As AI expands in clinical and operational contexts, organisations also face questions about traceability and audit trails. They need to show how decisions were supported, which data was used, and what safeguards were applied, especially in regulated environments and where patient safety and privacy obligations apply.
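In practice, an audit trail of this kind reduces to recording, per AI-supported decision, the three elements named above. A minimal sketch, assuming a simple structured-record approach (all field names here are hypothetical, not drawn from any specific product):

```python
# Hypothetical audit-trail record for one AI-supported decision, capturing:
# how the decision was supported, which data was used, and what safeguards
# were applied. Field names are illustrative assumptions.
import json
from datetime import datetime, timezone

def audit_record(decision, model, data_sources, safeguards):
    """Build a traceable, serialisable record for one AI-supported decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "supported_by": model,          # how the decision was supported
        "data_sources": data_sources,   # which data was used
        "safeguards": safeguards,       # which controls were applied
    }

record = audit_record(
    decision="flag scan for radiologist review",
    model="triage-model-v3",
    data_sources=["imaging-archive", "ehr-demographics"],
    safeguards=["human-in-the-loop", "de-identification", "access-logged"],
)
print(json.dumps(record, indent=2))
```

Storing such records append-only is what lets an organisation answer a regulator's "show your working" question after the fact rather than reconstructing it.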

Kyndryl said it is working with healthcare organisations globally on modernisation programmes and responsible AI adoption, as the sector looks to scale AI while meeting regulatory requirements and strengthening governance across clinical and administrative workflows.