SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Exclusive: Should AI in healthcare be paused until its decisions can be explained?

Tue, 23rd Sep 2025

As artificial intelligence rapidly integrates into hospitals, clinics and research labs across the world, one question is being asked in almost every country: Should AI in healthcare be paused until its decisions can be fully explained?

It's a debate gripping not just scientists, but governments, medical professionals and the public.

For Ricard Gavaldà, a leading voice in AI and healthcare, the answer is no; instead, we must proceed with caution.

"We are still far away from systems that can be 100% reliable in decision-making," he said in an exclusive interview. "The question is not who does better, you or the algorithm. It's whether you with the algorithm do better than you without the algorithm."

Gavaldà, a professor and researcher in machine learning with more than 20 years' experience, is best known for his work integrating AI into health systems.

As AI tools become more powerful, a wave of debate is building around explainability: if we don't fully understand why an AI system makes a particular decision, should we be using it at all?

Gavaldà argues that this fear, while understandable, should not halt progress.

"I can't think of an area of healthcare where there is no potential with AI," he said. "The level of caution, however, should be different. Obviously, areas where the impact of a decision can make more harm should be introduced gradually."

Instead of pausing, he suggests a more pragmatic approach: start where the stakes are lower.

"Making a mistake in the prediction of how many beds will be needed tomorrow is far less harmful than making a mistake when diagnosing someone with a lethal disease," he explained. "That's why at Amalfi we bet on management problems as the easiest way for introducing AI in healthcare."

These "management problems" include tasks that are typically invisible to patients but critical to hospital operations, such as resource planning, protocol compliance, and administrative optimisation. "There is huge room for improvement in processes, resource allocation, and protocol adjustment," he said. "This frees up resources that allow advances in all of the more high-profile areas."

But the focus on behind-the-scenes applications should not be mistaken for a lack of ambition. Gavaldà points to several areas where AI is already showing strong results, particularly in image-based diagnostics. "It is already established that algorithms outperform human experts in some cases," he noted. "Even here, the professional should have the final word always."

He also sees promise in "precision medicine", where vast quantities of patient data are analysed to recommend tailored treatments. However, he cautions: "There's still a lot of research to do and infrastructure to build before this can be scaled sustainably."

Asked about the biggest barriers to AI's widespread adoption in healthcare, Gavaldà pointed to three: data silos, legal uncertainty, and slow-moving legislation.

"Data is often fragmented across multiple hospital databases from different vendors that don't collaborate," he said. "And even if the data can be accessed, professionals are naturally reluctant to use AI tools if they don't know the legal implications. What if I follow the algorithm and it's wrong? What if I ignore it and it's right?"

This legal uncertainty is unlikely to be resolved until the courts have ruled on real-world cases. "It's a chicken-and-egg problem," he added.

On ethics, Gavaldà is outspoken - particularly when it comes to data privacy. He believes patients should be more open to sharing anonymised health data for research.

"Privacy, like most rights, has its limits when it hits the rights of others," he said. "Most of us accept that some of our earnings are taken as taxes for the greater good. Similarly, we should accept to 'pay' some part of our privacy, under strict safeguards, for research."

He's also concerned about fairness and access. "These tools should not just benefit those who can pay for them. We need to ensure they're extended to the whole population."

Despite concerns that AI could one day replace human judgement, Gavaldà is not alarmed. "I don't see the possibility of that happening anytime soon, perhaps gradually in 30 to 100 years," he said. "But there will be a time when AI will be so much better than humans that the only ethical thing will be to let them decide."

To those who find that vision unsettling, he offers this comparison: "We've all accepted that road traffic lights and train junctions are managed by machines because they make fewer mistakes than humans. Something similar will happen with trains soon, no human drivers at all."

For now, though, he urges balance, transparency, and rigorous oversight. "We need good mechanisms to ensure data is used ethically, for well-defined purposes, and by the right agents."

And while he's optimistic, he insists that AI must always serve people, not replace them.

"The question isn't man versus machine," he said. "It's how we can do better together."
