ISACA launches AI risk certification amid governance gap

Fri, 24th Apr 2026

ISACA has launched the Advanced in AI Risk certification for professionals who manage AI risk, as its European research points to weak governance around AI use.

The new credential, known as AAIR, is aimed at experienced practitioners in audit, risk, security, privacy and compliance who are being asked to oversee AI systems within their organisations. It is intended for people with established risk backgrounds rather than newcomers to the field.

Its release follows survey findings suggesting many organisations have adopted AI faster than they have built the controls to supervise it. Among digital trust professionals surveyed across Europe, 59% said they did not know how quickly their organisation could halt an AI system during a security incident. Only 21% said their organisation could do so within half an hour.

The research also highlights gaps in accountability. One in five respondents said they did not know who would be responsible if an AI system caused harm. Only 42% were confident their organisation could investigate and explain a serious AI incident to leadership or regulators, and just 11% were completely confident.

Governance gap

Other findings suggest the issue extends beyond technical controls. A third of organisations do not require employees to disclose when AI has been used, and only 38% identify the board or an executive as the ultimate owner of AI risk.

Taken together, the results suggest AI is being introduced into core business processes without clear reporting lines, formal accountability or a defined incident response structure. They also come as organisations in Europe face closer scrutiny over how they govern AI systems under the EU AI Act.

ISACA has positioned AAIR as a response to that pressure. The certification focuses on three areas: AI risk governance and framework integration, AI lifecycle risk management, and AI risk programme management.

The qualification is designed to test whether candidates can evaluate AI-related vulnerabilities, assess business impacts and govern AI across its lifecycle. That includes assessing risks before and after deployment and explaining an organisation's risk posture to a board or regulator.

Broad entry

Eligibility for AAIR requires candidates to hold one of 25 prerequisite certifications. These include CISA, CISM, CRISC, CGEIT, CDPSE, CGRC and CISSP, reflecting how AI risk oversight is being absorbed into a broader range of professional roles rather than remaining confined to traditional IT risk teams.

The broad entry criteria suggest ISACA sees AI governance as a cross-functional discipline spanning assurance, cyber security, privacy, compliance and risk management. That approach mirrors how organisations are distributing responsibility for AI decisions across multiple control functions.

Chris Dimitriadis, Chief Global Strategy Officer at ISACA, outlined the organisation's view in remarks released alongside the launch.

"The enthusiasm to adopt AI has outpaced the skills to govern it. Many organisations cannot tell you how quickly they could stop an AI system, who is accountable if it goes wrong, or how they would explain a failure to a regulator. That is not a technology problem - it is a governance and skills problem," said Chris Dimitriadis, Chief Global Strategy Officer at ISACA.

He also argued that established risk disciplines can be applied to AI oversight.

"The tools to manage AI risk already exist. Risk management, prevention controls, detection, incident response and recovery are all foundations of good cybersecurity practice, and they need to be applied to AI with the same rigour. AAIR exists to build the profession that can do that work. Closing the governance gap will take more than a handful of experts - we all need to be involved," Dimitriadis said.

The certification launch is part of a wider push by professional bodies and training providers to define AI governance as a specialist area of practice. As companies move AI tools into business operations, they are under pressure to show they can identify failures, assign responsibility and respond to incidents in ways that stand up to board and regulatory scrutiny.

ISACA's survey was based on responses from 681 digital trust professionals in Europe working in IT audit, governance, cybersecurity, privacy and emerging technology roles. A recurring theme was not simply uncertainty over AI systems themselves, but uncertainty over who owns the risk when those systems fail.