SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

AI security gaps expose ‘shadow AI’ risk in Australia

Wed, 28th Jan 2026

Okta has published results from an Australia-focused poll that points to strong enthusiasm for artificial intelligence among security and technology leaders, alongside gaps in governance, monitoring and identity controls for AI agents and other non-human users.

The Okta AI Security Poll drew responses from hundreds of technology and security executives at the company's Oktane on the Road events in Sydney and Melbourne. The results suggest many organisations have not settled internal accountability for AI security risks, even as AI use spreads across workplaces.

One of the clearest findings concerned ownership of AI risk. Some 41% of respondents said no single person or function currently owns AI security risk in their organisation. A further 34% said the Chief Information Security Officer or security function holds accountability.

The data also indicated low confidence in detecting unintended or unauthorised AI behaviour. Only 18% of respondents said they were confident they could detect an AI agent acting outside its intended scope. Okta said 40% were not confident, and 22% said they do not currently monitor AI agent activity at all.

Security blind spots

Respondents identified unapproved AI usage as their leading concern. Shadow AI, defined in the poll as unapproved or unmanaged tools, ranked as the top AI security blind spot for 35% of respondents. Data leakage through integrations followed at 33%.

The poll also assessed readiness of identity and access management for non-human identities such as AI agents, bots, and service accounts. Only 10% said their identity systems were fully equipped for that task. A further 52% said their systems were partially equipped, which points to a maturity gap as automation increases the number of machine users inside corporate environments.

Board-level attention appears to be improving, according to the survey, though engagement remains uneven. Okta said 70% of respondents reported board awareness of AI-related risks. Only 28% said boards are fully engaged in oversight. Another 21% reported limited awareness, and 8% said AI has not yet been discussed at board level.

Balancing approaches

The poll results suggest organisations have adopted different postures on AI deployment. Okta said 58% of respondents described their approach as balanced, innovating with governance in mind. Another 22% said they prioritise speed and innovation. A further 15% said they are cautious, while 5% said they have paused or restricted AI use.

Okta positioned the findings as evidence that governance frameworks need to keep pace with adoption, particularly as AI agents take on more tasks that were previously performed by humans.

"Australian organisations are embracing AI with real momentum, and that's a positive sign," said Mike Reddie, Vice President and Country Manager, Okta ANZ. "We are seeing a shift from early experimentation to responsible, strategic adoption. The next step is ensuring governance and security evolve at the same pace."

The company also said identity should sit at the centre of AI security and governance. It argued that organisations need to adapt traditional access controls for AI agents and automation. The poll results indicate most respondents already see identity as important in managing AI trust, but many have not yet aligned controls to non-human identities.

Okta cited its recent AI at Work 2025 and Customer Identity Trends reports, which it said found that 91% of organisations globally are already using or experimenting with AI agents. It also said fewer than 10% have a strategy to secure them, which implies a gap between experimentation and operational controls.

For security teams, the issue extends beyond policy to practical visibility and auditing. The poll's findings on detection and monitoring suggest many organisations lack mechanisms to confirm whether AI agents remain within their defined scope. The survey also points to a need for clearer accountability models, given the proportion of respondents reporting no single owner for AI security risk.
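The scope-monitoring gap described above can be illustrated with a minimal sketch: comparing an audit log of agent actions against each agent's granted permissions and flagging anything outside that set. The agent names, scopes and log format below are illustrative assumptions, not part of any Okta product or the poll itself.

```python
# Illustrative only: detect AI agent actions that fall outside a defined scope.

AGENT_SCOPES = {
    "invoice-bot": {"read:invoices", "write:invoices"},
    "support-agent": {"read:tickets"},
}

audit_log = [
    {"agent": "invoice-bot", "action": "read:invoices"},
    {"agent": "support-agent", "action": "read:tickets"},
    {"agent": "support-agent", "action": "read:invoices"},  # out of scope
]

def out_of_scope_events(log, scopes):
    """Return log entries where an agent used a permission it was not granted."""
    return [e for e in log if e["action"] not in scopes.get(e["agent"], set())]

for event in out_of_scope_events(audit_log, AGENT_SCOPES):
    print(f"ALERT: {event['agent']} performed {event['action']} outside its scope")
```

In practice the log would come from an identity provider or gateway rather than an in-memory list, but the principle is the same: detection requires both a recorded trail of agent activity and an explicit definition of each agent's intended scope.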

"Securing AI isn't about slowing progress; it's about starting with the right foundation. When identity is strong, trust follows, and that's what enables innovation to scale safely and sustainably," said Reddie.

Okta said organisations should apply similar discipline to securing AI agents as they do to human users, with verified identity, defined permissions and auditability forming core controls as AI deployment broadens across business functions.
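The discipline described above, a verified identity, defined permissions and auditability for each agent, can be sketched as a simple authorisation gate. Everything here (the `AgentIdentity` record, the scope names, the audit structure) is a hypothetical illustration under those assumptions, not a description of Okta's implementation.

```python
# Illustrative only: treat an AI agent like a human user, with a verified
# identity, explicitly granted permissions, and an audit trail of decisions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    verified: bool                                  # identity verified at registration
    permissions: set = field(default_factory=set)   # explicitly granted scopes

audit_trail: list = []

def authorise(agent: AgentIdentity, action: str) -> bool:
    """Allow an action only for a verified agent that holds the permission,
    and record every decision for later review."""
    allowed = agent.verified and action in agent.permissions
    audit_trail.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent.agent_id,
        "action": action,
        "allowed": allowed,
    })
    return allowed

bot = AgentIdentity("report-bot", verified=True, permissions={"read:reports"})
print(authorise(bot, "read:reports"))    # permitted: verified and in scope
print(authorise(bot, "delete:reports"))  # denied: permission never granted
```

The design choice mirrors the article's point: denial is the default, permissions are enumerated rather than inferred, and every decision, allowed or not, lands in the audit trail.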