
Okta warns of AI security gaps across Asia Pacific

Fri, 30th Jan 2026

Okta has reported a widening gap across Asia Pacific and Japan between rapid adoption of artificial intelligence and the governance and identity controls that organisations use to manage it.

Polling conducted in Australia, Singapore and Japan points to uncertainty over accountability for AI-related security risks and low confidence in monitoring how AI agents behave once deployed. Okta's data also indicates that many identity and access management programmes still focus on human users, even as organisations increase the use of non-human identities such as AI agents, bots and service accounts.

The findings reflect a shift in security priorities. More systems now run autonomously and interact directly with data and applications. That changes how organisations assign permissions, track actions, and investigate incidents.

Accountability gaps

In Australia, 41% of respondents said there is no single person or team accountable for managing AI security. Only 10% said their identity systems were fully equipped to secure non-human identities such as AI agents, bots and service accounts. A further 52% said their identity systems were partially equipped.

Respondents in Singapore and Japan also reported unclear ownership, with the poll recording uncertainty about ownership among 25% of respondents in Singapore and 29% in Japan. The results suggest that responsibility often sits across multiple functions or remains undefined.

Okta linked fragmented ownership to the rise of "shadow AI", which it described as the use of unapproved or unsupervised AI tools inside organisations. Shadow AI ranked as the top security concern in Australia and Singapore, accounting for 35% and 33% of responses respectively.

In Japan, respondents placed greater emphasis on data leakage, which attracted 36% of responses and ranked as the primary concern. The poll indicated that unapproved or unsanctioned AI agents also featured prominently among Japanese respondents.

Monitoring shortfalls

The poll results also highlight limited confidence in detecting when AI agents operate outside their intended scope. Fewer than one-third of respondents across the region said they felt confident in their ability to detect that behaviour.

Confidence levels were particularly low in Australia at 18% and in Japan at 8%. The results point to gaps in monitoring tools and processes for autonomous systems, especially in environments that now include a growing population of non-human identities.
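
As an illustration of what such monitoring can involve, the sketch below checks each action an agent takes against a registered scope and flags deviations, including actions by agents that were never registered at all. The registry, event fields and print-based alerting are hypothetical stand-ins for illustration, not any specific vendor's API.

```python
# Hypothetical sketch: flag agent actions that fall outside a registered scope.
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    agent_id: str
    allowed_resources: set[str] = field(default_factory=set)
    allowed_actions: set[str] = field(default_factory=set)

# Inventory of known, sanctioned agents and what each is allowed to touch.
REGISTRY = {
    "invoice-bot": AgentProfile(
        agent_id="invoice-bot",
        allowed_resources={"erp/invoices"},
        allowed_actions={"read", "create"},
    )
}

def check_event(agent_id: str, resource: str, action: str) -> bool:
    """Return True if the event is in scope; log a deviation otherwise."""
    profile = REGISTRY.get(agent_id)
    if profile is None:
        # An unregistered agent acting on corporate systems is a shadow-AI signal.
        print(f"ALERT unknown agent: {agent_id}")
        return False
    if resource not in profile.allowed_resources or action not in profile.allowed_actions:
        print(f"ALERT out-of-scope: {agent_id} attempted {action} on {resource}")
        return False
    return True

check_event("invoice-bot", "hr/payroll", "read")    # flagged: resource not in scope
check_event("chat-helper", "erp/invoices", "read")  # flagged: unregistered agent
```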

Organisations increasingly deploy AI systems that access corporate data stores, initiate workflows, and participate in decision-making processes. That puts pressure on security teams to maintain oversight over what these systems do and what they can reach.

Identity readiness

Across Australia, Singapore and Japan, fewer than 10% of respondents said their identity systems are fully equipped to manage and secure non-human identities such as AI agents, bots and service accounts. Most respondents described their identity systems as only partially prepared.

This creates structural issues for access control and auditability. AI systems often require credentials and permissions to interact with applications and data. Many identity systems were designed around employee and contractor access. The poll suggested that AI agents may inherit excessive access and fall outside established governance processes. It also pointed to limited audit trails in some environments.
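
A minimal sketch of the pattern described here, under assumed names: credentials for a non-human identity are issued with explicit scopes and a short expiry, and every issuance and access check is appended to an audit record. A production system would delegate this to an identity provider's token service rather than rolling its own.

```python
# Hypothetical sketch: short-lived, least-privilege credentials for a
# non-human identity, with an append-only audit record.
import secrets
import time

AUDIT_LOG: list[dict] = []

def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 900) -> dict:
    """Issue a scoped credential; the short expiry limits inherited access."""
    token = {
        "sub": agent_id,
        "scopes": scopes,                   # grant only what this agent needs
        "exp": time.time() + ttl_seconds,
        "value": secrets.token_urlsafe(32),
    }
    AUDIT_LOG.append({"event": "token_issued", "sub": agent_id,
                      "scopes": scopes, "at": time.time()})
    return token

def authorize(token: dict, required_scope: str) -> bool:
    """Allow the action only if the token is live and carries the scope."""
    ok = time.time() < token["exp"] and required_scope in token["scopes"]
    AUDIT_LOG.append({"event": "access_check", "sub": token["sub"],
                      "scope": required_scope, "allowed": ok, "at": time.time()})
    return ok

tok = issue_agent_token("report-agent", scopes=["warehouse:read"])
authorize(tok, "warehouse:read")    # True, recorded in the audit log
authorize(tok, "warehouse:write")   # False, recorded — no inherited excess
```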

The survey results suggest that the issue now reaches beyond technical teams. Boards in all three markets showed awareness of AI-related security risks, although levels of engagement varied.

Board engagement

In Australia, respondents reported that 70% of boards were aware of AI-related security risks. Only 28% were considered fully engaged. In Singapore, board awareness stood at 50%, with 31% fully engaged.

Japan recorded the highest reported levels of board awareness at 78% and engagement at 43%. Respondents attributed that position to regulatory expectations and a strong organisational focus on data integrity.

The gap between awareness and engagement indicates uneven governance maturity across the region. The poll results suggest that organisations recognise AI-related risks, but do not yet apply consistent leadership oversight and control frameworks.

The findings also underline a change in identity and access management priorities. Organisations increasingly need to manage access not only for people but for autonomous systems that act on their behalf or operate independently. That includes determining how credentials get issued, how permissions get reviewed, and how activity gets recorded.
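
One way to make the permission-review step concrete is a recurring job that compares granted scopes against observed activity and flags grants with no recent use for revocation. The grant inventory and usage records below are hypothetical stand-ins for an IAM inventory and an activity log.

```python
# Hypothetical sketch: flag agent permissions with no recorded use in the
# review window, so stale or over-broad grants can be revoked.
from datetime import datetime, timedelta

GRANTS = [
    {"sub": "report-agent", "scope": "warehouse:read"},
    {"sub": "report-agent", "scope": "warehouse:admin"},  # likely over-broad
]

LAST_USED = {  # last observed use per (identity, scope), from activity logs
    ("report-agent", "warehouse:read"): datetime(2025, 11, 20),
}

def review_grants(now: datetime, max_idle: timedelta = timedelta(days=90)) -> list[dict]:
    """Return grants with no use inside the idle window, for revocation review."""
    stale = []
    for grant in GRANTS:
        used = LAST_USED.get((grant["sub"], grant["scope"]))
        if used is None or now - used > max_idle:
            stale.append(grant)
    return stale

print(review_grants(datetime(2025, 12, 1)))
# [{'sub': 'report-agent', 'scope': 'warehouse:admin'}]
```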

The polling took place during Okta's Oktane on the Road event series in Sydney, Melbourne, Tokyo and Singapore. Okta conducted the live, interactive polls in October and November 2025.