AI rollout pressures Australian firms to ease identity security checks
Australian organisations are pressuring security teams to relax identity controls as they roll out artificial intelligence, according to new research from identity security firm Delinea.
Delinea's 2026 Identity Security Report found 90% of Australian respondents said their security teams face pressure to loosen identity controls as AI projects expand. At the same time, 40% said they were not confident governing AI-related identities and access.
Identity controls govern who or what can access systems and data. In many organisations, they also manage privileged access, which grants elevated permissions to change systems and security settings. Weak privileged-access governance has long been a target for attackers, and the report suggests AI adoption is expanding the pool of identities that require oversight.
Non-human accounts
The report focused on non-human identities, including service accounts, automated workloads, and accounts used by AI agents. These identities often run in the background and can hold broad permissions, particularly when configured for convenience or speed.
In Australia, 71% of organisations agreed that always-on access for non-human identities and AI agents increases risk. Even so, more than half still use standing privilege as the default model: persistent access that remains available rather than being granted only when required.
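The distinction between standing and just-in-time privilege can be sketched in a few lines of Python. The class, identity names, and TTL values here are illustrative only, not part of Delinea's report or any particular product:

```python
import time

# Illustrative sketch: standing privilege vs just-in-time (JIT) access.
# Identity names and the 15-minute TTL are hypothetical examples.

class Grant:
    def __init__(self, identity, privilege, ttl_seconds=None):
        self.identity = identity
        self.privilege = privilege
        # Standing privilege: no expiry. JIT: access lapses after ttl_seconds.
        self.expires_at = None if ttl_seconds is None else time.time() + ttl_seconds

    def is_active(self):
        return self.expires_at is None or time.time() < self.expires_at

# Standing privilege: the service account keeps access until someone revokes it.
standing = Grant("svc-reporting", "db:admin")

# Just-in-time: an AI agent gets the same privilege for 15 minutes only.
jit = Grant("agent-summariser", "db:admin", ttl_seconds=900)

print(standing.is_active())  # True, and stays True indefinitely
print(jit.is_active())       # True now, False once the TTL lapses
```

The point of the contrast: a standing grant never expires on its own, so every standing account is a credential an attacker can reuse at any time, while a JIT grant shrinks that window to the task at hand.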
Respondents also reported limited visibility into what automated identities do once they receive privileged access. Four in five said they cannot always understand why a non-human identity performed a privileged action. A similar proportion said they cannot explain why an AI or machine identity needs a specific privilege.
These gaps complicate accountability and investigations. When an automated account makes a privileged change, teams need to determine whether it reflects a legitimate process, a misconfiguration, or malicious use of credentials.
Discovery gaps
The research pointed to persistent gaps in identity discovery and monitoring in AI-related environments. Nearly 90% of Australian respondents reported at least one identity visibility gap. The largest gap involved human identities in the general workforce, cited by 39%.
Delinea said this differs from the global pattern in the survey, where the largest gaps in most countries tend to involve machine identities and other non-human accounts. The survey covered more than 2,000 IT decision-makers globally who were actively using or piloting AI.
AI-related systems stood out as the environment where discovery gaps were most likely to persist: 52% of respondents said gaps were most likely to remain there, compared with 25% for legacy or on-premises systems.
Confidence was also uneven. While 83% of Australian respondents said their identity security posture was ready for AI-driven automation, 40% said their identity governance around AI systems was deficient.
Validation practices
The research suggested Australian organisations validate non-human identities less frequently than global peers. One in 10 said they never validate their inventory of non-human identities against actual usage or behaviour, while 29% said they validate continuously.
Globally, 6% said they never validate non-human identity inventories against behavioural patterns, and 32% said they do so continuously.
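Validating an inventory against behaviour amounts to comparing what an organisation believes it has against what its logs show actually acting. A minimal sketch, with hypothetical identity names and no real log format:

```python
# Illustrative sketch: reconciling a declared non-human identity inventory
# against identities observed acting in activity logs. All names are invented.

inventory = {"svc-backup", "svc-reporting", "agent-summariser"}

# Identities actually seen performing actions in recent logs.
observed = {"svc-reporting", "agent-summariser", "svc-legacy-etl"}

stale = inventory - observed      # inventoried but never seen acting
unmanaged = observed - inventory  # acting but absent from the inventory

print(sorted(stale))      # ['svc-backup']
print(sorted(unmanaged))  # ['svc-legacy-etl']
```

Both discrepancies matter: stale entries inflate the attack surface with unused credentials, while unmanaged identities are exactly the visibility gap the survey describes. Continuous validation repeats this reconciliation on an ongoing basis rather than as a one-off audit.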
Australia also ranked below the UK and US in the ability to explain privileged actions taken by non-human identities. In Australia, 59% of respondents said they are always or often able to do so, compared with 68% in the UK and 69% in the US.
Drivers of risk
Australian respondents said AI expansion was a leading factor increasing non-human identity risk over the past 12 months. Some 41% cited AI expansion as a top factor, ahead of growth in cloud-native workloads (28%) and increased automation and CI/CD velocity (23%).
The report also found that 51% of Australian organisations lack viable alternatives to standing privileged access for non-human identities and AI agents. That reliance increases the chance automated identities retain persistent permissions that can be reused or abused.
Art Gilliland, CEO of Delinea, linked the findings to the pace of AI deployment and the growth in automated identities within organisations.
"The pressure to move fast on AI is real, but identity governance has not kept pace, which exposes enterprises to significant risk," Gilliland said. "As AI agents multiply across enterprise environments, these identities often have the least oversight. The organisations that will succeed in the AI era will be the ones that enforce real-time, contextual access across every human, machine, and agentic AI identity."
Delinea said the findings point to a widening gap in Australia between AI ambition and identity governance. As deployments increase the number of human and non-human identities interacting with sensitive systems, organisations face a growing burden to discover identities, manage privileges, and audit activity across these environments.