Delinea warns AI adoption is widening identity gaps
Delinea has published research on identity security risks linked to AI adoption, based on a global survey of more than 2,000 IT decision-makers who are using or piloting AI.
The findings suggest many organisations are loosening identity controls as they expand AI projects, despite lacking clear oversight of machine and AI-linked accounts. According to the survey, 90% of respondents said security teams were under pressure to relax identity controls to support AI initiatives.
Nearly 90% also reported at least one identity visibility gap. The largest involved machine and non-human identities, including accounts used by AI agents. Respondents said these gaps were more likely to persist in AI-related environments than in legacy or on-premises systems.
The report highlights a broader challenge for companies introducing more automation into daily operations. As AI agents and other software-driven processes gain access to systems and data, security teams must manage a growing number of identities that do not belong to human users.
Some 42% of organisations said AI expansion had been one of the main factors increasing non-human identity risk over the past 12 months. That was well ahead of increased automation and CI/CD velocity, and of growth in cloud-native workloads, each cited by 26%.
Visibility gaps
Traceability was another concern. Four in five respondents said they could not always determine why a non-human identity performed a privileged action, pointing to limited visibility into how automated identities use elevated permissions.
Standing privileged access also remains common. Some 59% of organisations said they lacked viable alternatives to standing privileged access for non-human identities and AI agents, leaving automated accounts with persistent permissions that could be misused or exploited.
These findings also reveal a gap between confidence and operational readiness. While 87% of respondents said their identity security posture was ready to support AI-driven automation, 46% also said their identity governance around AI systems was deficient.
Delinea described this as an AI security confidence paradox. Organisations were twice as likely to give a poor rating to their ability to discover and govern identities in AI environments as they were for legacy systems.
Confidence gap
The survey suggests confidence in detection is not always backed by verification. Although 82% of respondents said they were confident in discovering non-human identities with access to production systems, fewer than one in three said they validate non-human identity or AI agent activity in real time to confirm those discovery processes are working.
This gap matters because identity controls are central to access management. If organisations cannot identify every account operating across production environments, they may also struggle to explain actions, restrict privileges, or investigate incidents involving automated systems.
The research combines survey responses with input from the Delinea Labs research team on cyber incidents involving modern identity environments. It focuses on organisations already using AI tools or piloting them, placing the emphasis on risks emerging during active adoption rather than early experimentation.
In practice, non-human identities can include service accounts, automated scripts, machine credentials, and accounts used by AI agents to access internal resources. As businesses roll out more AI-driven workflows, the number of these identities can rise quickly across cloud platforms, internal applications, and infrastructure.
That growth is putting new pressure on governance processes that were often designed around human employees and administrators. Security teams may be able to apply approval workflows, session controls, and activity monitoring to staff accounts, but equivalent oversight can be harder to maintain when machine identities are created at scale and operate continuously.
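To illustrate the kind of control the report contrasts with standing privileged access, the following is a minimal, hypothetical Python sketch of just-in-time credentials for a machine identity: each grant is narrowly scoped, expires quickly, and is recorded with a stated reason, which is the sort of traceability respondents said was lacking. The class, the agent name, and the scopes are illustrative assumptions, not part of Delinea's research or any specific product.

```python
# Illustrative sketch only: an in-memory model of just-in-time, scoped
# credentials for a machine identity, instead of a standing privileged account.
# All names (ShortLivedTokenIssuer, "report-generator-agent", the scopes)
# are hypothetical.

import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class IssuedToken:
    identity: str          # which non-human identity requested access
    scopes: tuple          # the specific permissions granted
    reason: str            # recorded so a reviewer can later answer "why?"
    expires_at: datetime   # short TTL means no persistent permissions
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def is_valid(self) -> bool:
        return datetime.now(timezone.utc) < self.expires_at


class ShortLivedTokenIssuer:
    """Issues narrowly scoped, expiring tokens and keeps an audit trail."""

    def __init__(self, ttl_minutes: int = 15):
        self.ttl = timedelta(minutes=ttl_minutes)
        self.audit_log: list[dict] = []

    def issue(self, identity: str, scopes: tuple, reason: str) -> IssuedToken:
        token = IssuedToken(
            identity=identity,
            scopes=scopes,
            reason=reason,
            expires_at=datetime.now(timezone.utc) + self.ttl,
        )
        # Every grant is logged, which supports answering the traceability
        # question: why did this identity perform a privileged action?
        self.audit_log.append({
            "identity": identity,
            "scopes": scopes,
            "reason": reason,
            "expires_at": token.expires_at.isoformat(),
        })
        return token


if __name__ == "__main__":
    issuer = ShortLivedTokenIssuer(ttl_minutes=15)
    token = issuer.issue(
        identity="report-generator-agent",   # hypothetical AI agent account
        scopes=("read:sales_db",),           # narrow scope, not admin
        reason="nightly revenue summary job",
    )
    print(token.is_valid(), issuer.audit_log[-1]["reason"])
```

In practice, secrets managers and cloud IAM services provide production equivalents of this pattern; the sketch simply shows the basic idea of replacing persistent permissions with ephemeral, auditable grants.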
"The pressure to move fast on AI is real, but identity governance has not kept pace, which exposes enterprises to significant risk," said Art Gilliland, CEO of Delinea.
He added that the problem becomes more acute as AI agents spread across business systems.
"As AI agents multiply across enterprise environments, these identities often have the least oversight. The organizations that will succeed in the AI era will be the ones that enforce real-time, contextual access across every human, machine, and agentic AI identity," Gilliland said.
The report adds to a growing body of industry research on how AI adoption is changing cyber risk, particularly around access rights, accountability, and auditability. Rather than focusing only on model safety or data leakage, the findings point to a more basic operational problem: many organisations still do not have a complete view of who or what has access to key systems.
For companies moving AI tools into production, that creates a practical challenge for security and IT teams. They must identify and govern a broader range of privileged identities while resisting pressure to weaken controls in the name of speed.