
AI inside the gates: Why AI is your next big cybersecurity risk

Thu, 20th Nov 2025

The next security breach might not come from a hacker at the gates, but from an AI agent already inside your network. Only it wasn't planted by a cybercriminal – you deployed it. Maybe it's summarizing meetings, triaging service tickets, or helping developers ship code faster. Whatever it's doing, it's acting autonomously: retrieving resources, making decisions, and crossing trust boundaries with zero friction. That makes it both an asset and a liability. Because unlike human users, these AI agents don't clock in, don't request access through formal channels, and don't always come with a clear owner.

This is the new face of the insider threat. It's not malicious, but it's often overlooked. Most organizations now have hundreds, sometimes thousands, of non-human identities (NHIs) operating across cloud services, APIs, and internal systems. By some estimates, they can outnumber human users by 45:1 in DevOps environments alone, but the ratio across entire businesses is likely closer to 80:1. Many are created or deployed by developers or business users on the fly, without clear oversight or lifecycle controls. And as generative AI and large language models are increasingly embedded in day-to-day workflows, those NHIs are becoming more autonomous, more complex, and far harder to track. Because NHIs are so easily orphaned and forgotten, we've entered a phase of identity sprawl where the most dangerous actor in your environment might not be a person but a process. This has led the Identity Management Institute to declare that Identity and Access Management (IAM) frameworks addressing both human and non-human identities are the new security standard.

A New Category of Risk: Non-Human Identities (NHIs)

Most organizations weren't built to manage identities that think for themselves. Service accounts and automation scripts have been around for decades, but AI agents are fundamentally different. Far from passive background processes, they can initiate actions, escalate privileges, generate content, even impersonate users. And they're multiplying fast. NHIs are even more challenging because they don't go through HR onboarding, security training, or formal offboarding procedures. Once they're in, they tend to stay in – even when they become redundant.

The result is a widening visibility gap. Without a system of record for NHIs, organizations can't confidently answer basic questions: Who created this agent? What data can it access? Is it still needed? Even well-configured AI agents can become dangerous if they're granted excessive permissions or left running unsupervised. And that's before you consider the risk of compromise, where a hijacked AI account could act on its elevated access without raising red flags. Last year, the FBI warned of cybercriminals using AI to drive their phishing and other social engineering campaigns – but with the proliferation of NHIs, those tools are now waiting for them on the inside. The sheer scale of this "insider threat" means that manual management and tracking are simply no longer feasible.
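What would such a system of record even look like? It needn't be exotic. As a minimal sketch (in Python, with hypothetical field names), a registry only has to capture enough metadata to answer those three questions for every agent in the fleet:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NHIRecord:
    """One entry in a hypothetical NHI system of record."""
    agent_id: str                                    # unique identity per agent
    owner: str                                       # the accountable human
    created_at: datetime                             # who created it, and when
    scopes: list[str] = field(default_factory=list)  # what data it can access
    last_used: datetime | None = None                # is it still needed?

def is_stale(record: NHIRecord, max_idle_days: int = 30) -> bool:
    """Flag agents with no recent activity as candidates for review.

    Timestamps are assumed timezone-aware; the 30-day window is an
    illustrative tuning choice, not a recommendation.
    """
    if record.last_used is None:
        return True  # never used: almost certainly over-provisioned
    idle = datetime.now(timezone.utc) - record.last_used
    return idle > timedelta(days=max_idle_days)
```

Even a registry this simple turns "is it still needed?" from guesswork into a query.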

The Definition of IAM Needs to Expand

The AI cybersecurity industry is projected to grow from $31.4 billion this year to just under $220 billion by 2034. That's a dizzying development, but here's something that's been true for hundreds of years: security starts with deciding who can get in and what they're allowed to do once they're there. The advent of AI hasn't changed that fundamental truth – it's simply raised the stakes.

Identity has always been the foundation of security, deciding who gets access, under what conditions, and for how long. But the definition of "who" has changed. Today, the most active users in your environment may not be people at all. That shift demands a corresponding shift in how IAM operates. Legacy frameworks built around human onboarding and offboarding can't keep up with the fluid, fast-moving nature of non-human access.

That's why modern IAM deployments must move beyond basic role-based access controls and start governing intent. They need to handle autonomous systems that have blended into the network environment so effortlessly that they're eventually just forgotten about. So, we need to apply the same rigor to NHIs as we do to employees: assigning unique identities, enforcing least-privilege principles, maintaining real-time visibility, and capturing detailed audit trails. If an employee's role becomes redundant, you don't allow them to carry on coming to the office or let them keep a set of passwords to sensitive accounts – why should NHIs be treated any differently?
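To make that concrete, here's a hedged sketch of what NHI offboarding could look like, building on the NHIRecord registry above. The revoke_credentials and audit_log calls are stand-ins for whatever your secrets manager and logging pipeline actually expose:

```python
from datetime import datetime, timezone

def offboard_if_redundant(record: NHIRecord, review_passed: bool) -> None:
    """Apply employee-style offboarding rigor to an NHI."""
    if review_passed and not is_stale(record):
        return  # still justified and still in use: access stands until next review
    # Redundant or unreviewed: take back the badge and the passwords.
    revoke_credentials(record.agent_id)  # hypothetical secrets-manager call
    record.scopes.clear()                # least privilege: no role, no access
    audit_log(                           # hypothetical audit-trail hook
        f"offboarded NHI {record.agent_id} (owner: {record.owner})",
        at=datetime.now(timezone.utc),
    )
```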

The Double-Edged Blade of AI

NHIs aside, AI is now being weaponized on both sides of the cybersecurity battlefield. On the offensive front, tools like WormGPT have made it trivial for attackers to generate convincing phishing emails, impersonate executives, and launch automated exploits at scale, with no coding skills required. The old hallmarks of a scam – broken grammar, generic intros, clumsy impersonations – are fading fast. With generative AI in the mix, even low-skill actors can now mount convincing, high-impact campaigns.

But AI is just as vital on the defensive side. In fact, 70% of cybersecurity professionals say AI helps detect threats that would otherwise go unnoticed. When paired with the right models, AI can flag behavioral anomalies in real time, reduce response lag, and surface complex attack patterns buried in network noise. The real advantage comes from alignment: AI that's tuned to your environment, governed with intent, and trusted to act only within defined boundaries. As with NHIs, that trust depends on robust identity controls. Without them, even your best defensive tools can become liabilities – or worse, points of catastrophic failure.
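As a toy illustration of that defensive side, the sketch below baselines an identity's hourly request volume and flags statistical outliers. The threshold is a hypothetical tuning choice, and real deployments model far richer signals, but the principle is the same: spikes that vanish in raw logs stand out statistically.

```python
import statistics

def flag_anomalies(hourly_requests: list[int], z_threshold: float = 2.0) -> list[int]:
    """Return indices of hours whose request volume deviates sharply from baseline."""
    mean = statistics.fmean(hourly_requests)
    stdev = statistics.pstdev(hourly_requests)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, n in enumerate(hourly_requests)
            if abs(n - mean) / stdev > z_threshold]

# An agent that normally makes ~100 calls an hour suddenly makes 900:
print(flag_anomalies([98, 103, 101, 97, 900, 99, 102]))  # -> [4]
```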

Security From the Inside Out

In 2025, IAM is the control plane for trust. It's one thing to know who or what is inside your environment, but it's another thing entirely to know what each identity is doing, why it has access, and whether that access is still appropriate. That means shifting IAM from a static gatekeeper to a dynamic enforcer of policy across human and non-human actors alike. Cybersecurity has long been concerned with prevention, and rightly so, but modern IAM also brings governance and accountability. It enforces least privilege by default, applies continuous monitoring, and supports full lifecycle governance for AI agents just as it would for employees.
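Pulling those threads together, a deny-by-default authorization check (again building on the earlier NHIRecord sketch, so purely illustrative) shows what "dynamic enforcement" can mean in practice: every decision tests not just whether access was granted, but whether it's still justified, and every grant leaves a trace.

```python
from datetime import datetime, timezone

def authorize(record: NHIRecord, requested_scope: str) -> bool:
    """Deny-by-default policy decision for any identity, human or not."""
    if requested_scope not in record.scopes:
        return False  # least privilege: never granted means never allowed
    if is_stale(record):
        return False  # access that is no longer exercised gets cut off
    record.last_used = datetime.now(timezone.utc)  # every grant updates the trail
    return True
```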

The key takeaway: the next threat your business faces might not be the one banging at the door; it might be the one that's already inside, seemingly innocuous and hidden in plain sight. We're increasingly treating AI like a colleague, so let's start governing it like one.
