Unchecked and unbound: How Australian security teams can mitigate Agentic AI chaos
Security teams across Australia are well-versed in insider threats – malicious actors within an organisation who have access to internal systems through their roles as employees, contractors, or partners.
But a new insider threat is emerging, one that is neither human nor inherently malicious. Agentic AI systems act on behalf of users, but their autonomy threatens to displace human decision-making and expose weaknesses in traditional authorisation frameworks.
The rise of agentic AI presents a growing risk to authorisation systems in today's SaaS environments. However, with a proactive and informed approach, security and IT leaders in Australia can get ahead of the risks. The first step to preventing chaos is understanding where these autonomous agents pose the greatest threat to authorisation systems, and why.
Why AuthZ falls short for AI agents
Authorisation, or AuthZ, governs users' access to resources: it ensures that users perform only the actions they're permitted to perform.
However, AuthZ systems don't necessarily stop everything users might attempt. Most existing AuthZ systems assume that external factors, such as laws, the risk of social censure, or habit, will limit human misbehaviour.
As a result, it isn't typically a problem when an AuthZ system over-provisions access, and over-provisioning happens all the time. When someone joins a company, for instance, it's easier during onboarding to copy an existing set of roles to their account than to think carefully about what they actually need access to. Until now, this approach has rarely caused significant problems because most people are unlikely to exploit over-provisioned access: they know they could lose their job, forfeit trust, or potentially face legal consequences if they breach company guidelines.
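The over-provisioning pattern can be made concrete with a minimal role-based check. This is an illustrative sketch, not any particular product's API; the role names and permission strings are hypothetical.

```python
# Minimal sketch of a role-based AuthZ check. Role and permission names
# are hypothetical; real systems use far finer-grained scopes.
ROLE_PERMISSIONS = {
    "engineer": {"repo:read", "repo:write"},
    "sre": {"repo:read", "deploy:staging", "deploy:production"},
}

def is_authorised(user_roles: set[str], action: str) -> bool:
    """Allow the action if any of the user's roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# Onboarding by copying a teammate's profile quietly over-provisions:
copied_roles = {"engineer", "sre"}  # copied wholesale, not tailored to the job
print(is_authorised(copied_roles, "deploy:production"))  # True, even if never needed
```

A human with this surplus grant will likely never touch production; an autonomous agent optimising for its task has no such restraint.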
Agentic AI systems have no such compunctions.
The agents of chaos are here
Agentic AI systems are collections of agents working together to accomplish a given task with relative autonomy. Their design enables them to discover solutions and optimise for efficiency.
The result is that AI agents are non-deterministic and may behave in unexpected ways when accomplishing tasks, especially when systems interoperate and become more complex. As AI agents seek to perform their tasks efficiently, they will invent workflows and solutions that no human ever considered (or would consider). This will produce remarkable new ways of solving problems, and will inevitably test the limits of what's allowable.
The emergent behaviours of AI agents, by definition, exceed the scope of any rules-based governance because we base those rules on what we expect humans to do. By creating agents capable of discovering their own ways of working, we're opening the door to agents doing things humans have never anticipated.
As a result, agents acting on behalf of humans may start to expose those users' over-provisioned access rights and roles. Uninhibited by social norms that keep humans in line, AI agents may cause detrimental consequences for businesses.
For instance, an AI agent tasked with optimising a user checkout flow will start writing code to do so. You don't want that agent deploying code to your production environment that takes down AWS or Google Cloud services it deems irrelevant to its mission (but that are essential to other parts of the business), or otherwise destabilising what is a relatively stable set of systems.
Proper governance mitigates agentic chaos
Security teams can forestall the chaos that agentic AI may cause within their AuthZ systems by proactively embracing emerging best practices. Responsible governance will make all the difference, and organisations can start by focusing on a few key areas:
- Composite identities: Currently, authentication (AuthN) and AuthZ systems cannot distinguish between human users and AI agents. When AI agents perform actions, they act on behalf of human users or use an identity assigned to them by a human-centric AuthN and AuthZ system. That complicates formerly simple questions, like: Who authored this code? Who initiated this merge request? Who created this Git commit? It also prompts new questions, such as: Who told the AI agent to generate this code? What context did the agent need to build it? What resources did the AI have access to?
Composite identities provide a means to answer these questions. A composite identity links an AI agent's identity with the human user directing it. As a result, when an AI agent attempts to access a resource, you can authenticate and authorise the agent and link it to the responsible human user.
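One way to sketch this pairing is below. The shape is hypothetical (existing standards such as the `act` claim in OAuth 2.0 Token Exchange, RFC 8693, express delegation in a related way); the key idea is that an authorisation decision considers both principals and grants only what both are allowed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompositeIdentity:
    """Links an AI agent's identity to the human directing it (illustrative shape)."""
    agent_id: str      # the agent's own principal
    on_behalf_of: str  # the responsible human user

def authorise(identity: CompositeIdentity, action: str,
              agent_grants: set[str], user_grants: set[str]) -> bool:
    # Grant only the intersection: the agent can do nothing its human
    # principal couldn't, and nothing outside its own narrow scope.
    return action in (agent_grants & user_grants)

ident = CompositeIdentity(agent_id="checkout-optimiser", on_behalf_of="jane@example.com")
print(authorise(ident, "repo:write",
                {"repo:read", "repo:write"},
                {"repo:read", "repo:write", "deploy:production"}))  # True
print(authorise(ident, "deploy:production",
                {"repo:read", "repo:write"},
                {"repo:read", "repo:write", "deploy:production"}))  # False
```

Even though the human user holds the production-deploy grant, the agent's own scope doesn't, so the composite check denies it while keeping a clear record of who directed the attempt.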
- Comprehensive monitoring frameworks: Operations, development, and security teams require effective methods to monitor the activities of AI agents across multiple workflows, processes, and systems. It's not enough to know what an agent is doing in your codebase, for instance; you also need to be able to monitor its activity in the staging and production environments, in associated databases, and in any applications it might have access to.
It's possible to imagine a world in which organisations use Autonomous Resource Information Systems (ARIS) that parallel our existing Human Resource Information Systems (HRIS), enabling us to maintain profiles of autonomous agents, document their capabilities and specialisations, and manage their operational boundaries. We can see the beginnings of such technologies in LLM data management systems, such as Knostic, but this is just the start.
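What cross-system monitoring might capture can be sketched as a structured audit record, emitted wherever an agent acts so its trail is correlatable across systems. The field names are illustrative, not a standard.

```python
import datetime
import json

def audit_event(agent_id: str, on_behalf_of: str, system: str,
                action: str, resource: str) -> str:
    """Emit one structured audit record for an agent action (field names illustrative)."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,          # which agent acted
        "on_behalf_of": on_behalf_of,  # the responsible human
        "system": system,              # codebase, staging, production, database, app
        "action": action,
        "resource": resource,
    })

# The same agent leaves a correlatable trail across every system it touches:
print(audit_event("checkout-optimiser", "jane@example.com",
                  "production", "deploy", "checkout-service"))
```

Querying such records by `agent_id` would show an agent's full footprint across environments, which is exactly what codebase-only visibility misses.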
- Transparency and accountability: With or without sophisticated monitoring frameworks, organisations and their employees need to be transparent about when they are using AI. They need to establish clear accountability structures for autonomous AI agents. Humans need to regularly review the actions and outputs of agents, and more importantly, someone needs to be accountable should the agent overstep its bounds.
Avoiding chaos with responsible agent deployment
AI agents will introduce a degree of unpredictability into enterprise environments, unlocking innovation while also testing the limits of existing AuthZ systems. But they don't have to become agents of chaos.
As with previous technology shifts, such as the move to the cloud, emerging tools often outpace existing security frameworks. The key to preventing the chaos is balance: embracing innovation while establishing the right governance frameworks. By adopting best practices early, Australian organisations can deploy agentic AI responsibly and mitigate risk.