SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Illustration multi agent robot ai systems interconnected tasks australia city

Australian report warns of emerging risks in multi-agent AI

Tue, 29th Jul 2025

A new report from the Gradient Institute, supported by the Australian Government Department of Industry, Science and Resources, has identified six distinct failure modes associated with the deployment of multiple AI agents within organisations, highlighting risks that extend beyond current single-agent systems.

The report details how Australian businesses are increasingly moving from using single AI agents to exploring networks of agents that communicate and coordinate with each other.

Such multi-agent systems are expected to become more common as organisations seek to automate complex workflows, such as employee onboarding handled by HR and IT AI agents, or customer service systems where each agent tackles specific types of customer enquiries.

This evolution towards collaborative AI architectures introduces new risks, according to Chief Scientist and report co-author Dr Tiberio Caetano. He stated, "The deployment of LLM-based multi-agent systems represents a fundamental shift in how organisations need to approach AI risk and governance. As businesses move towards adopting collaborative agent architectures to automate complex workflows and decision-making processes, the risk landscape transforms in ways that cannot be adequately addressed through traditional single-agent approaches."

Dr Caetano emphasised that traditional safety approaches designed for single AI agents are insufficient in the context of multi-agent systems.

"In short: A collection of safe agents does not make a safe collection of agents. As multi-agent systems become more prevalent, the need for risk analysis methods that account for agent interactions will only grow."

Emergence of new failure modes

Six new failure modes unique to multi-agent scenarios are identified in the report "Risk Analysis Tools for Governed LLM-based Multi-Agent Systems". These include:

  • Inconsistent performance of a single agent derailing complex multi-step processes
  • Cascading communication breakdowns as agents misstate or misinterpret messages
  • Shared blind spots, and repeated mistakes, when teams of agents use similar AI models
  • Groupthink dynamics where agents reinforce each other's errors
  • Coordination failures due to a lack of understanding of what peers know or need
  • Competing agents optimising for individual goals, undermining organisational objectives

Dr Caetano explained the gravity of these risks: "For example, organisations that run critical infrastructure, such as major technology companies, government departments, banks, energy providers and healthcare networks are likely to progressively deploy multi-agent systems within their organisational boundaries. If failures occur in these settings, the consequences could disrupt essential services for millions of people due to the scale and criticality of these operations."

Calls for new risk management techniques

The research concludes that existing software testing and single-agent risk management approaches do not adequately address the complexity presented by multiple AI agents working in tandem. Instead, it recommends progressive, stage-based risk management. This process should begin with controlled simulations to observe agent interactions, move through sandboxed testing environments, and culminate in carefully monitored pilot deployments. The aim is to detect failure modes early enough to ensure that any consequences remain manageable and reversible.

Gradient Institute's Head of Research Engineering and lead author of the report, Dr Alistair Reid, stressed the necessity of these developments in risk assessment.

"Just as a well-functioning team requires more than having competent individuals, reliable multi-agent AI systems need more than individually competent AI agents. Our report provides a toolkit for organisations to identify and assess key risks that emerge when multiple AI agents work together."

The report provides guidance for organisations on simulation techniques to observe interactions over time, red teaming to uncover vulnerabilities, and conceptual frameworks for understanding the limitations of AI measurement science. It also highlights the need for context-specific risk evaluation beyond the foundational identification and analysis processes covered in the document.

Increasing relevance for Australian industry

Bill Simpson-Young, CEO of Gradient Institute, remarked that this research aligns with the current stage of AI adoption in Australia.

"Australian businesses are accelerating their AI adoption, including greater use of AI agents. By providing practical tools grounded in rigorous science, we're enabling organisations to better understand the novel risks that emerge when AI agents work together – and how to start addressing them," said Mr Simpson-Young.
"The path forward isn't about avoiding this technology; it's about deploying it responsibly with awareness of both its potential and its pitfalls."

The findings are particularly aimed at organisations implementing AI agents within internal governance frameworks, giving them control over agent behaviour and deployment.

This focus is anticipated to play a crucial role as organisations expand their reliance on AI-driven automation.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X