SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Rohit Aradhya, Barracuda

Barracuda’s frontline predictions 2026: The battle for reality and control in a world of agentic AI

Fri, 21st Nov 2025

The power and potential of agentic AI - adaptive, automated and independent - dominated security conversations during 2025. Rohit Aradhya, Barracuda's VP of Engineering and Managing Director, explains what he expects from agentic AI in 2026 and what this means for cybersecurity.

Multiple AI agents will work in tandem on actions to achieve a particular objective, with minimal or no human supervision. This opens the possibility of hijacking or poisoning agent-to-agent interactions, using attacker-controlled information to manipulate the coordinated actions. The lack of humans in the loop could delay detection and mitigation.

The evolution of agentic AI will lead to a rise in adaptive polymorphic malware - malware that can analyse the victim's environment and security tools and autonomously rewrite or alter its own code and behaviour to bypass signature-based and behavioural defences in real time. 


We will also see a significant rise in the misuse of public-facing application programming interfaces (APIs), API gateways, agentic service APIs and chatbot-based user interfaces. As agentic tools create and destroy API interfaces on the fly - between agents and with users - to provide and consume services, API lifecycle management will need to keep pace with this dynamic handling.
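One way to keep track of such short-lived interfaces is a central registry that agents must call when they spin an endpoint up or down, so security tooling always has a current inventory. The sketch below is purely illustrative - the `EphemeralApiRegistry` class, its default TTL and the agent names are assumptions for the example, not part of any product described in the article:

```python
import time
import uuid


class EphemeralApiRegistry:
    """Inventory of dynamically created agent API endpoints.

    Each endpoint carries a time-to-live (TTL) so that interfaces an
    agent forgets to tear down cannot linger indefinitely.
    """

    def __init__(self, default_ttl_seconds=300):
        self.default_ttl = default_ttl_seconds
        # endpoint_id -> (owner_agent, path, expires_at)
        self._endpoints = {}

    def register(self, owner_agent, path, ttl_seconds=None):
        """Record a newly created endpoint and return its identifier."""
        endpoint_id = str(uuid.uuid4())
        expires_at = time.time() + (ttl_seconds or self.default_ttl)
        self._endpoints[endpoint_id] = (owner_agent, path, expires_at)
        return endpoint_id

    def revoke(self, endpoint_id):
        """Remove an endpoint when its owning agent destroys it."""
        self._endpoints.pop(endpoint_id, None)

    def active_endpoints(self):
        """Return the live inventory, dropping anything past its TTL."""
        now = time.time()
        self._endpoints = {
            k: v for k, v in self._endpoints.items() if v[2] > now
        }
        return {k: (owner, path)
                for k, (owner, path, _) in self._endpoints.items()}
```

The design choice worth noting is the TTL: expiry is enforced by the registry itself, so an orphaned interface disappears from the inventory even if no agent ever revokes it.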

What should organisations do to protect their own agentic AI implementations?

As organisations start to implement agentic AI, a range of AI-specific security controls will be needed. These include robust identity and access management (IAM) for AI agents: every agent should be treated as a standalone entity with associated users, groups and resource access privileges. Organisations will need to extend their zero-trust framework to AI agents and tools, verifying and validating every request and action an agent attempts, regardless of its previous behaviour. They will need to increase the focus on monitoring the operational behaviour of systems, so that any deviations are detected quickly. Agent-to-agent communications need to be secured, properly authenticated, encrypted and logged for traceability and explainability, and to protect against attacks designed to poison those communications. Last, but not least, organisations need to ensure they understand and comply with standards such as the NIST AI Risk Management Framework.
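As a rough illustration of two of these controls - per-request zero-trust checks against an agent's privileges, and authenticated, logged agent-to-agent messages - here is a minimal Python sketch. The agent IDs, keys and privilege names are invented for the example; a real deployment would draw identities from the organisation's IAM and use proper key management and transport encryption rather than in-memory dictionaries:

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-agent identities and privileges; in practice these
# would come from the organisation's IAM system.
AGENT_KEYS = {
    "billing-agent": b"secret-key-1",
    "report-agent": b"secret-key-2",
}
AGENT_PRIVILEGES = {
    "billing-agent": {"read:invoices"},
    "report-agent": {"read:invoices", "write:reports"},
}

# Every decision is recorded for traceability and explainability.
AUDIT_LOG = []


def sign_message(agent_id, payload):
    """Produce an HMAC-signed message body for agent-to-agent traffic."""
    body = json.dumps(
        {"agent": agent_id, "payload": payload, "ts": time.time()},
        sort_keys=True,
    )
    sig = hmac.new(AGENT_KEYS[agent_id], body.encode(),
                   hashlib.sha256).hexdigest()
    return body, sig


def authorise(body, sig, required_privilege):
    """Zero trust: verify identity and privilege on every request."""
    msg = json.loads(body)
    agent_id = msg["agent"]
    key = AGENT_KEYS.get(agent_id)
    ok = (
        key is not None
        # Constant-time comparison guards against tampered messages.
        and hmac.compare_digest(
            hmac.new(key, body.encode(), hashlib.sha256).hexdigest(), sig)
        and required_privilege in AGENT_PRIVILEGES.get(agent_id, set())
    )
    AUDIT_LOG.append(
        {"agent": agent_id, "privilege": required_privilege, "allowed": ok})
    return ok
```

Because every call to `authorise` re-checks both the signature and the privilege, a message whose sender field is swapped or whose body is altered fails verification, and the attempt still lands in the audit log - the "verify every action, regardless of previous behaviour" property described above.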
