SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Moody soc night cybersecurity analyst ai threat maps hidden agents

Coalfire launches AI threat hunting for shadow agents

Fri, 20th Mar 2026

Coalfire has launched an AI threat hunting service through its DivisionHex practice, as organisations report rising security incidents linked to generative and agentic AI tools.

The service targets three areas that security teams often struggle to detect in daily operations: shadow AI adopted without oversight, compromised AI agents, and what Coalfire calls "agentic insider risk". Coalfire defines agentic insider risk as situations where AI systems act like insiders because they have access to sensitive data and operational systems.

New survey data from Richmond Advisory Group points to a gap between AI roll-outs and security governance. Nearly 90% of surveyed organisations reported an AI-driven incident in the past 18 months, and 63% said their security teams have a primary mandate to use AI to reduce costs.

The offering extends threat-hunting methods into AI deployments and surrounding workflows. It looks for evidence that AI systems create new attack paths or operate beyond intended permissions, and for signs that attackers already present are using AI systems to gain broader access or maintain persistence.

Privileged actors

Agentic AI systems are increasingly connected to enterprise tools and data stores, and many are configured to take actions rather than only generate text. When controls fail, the risk rises because an agent may be able to execute tasks quickly and at scale.

Neil Wyler, Vice President of Defensive Services at Coalfire, said: "AI agents are quickly becoming highly privileged actors inside corporate environments. They can access sensitive data, perform automated tasks, and interact with core systems. If those agents are manipulated, compromised or misconfigured, they don't just behave like a malicious insider - they become one, exfiltrating data or enabling further compromise without anyone realizing it."

Coalfire's concept of agentic insider risk centres on the combination of high privilege and indirect control. Agents may follow instructions from users, upstream systems, and integrated applications, and may also act on content pulled from emails, documents, tickets, and chat logs. When safeguards are weak, these inputs can become routes for manipulation.

What it hunts

DivisionHex teams conduct investigative reviews across enterprise environments, including identifying shadow AI introduced by employees without security oversight and unauthorised AI integrations that use corporate credentials or sensitive data.

The service also assesses whether AI agents access data or systems beyond their intended scope by mapping permissions and looking for activity that does not align with expected workflows. It also examines potential manipulation of AI models or agents, along with signs that threat actors are using AI systems to deepen access.

Coalfire lists several ways agentic AI systems can be manipulated, including prompt injection and data poisoning. Other risks include unauthorised credential use and privilege escalation through automation. It also flags external influence that alters AI behaviour. According to the announcement, these methods can lead to unintended access to sensitive information or unauthorised actions.

Threat hunting has traditionally focused on endpoints, identity systems, and network activity. Applying those practices to AI adds technical and operational challenges: teams must track which models are in use, how they connect to other systems, what data they can access, and what actions they can trigger. Depending on system design, teams may also need records of prompts, agent decisions, tool calls, and downstream effects.

Governance pressure

Richmond Advisory Group's findings suggest many security leaders face cost-focused directives as AI adoption spreads across business workflows. That combination can reduce tolerance for controls that add friction, even when systems touch sensitive data or core processes.

Christina Richmond, Principal Analyst at Richmond Advisory Group, said: "AI adoption in the workplace is moving faster than most organizations' ability to monitor and govern it. Without visibility into how employees use generative and agentic AI tools, companies risk creating a new wave of shadow AI and potentially unknown identities. Adoption without governance and monitoring introduces unexpected operational costs. Employing proactive AI threat hunting ensures organizations can harness AI safely while avoiding the downstream risks that come from unmanaged use."

The AI threat hunting service is available immediately through DivisionHex. It can be purchased as a standalone engagement or integrated into broader security assessments, and is positioned as a way to improve visibility into AI use and provide remediation guidance once risks are identified.

Coalfire plans to deliver the service through its existing consulting and assessment operations, with DivisionHex teams reviewing AI usage patterns and agent behaviour inside enterprise environments.