SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

CrowdStrike warns AI will redefine cyber threats by 2026

Wed, 10th Dec 2025

CrowdStrike expects artificial intelligence to reshape the cyber threat landscape in 2026, predicting a surge in prompt-injection attacks, AI-driven zero-day discovery, and a rapid expansion of non-human identities across enterprises.

Senior leaders at the security company say that security operations teams will need to re-architect how they monitor, orchestrate and control both human and machine activity if they are to keep pace with adversaries using AI at scale.

Elia Zaitsev, Chief Technology Officer, CrowdStrike, said, "Just as phishing defined the email era, prompt injection is defining the AI era. Adversaries are embedding hidden instructions to override safeguards, hijack agents, steal data, and manipulate models - turning the AI interaction layer into the new attack surface and prompts into the new malware."

Prompt injection

Zaitsev said organisations should prepare for AI systems themselves to become a primary attack surface, as businesses embed models and agents into workflows, customer interfaces and internal tools.

He expects dedicated monitoring and control to emerge as a core security requirement for AI deployments, much as endpoint detection and response (EDR) became standard for laptops and servers.

"In 2026, AI Detection and Response (AIDR) will become as essential as EDR, with organisations requiring real-time visibility into prompts, responses, agent actions, and tool calls to contain AI abuse before it spreads, ensuring AI drives innovation, not risk," said Zaitsev.

Agentic SOCs

CrowdStrike forecasts that AI agents will be used not only by attackers but also inside security operations centres (SOCs) to speed up detection, investigation and response.

"Adversaries are already using AI to move faster than humanly possible - and legacy SOCs can't keep up. In 2026, defenders will evolve from alert handlers to orchestrators of the agentic SOC: intelligent agents that reason, decide, and act across the security lifecycle at machine speed, always under human command. This is the model that will reshape the balance between adversaries and defenders, accelerating outcomes and giving humans the time and clarity to focus on strategy, judgment, and impact," said Zaitsev.

He set out several conditions that he believes must be in place before this model can function reliably at scale.

"The success of this evolution will be dependant on the following prerequisites: Providing both agents and analysts complete environmental context with the ability to action any signal immediately. An agentic workforce of mission-ready agents trained on years of expert SOC decisions to automate high-friction tasks with speed and precision. Benchmarks and validation to prove the effectiveness of agents. The ability for organisations to build and customise their own agents to satisfy unique needs.Orchestrating agent-to-agent and analyst-to-agent collaboration within one coordinated system guided by human expertise. Security analysts are not going away - they're being elevated by a fleet of agents that work at machine speed," said Zaitsev.

Non-human identities

Zaitsev also expects identity management to shift from a primarily human-centric discipline to one dominated by machine and agent identities with wide-ranging access rights.

"In 2026, AI agents and non-human identities will explode across the enterprise, expanding exponentially and dwarfing human identities. Each agent will operate as a privileged super-human with OAuth tokens, API keys, and continuous access to previously siloed data sets, making them the most powerful and most dangerous entities in your environment."

He said this change would require security teams to track and control machine actions at a granular level and to maintain clear lines of accountability back to specific staff and teams.

"Identity security built for humans won't survive this shift. Security teams will need real-time visibility, instant containment, and the ability to trace every agent action back to the human who created it. When an AI agent wires money to the wrong account or leaks intellectual property, "the AI did it" won't be an acceptable answer. This is the era where identity security means protecting entities that don't have a pulse," said Zaitsev.

Zero-day surge

Adam Meyers, Senior Vice President of Counter Adversary Operations at CrowdStrike, expects AI to accelerate both software creation and vulnerability discovery, leading to more zero-day flaws being identified and exploited.

"In 2026, we'll likely see an explosion of zero-day vulnerabilities driven by AI. As AI accelerates code generation and software development, it's also becoming ideally suited to finding flaws in software. There are two primary ways to identify these vulnerabilities: targeted analysis, which is resource-intensive and typically requires a human in the loop. The other is commonly called fuzzing and involves automation to identify flaws. GenAI is a game-changer for the latter. AI can optimise fuzzing methodologies and analyse crash reports at scale, rapidly surfacing exploitable flaws."

Meyers said early activity suggests that better-resourced attackers are already experimenting with AI-assisted methods to locate and weaponise vulnerabilities more cheaply and at greater speed.

"Early indicators suggest advanced adversaries are already investing in this research, driving down the cost of discovering and weaponising vulnerabilities. These exploits are the keys that adversaries use to gain initial access to their targets. The defenders who succeed will be those using AI with the same speed and precision: detecting, patching, and proactively hunting for zero-days as fast as they're found," said Meyers.
