Microsoft warns of AI agent risks in Cyber Pulse brief
Microsoft has launched Cyber Pulse, a digital briefing for business leaders. Its first edition focuses on the security and governance challenges emerging as organisations deploy AI agents at scale.
The briefing argues that companies are moving quickly from experimentation to broader use of human and AI “agent” teams. It outlines a security approach based on agent visibility, governance across business and technology functions, and Zero Trust principles.
New figures cited in the briefing point to gaps in generative AI controls. Microsoft's 2026 Security Data Index found that 53% of Australian organisations lack GenAI-specific security controls, compared with 47% globally. Such controls include policies and monitoring to detect unauthorised agents.
Cyber Pulse also projects rapid growth in autonomous agents. Microsoft expects more than 1.3 billion autonomous AI agents to be in operation by 2028, linking the increase to rising machine-to-machine activity inside organisations.
Agent adoption
The briefing describes broad uptake of agent development across Microsoft platforms, including Fabric, Foundry, Copilot Studio and Agent Builder. It also says agent building is no longer limited to technical roles, with employees across business functions creating and using agents in daily work.
Data from Microsoft's environment shows more than 80% of the Fortune 500 is deploying active agents built with low-code and no-code tools. While wider access to agent-building tools is accelerating adoption, it is also making it harder for central IT and security teams to maintain oversight.
Cyber Pulse frames this as both a visibility and security issue, warning that agents can scale faster than some organisations can track them, creating business risk.
Microsoft also reports rising adoption across regions and industries, naming financial services, manufacturing and retail as leading sectors. Financial services accounts for about 11% of all active agents globally, manufacturing 13%, and retail 9%.
Shadow AI
Alongside sanctioned deployments, the briefing highlights growth in unsanctioned agents. It notes that some agents are approved by IT and others are not, and that this mix creates compliance and operational challenges.
The document links the spread of unsanctioned tools to workforce behaviour, citing a multinational survey of more than 1,700 data security professionals, commissioned by Microsoft and conducted by Hypothesis Group. It found that 29% of employees have already used unsanctioned AI agents for work tasks.
Cyber Pulse also warns that attackers could exploit agents' access and privileges. It uses the term “double agents” for AI agents that act against an organisation's interests due to excessive permissions, flawed instructions, manipulation, or exposure to untrusted inputs.
Emerging threats
The briefing points to security scenarios it says are already appearing in the field and in internal testing. Microsoft's Defender team recently identified a fraud campaign in which multiple actors used an AI attack technique it calls "memory poisoning", which persistently manipulates an AI assistant's memory to influence its future responses.
It also cites research by Microsoft's AI Red Team in a secure test environment. Researchers documented cases where agents were misled by deceptive interface elements, including harmful instructions embedded in everyday content. The team also found that manipulated task framing could subtly redirect an agent's reasoning.
These examples are presented as evidence that agent management needs systematic controls rather than ad hoc safeguards. The briefing argues that enterprises need full observability and management of all agents that interact with their environment, with controls enforced centrally.
Zero Trust
Cyber Pulse sets out an approach based on Zero Trust principles applied to AI agents as well as human users. It lists least-privilege access, explicit verification, and an assumption that compromise can occur as the foundation for agent security.
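In code terms, those three principles reduce to a simple gate on every agent request. The sketch below is illustrative only, not Microsoft's implementation; the `Agent` record, scope names, and `authorise` function are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent record with explicitly granted scopes."""
    agent_id: str
    verified: bool                                  # identity explicitly verified
    scopes: set = field(default_factory=set)        # least-privilege grants only

def authorise(agent: Agent, requested_scope: str) -> bool:
    """Explicitly verify, grant least privilege, assume breach:
    anything not provably allowed is denied."""
    if not agent.verified:
        return False
    return requested_scope in agent.scopes

payroll_bot = Agent("payroll-bot", verified=True, scopes={"read:timesheets"})
print(authorise(payroll_bot, "read:timesheets"))   # True: explicitly granted
print(authorise(payroll_bot, "write:payments"))    # False: never granted
```

The point of the default-deny return path is that a compromised or misconfigured agent can only act within scopes it was explicitly given, which is the "assume compromise" posture the briefing describes.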
The briefing places “observability” at the centre of its guidance, defining it as a control plane spanning IT, security, developers and AI teams. It says organisations should be able to identify which agents exist, who owns them, what systems and data they touch, and how they behave.
Microsoft breaks observability into five areas: a centralised agent registry; access control aligned with identity and policy; real-time visualisation through dashboards and telemetry; interoperability across platforms and ecosystems under a consistent governance model; and protections that detect compromised or misaligned agents.
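The first of those areas, a centralised registry, can be sketched in a few lines: agents seen in telemetry but absent from the registry are exactly the "shadow" agents the briefing warns about. This is a hypothetical illustration; the registry layout and agent names are invented.

```python
# Hypothetical central registry: agent -> owner and systems touched,
# mirroring the "which agents exist, who owns them" questions above.
registry = {
    "sales-summariser": {"owner": "revenue-ops", "systems": ["crm"]},
    "invoice-triage":   {"owner": "finance",     "systems": ["erp"]},
}

# Agent identifiers observed in runtime telemetry (invented for illustration).
observed_in_telemetry = ["sales-summariser", "invoice-triage", "qtr-forecast-helper"]

# Anything observed but never registered is an unsanctioned agent to review.
unregistered = [a for a in observed_in_telemetry if a not in registry]
print(unregistered)  # ['qtr-forecast-helper']
```

A real control plane would feed this comparison from identity and telemetry pipelines rather than static lists, but the governance check itself is the same set difference.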
The publication frames this as a leadership issue as much as a technical one, noting that ungoverned agents can affect security, business continuity and reputation. It adds that accountability sits with executive leadership alongside the Chief Information Security Officer.
“Organisations urgently need effective governance and security to safely adopt agents, promote innovation, and reduce risk,” Microsoft said.