SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Fri, 20th Mar 2026

Token Security has launched an intent-based security model for AI agents, aiming to move away from static permissions and inherited human roles toward controls tied to what an autonomous system is meant to do.

The New York-based firm says the approach aligns an agent's access rights with its stated and observed purpose, using identity controls as the enforcement layer across enterprise environments.

Companies are starting to deploy autonomous agents across business applications, cloud services and internal infrastructure. These agents often operate using service accounts, API credentials and cloud roles, putting identity and access management at the centre of how organisations govern machine-driven activity.

Traditional access controls assume behaviour stays within predictable bounds once permissions are set. Two AI agents holding identical authorisations can act in very different ways, making it harder for security teams to rely on static role-based access models, Token Security argues.

Intent as control

Token Security describes its model as "intent-based AI agent security". It treats intent as a key security attribute alongside identity, and applies least-privilege access based on what an agent is supposed to accomplish.

In the company's view, prompt filtering and model guardrails do not address the full range of risks when agents interact directly with enterprise systems. Those risks include over-privileged credentials, unmonitored actions across SaaS tools, and unclear ownership and authorisation.

"Prompt filtering and guardrails were not designed to fully contain the security risks introduced by autonomous AI agents," said Itamar Apelblat, co-founder and CEO of Token Security. "With our intent-based approach, the Token Security platform understands what AI agents are supposed to do and ensures they only have the permissions required to achieve their specified goals. As soon as their intent changes or they demonstrate risky behavior, our solution automatically intervenes to neutralize the threat."

The model is built around letting security teams define boundaries for an agent based on scope and purpose, then enforcing those boundaries with identity-linked controls. Token Security says it evaluates both "declared" and "observed" intent and adjusts authorisation decisions accordingly.
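The article does not publish Token Security's implementation, but the enforcement idea it describes can be sketched in a few lines. The following is a minimal illustrative model, not the vendor's code: the agent identifier, action names, and the `authorise` helper are all hypothetical. An agent declares an intent as a set of allowed actions, and any observed action outside that set is denied.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIntent:
    """Hypothetical declared intent: an agent's purpose as a set of allowed actions."""
    agent_id: str
    allowed_actions: set = field(default_factory=set)

def authorise(intent: AgentIntent, observed_action: str) -> bool:
    """Permit an observed action only if it falls inside the declared intent boundary."""
    return observed_action in intent.allowed_actions

# Example: an invoicing agent declared to read and create invoices, nothing else.
billing_agent = AgentIntent("billing-bot", {"invoices:read", "invoices:create"})
print(authorise(billing_agent, "invoices:read"))   # True: within declared scope
print(authorise(billing_agent, "users:delete"))    # False: outside the intent boundary
```

In a real system the "observed" side would come from runtime telemetry rather than a single string, and the decision would feed an identity provider or gateway rather than a boolean return, but the shape of the check is the same.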

Five elements

Token Security says its platform operationalises the model through five functions: continuous discovery of AI agents, their owners and access; analysis of an agent's stated and observed intent; dynamic creation and enforcement of least-privilege access policies aligned to that intent; flagging and constraining actions outside intent boundaries; and lifecycle governance to help prevent access drift and orphaned agents.
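The third function, deriving least-privilege policies from intent, amounts to a set comparison between what an agent declares it needs and what it currently holds. A rough sketch under that assumption (the permission strings and `least_privilege_gap` helper are invented for illustration):

```python
def least_privilege_gap(declared_needs: set, current_grants: set):
    """Compare declared needs against current grants.

    Returns (excess, missing): excess grants are candidates for revocation,
    missing grants are ones the declared intent would legitimately require.
    """
    excess = current_grants - declared_needs
    missing = declared_needs - current_grants
    return excess, missing

declared = {"crm:read", "email:send"}
granted = {"crm:read", "crm:write", "email:send", "admin:*"}
excess, missing = least_privilege_gap(declared, granted)
# excess == {"crm:write", "admin:*"}; missing == set()
```

Here the agent inherited `crm:write` and a wildcard admin grant it never declared a need for, which is exactly the over-privilege pattern the article describes.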

This focus reflects a common concern in identity security: credentials and permissions tend to accumulate over time, especially in environments with automation, short-lived projects and frequent changes to cloud roles and API keys. As agents are created and modified rapidly, security teams may struggle to keep inventories current and remove access when it is no longer needed.
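Lifecycle governance of this kind is essentially an inventory sweep: flag agents whose owner has left and agents whose credentials have sat unused past a threshold. A simplified sketch, with hypothetical agent records and a 90-day idle cutoff chosen arbitrarily:

```python
from datetime import datetime, timedelta

def find_stale_agents(inventory, active_owners, now, max_idle=timedelta(days=90)):
    """Flag agents that are orphaned (owner no longer active) or idle too long."""
    flagged = []
    for agent in inventory:
        if agent["owner"] not in active_owners:
            flagged.append((agent["id"], "orphaned"))
        elif now - agent["last_used"] > max_idle:
            flagged.append((agent["id"], "idle"))
    return flagged

now = datetime(2026, 3, 20)
inventory = [
    {"id": "etl-bot",    "owner": "alice", "last_used": now - timedelta(days=2)},
    {"id": "report-bot", "owner": "bob",   "last_used": now - timedelta(days=200)},
    {"id": "legacy-bot", "owner": "carol", "last_used": now - timedelta(days=10)},
]
flagged = find_stale_agents(inventory, active_owners={"alice", "bob"}, now=now)
# flagged == [("report-bot", "idle"), ("legacy-bot", "orphaned")]
```

A production version would pull the inventory from cloud IAM and SaaS APIs and act on the flags (revoke, reassign ownership), but the drift problem it addresses is the one described above.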

"Intent is the missing dimension in AI agent security, since security teams must understand what an agent is designed to accomplish before they can safely govern what it can access," said Ido Shlomo. "AI agents shouldn't inherit the full permissions of the humans who create them. When they do, organizations lose visibility and control over what those systems can access and execute. By understanding what an agent is designed to do and enforcing access based on its stated purpose, organizations can keep autonomous systems operating within safe boundaries."

RSA focus

Token Security will demonstrate the technology at the RSA Conference as a finalist in the RSAC 2026 Innovation Sandbox, and will also exhibit in the South Hall.

The launch comes as security teams assess how to manage agentic AI that can trigger actions across multiple systems, rather than only generating text. The shift has led vendors and practitioners to revisit governance models and consider how identity, monitoring and policy enforcement can adapt to autonomous behaviour.

Token Security says the intent-based AI agent security features are available immediately as part of its platform. The company is backed by Notable Capital, Crosspoint Capital and TLV Partners.