Check Point launches AI Defence Plane for enterprise
Check Point has launched its AI Defence Plane for enterprise AI security, aimed at organisations managing AI systems across employee tools, applications and autonomous agents.
The platform is designed as a unified control plane for governing how AI is connected, deployed and operated across a business. It targets risks that arise as AI systems move beyond content generation into tasks that involve accessing data, invoking tools and taking actions inside enterprise environments.
That shift moves the security focus from model outputs alone to the behaviour of AI systems in live settings. The attack surface now includes agentic workflows, delegated actions, non-human access and shadow agents operating within business systems.
The AI Defence Plane builds on Check Point's existing AI Security platform, along with technology from ThreatCloud AI and the Lakera and Cyata acquisitions. It combines discovery, governance, observability, runtime control and continuous validation across the AI execution lifecycle.
At the centre of the system is what Check Point describes as an AI-native security engine. It makes real-time decisions using analysis of millions of AI interactions, adversarial testing and threat intelligence, with response times of less than 50 milliseconds across more than 100 languages.
Three modules
The launch includes three main modules, each at a different stage of availability.
Workforce AI Security is available immediately and focuses on how employees use AI-powered applications. It provides visibility, governance and runtime safeguards, while enforcing policy in real time across approved and unapproved AI tools.
AI Application & Agent Security is also available immediately. It is designed to discover where AI is used across an organisation, assess what data and tools those systems can access, evaluate their behaviour and govern the permissions and trust relationships tied to agentic operations.
AI Red Teaming is in limited release. This part of the platform provides continuous adversarial testing of prompts, reasoning paths, workflows, tool use and agent behaviour to identify weaknesses before systems are deployed more widely.
The announcement reflects a broader industry effort to secure AI systems in production rather than relying only on development-stage controls or model guardrails. In practice, that means monitoring and constraining how AI applications and agents behave when linked to internal systems, sensitive data and business processes.
David Haber, VP of AI Security at Check Point, outlined that shift in the company's view of the market. "The enterprise is entering the agentic era. AI is no longer limited to generating content. It is beginning to access systems, use tools, chain actions, and operate with increasing autonomy. That changes the security model," he said.
He added: "The challenge is no longer just what AI says, but what AI can do. Organisations need more than model safety. They need runtime control over how AI behaves inside real environments. The AI Defence Plane provides that control across employees, applications, and AI agents."
Check Point argues that governance and enforcement need to sit at runtime, where business risk emerges as AI interacts with operational systems. That view is gaining traction as companies test software agents that can query infrastructure, trigger workflows and work with internal data without direct human intervention at each step.
The red teaming element also drew support from outside Check Point. "Red teaming has become essential for agentic systems," said George Davis, Product Leader, Sierra. "When AI can query infrastructure, trigger workflows, and interact with sensitive data, the risk is no longer theoretical. Organisations need continuous testing to understand how these systems can be manipulated, where controls break down, and how resilient they are in production."
The AI Defence Plane sits within Check Point's broader AI Security portfolio. The workforce and application modules are available now, while the red teaming module remains in limited release.