CIS partners to create new AI agent security guidance for firms
The Centre for Internet Security has entered into a partnership with Astrix Security and Cequence Security to produce dedicated guidance for organisations adopting artificial intelligence technologies and agent-based systems.
The initiative aims to address new cybersecurity risks arising from autonomous operations, expanded tool connectivity, and data handling in AI environments.
AI security guidance
The agreement focuses initially on the production of two companion guides. The first will provide security direction for the lifecycle management of AI agent environments. The second will cover Model Context Protocol (MCP) environments, with an emphasis on potential vulnerabilities created when MCP agents, tools, and registries interact within enterprise IT systems.
CIS will adapt its widely recognised Critical Security Controls framework for these new contexts. The effort is expected to supply organisations with clear, actionable safeguards designed for the unique technical landscape of agentic AI systems, including guidance on the new types of threats these systems introduce.
MCP environment risks
Security experts involved in the partnership have identified a number of risks specific to MCP environments. These include unregulated distribution of credentials, uncontrolled local code execution, unsanctioned third-party linkages, and the potential for uncontrolled data exchanges between AI models and integrated tools. The guides will offer mitigation strategies and standards for control and oversight.
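The "control and oversight" theme can be made concrete with a small illustration. The sketch below (hypothetical names throughout, not drawn from the forthcoming guides or any MCP SDK) shows the kind of safeguard the text describes: gating an agent's tool invocations against an explicit allowlist, denying local code execution by default, and recording every decision for audit.

```python
# Minimal sketch of an allowlist-plus-audit control for agent tool calls.
# All names here are illustrative assumptions, not part of any real MCP API.
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    allowed_tools: set[str] = field(default_factory=set)  # tools the agent may invoke
    allow_local_exec: bool = False                        # local code execution off by default

def authorize_call(policy: ToolPolicy, tool_name: str, audit_log: list[str]) -> bool:
    """Return True only if the tool is permitted; log every decision for oversight."""
    # Treat execution-capable tools as a separate, stricter class of risk.
    if tool_name in {"shell", "exec"} and not policy.allow_local_exec:
        audit_log.append(f"DENY {tool_name} (local execution disabled)")
        return False
    permitted = tool_name in policy.allowed_tools
    audit_log.append(("ALLOW " if permitted else "DENY ") + tool_name)
    return permitted

log: list[str] = []
policy = ToolPolicy(allowed_tools={"search_docs"})
print(authorize_call(policy, "search_docs", log))  # True
print(authorize_call(policy, "shell", log))        # False
```

A production control would also bind policies to individual agent identities and enforce them at the gateway between agents and tools, rather than in the agent process itself.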
"AI presents both tremendous opportunities and significant risks," said Curtis Dukes, Executive Vice President and General Manager of Security Best Practises, Centre for Internet Security. "By partnering with Astrix and Cequence, we are ensuring that organisations have the tools they need to adopt AI responsibly and securely."
Enterprise adoption support
Astrix Security will focus on securing both AI agents and the Non-Human Identities (NHIs) such as API keys, service accounts, and OAuth tokens that facilitate their operation. The company will contribute to discovery methodologies and provide insight into the governance needed for responsible deployment at enterprise scale.
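The discovery of non-human identities often starts with finding credential-shaped strings in configuration and code. The fragment below is a minimal sketch of that idea (the pattern set and function name are assumptions for illustration, not Astrix's methodology): it flags text matching common credential formats such as AWS-style access key IDs and bearer tokens.

```python
# Minimal sketch of credential discovery for non-human identities (NHIs).
# The patterns below are illustrative; real scanners use far larger rule sets.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS access key ID shape
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-_.]{20,}"),  # OAuth-style bearer token
}

def find_nhi_credentials(text: str) -> list[str]:
    """Return the names of credential patterns found in the given text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

sample = "export AWS_KEY=AKIAABCDEFGHIJKLMNOP"
print(find_nhi_credentials(sample))  # ['aws_access_key']
```

Discovery is only the first step; the governance the article describes would then tie each found credential to an owner, a scope, and a rotation policy.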
Jonathan Sander, Field CTO of Astrix Security, said, "AI agents and the non-human identities that power them bring great potential but also new risks. Our focus is helping enterprises discover, secure, and deploy AI agents responsibly, with the confidence to scale. Through this partnership, we're providing clear, practical guidance to keep AI ecosystems safe so organisations can innovate with confidence."
API and application perspective
Cequence Security's work will leverage its expertise in enterprise application security and API protection, including the specific visibility and governance needs of agentic AI implementations. The company will contribute controls for managing the scope of what agents can access and do within organisational systems.
Ameya Talwalkar, CEO of Cequence Security, said, "As organisations embrace agentic AI, trust hinges on visibility, governance, and control over what those agents can see and do to your applications and data. Security is strongest through collaboration, and this partnership gives organisations clear guidance to adopt AI safely and securely."
Resources and alignment
The completed guidance is scheduled for public release in early 2026, with supporting resources developed jointly by the three organisations. These resources will include workshops and webinars to explain the frameworks and facilitate their implementation.
The partnership seeks to align enterprises, vendors, and security leaders around a shared vocabulary and set of procedures for the secure deployment and ongoing management of AI systems.