
Pangea unveils AI security tools to combat growing threats
Pangea has introduced a comprehensive suite of AI security tools designed to protect AI applications from threats such as prompt injection and data leaks.
The newly launched AI Guard and Prompt Guard products aim to secure AI, as part of Pangea's wider offerings that include AI Access Control and AI Visibility. This suite of tools is intended to address risks associated with the integration of large language models (LLMs) with user data and sensitive information.
"As companies race to build and deploy AI apps via RAG and agentic frameworks, integrating LLMs with users and sensitive data introduces substantial security risks," stated Oliver Friedrichs, CEO and Founder of Pangea. "New attacks surface daily, requiring countermeasures to be rolled out equally fast. As a proven and trusted partner in the cybersecurity industry, Pangea constantly identifies and responds to new generative AI threats before they can cause harm."
Kevin Mandia, Founder of Mandiant and Strategic Partner at Ballistic Ventures, also commented on the significance of these security measures. He remarked, "I've seen firsthand how vulnerabilities in computer systems can lead to damaging real-world impacts if left unchecked. AI's potential for autonomous action could amplify these consequences. Pangea's security guardrails draw from decades of cybersecurity expertise to deliver essential defenses for organisations building AI software."
Pangea AI Guard works to avert the leakage of sensitive data and blocks harmful content, employing over a dozen detection technologies to scrutinise AI interactions. These include the analysis of more than 50 types of confidential and personally identifiable information, with threat intelligence sourced from partners such as Crowdstrike, DomainTools, and ReversingLabs. The system is capable of redacting, blocking or disarming harmful content, and features a format preserving encryption capability that maintains database formats.
In addition, Pangea Prompt Guard examines user and system prompts to prevent jailbreak attempts and breaches of organisational limits. Through a meticulous defence-in-depth approach, it identifies prompt injection attacks using heuristics, classifiers, and custom-trained large language models, achieving over 99% efficacy in detecting techniques like token smuggling and alternate language attacks.
Grand Canyon Education has implemented Pangea's solutions to secure its internal AI chatbot platform. "What I love about Pangea is I can provide an API centric solution out of the box to developers that automatically redacts sensitive information at machine speed without any end user impact or user experience change," said Mike Manrod, Chief Information Security Officer at Grand Canyon Education. "If you try to put a fence around AI to block its use people will find workarounds, so instead we created a path of least resistance with Pangea to make secure AI software development an easy and obvious choice."
Karim Faris, General Partner at GV, noted the significance of Pangea's new offerings, particularly in addressing the OWASP Top Ten Risks for LLM Applications. "The team has taken a comprehensive approach to the OWASP Top Ten Risks for LLM Applications and has established expertise in security innovation, including the creation of SOAR. We are highly optimistic about Pangea's future," he said.
The launch of the "The Great AI Escape" Virtual Escape Room Challenge seeks to illustrate the complexities of generative AI security threats. This online competition includes three virtual escape rooms in which participants must use prompt engineering techniques to bypass AI room supervisor controls. Upholding its commitment to advancing AI security standards, Pangea has set a total prize of $10,000 for high-scoring contestants across the escape rooms.