SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Radware unveils LLM Firewall to combat generative AI threats

Wed, 19th Nov 2025

Radware has launched a new security solution aimed at protecting enterprises using generative AI applications. The product, named LLM Firewall, is designed to block security threats at the prompt level, before they reach large language models (LLMs).

Prompt-level defences

The LLM Firewall operates as an additional feature within Radware's existing cloud application protection services. By monitoring and controlling the data entered into AI prompts, it seeks to prevent common forms of attack, such as prompt injection and jailbreak attempts, that exploit AI models for malicious purposes. The firewall is described as model-agnostic, meaning it can be deployed across a variety of AI platforms without affecting ongoing services or development processes.
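Radware has not published implementation details, but the general idea of prompt-level screening can be illustrated with a minimal sketch. The patterns and function below are hypothetical, pattern-matching stand-ins for what a commercial product would do with far more sophisticated detection:

```python
import re

# Hypothetical phrasings that often signal prompt-injection or jailbreak
# attempts. A real firewall would use ML classifiers and broader rule sets.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"you are now in developer mode",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it should be blocked
    before it is ever forwarded to the LLM."""
    return not any(
        re.search(pattern, prompt, re.IGNORECASE)
        for pattern in INJECTION_PATTERNS
    )
```

A gateway in front of the model would call `screen_prompt` on every request and reject or log anything that fails, which is what "blocking threats at the prompt level" means in practice.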

Addressing data safety

One of the central purposes of the LLM Firewall is to help safeguard personal and sensitive data when using generative AI tools. The product is set up to recognise and block attempts to extract personally identifiable information before these requests can reach or be processed by the user's LLM. This function is intended to support organisations in meeting the requirements of global data protection regulations including GDPR and HIPAA, as well as internal policies on data handling.
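Again purely as illustration (not Radware's actual method), blocking or redacting personally identifiable information before a prompt reaches the model can be sketched with simple pattern matching. The pattern set here is a hypothetical minimum; production systems layer named-entity recognition, checksums such as Luhn validation, and contextual rules on top:

```python
import re

# Illustrative PII patterns only; real detectors are far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected PII with type tags so the sensitive values
    never reach or get processed by the LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt
```

Redacting rather than rejecting lets the request proceed while keeping regulated data (relevant to GDPR and HIPAA obligations) out of the model and any downstream logs.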

Mitigating regulatory risk

Generative AI adoption in business settings has frequently been slowed by concerns over regulatory compliance and the potential for sensitive information leaks. The new product is intended to directly address the leading risks and threats identified in the 2025 OWASP Top 10 Risks and Mitigations for LLMs and generative AI applications, which lists exposure of personal data and prompt-based attacks as key issues for enterprises deploying such technologies.

Industry need

Cybersecurity professionals have raised increasing concerns around the integration of AI, particularly where users interact directly with language models over web interfaces and APIs. Attacks using manipulated prompts can lead AI systems to reveal confidential information or behave outside intended limits, posing challenges for enterprises handling regulated or commercially sensitive data.

Enterprise integration

The LLM Firewall has been developed to allow for straightforward integration into existing AI platforms within corporate environments. Its real-time processing aims to ensure there is minimal disruption to established business workflows while offering an additional layer of protection for enterprises investing in generative AI capabilities.

"Many organizations are rightfully cautious about adopting AI, hesitating because of concerns about complex regulations, data safety and systems integrity," said Constance Stack, Chief Growth Officer, Radware. "Radware's new LLM Firewall is built around the premise that AI security must be enforced at the prompt in order to defend against prompt injection, jailbreaks, and resource abuse. Think of it as WAF for LLMs, but instead of guarding against HTTP-level exploits, it helps mitigate against natural language exploits specific to LLM behavior, and enhances protection for LLM models and integration, in real time."