SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Cybersecurity analyst ai red teaming prompt injection shield branching

Novee launches AI red teaming tool for LLM app risks

Thu, 26th Mar 2026

Novee has launched an autonomous AI red teaming product for large language model applications, aimed at finding security weaknesses in AI-driven software.

The product sits within the company's AI penetration testing platform and is designed to test applications that rely on large language models, including chatbots, copilots, autonomous agents and workflow tools. It targets risks that have emerged as businesses deploy more AI software across customer service, internal operations and development environments.

Those risks include prompt injection, jailbreak attempts, data exfiltration and efforts to manipulate the behaviour of software agents. Traditional penetration testing products and static security scanners were not built to detect many of these issues, as they were designed primarily for web applications and infrastructure.

Novee says its testing agent autonomously simulates attack scenarios and combines techniques in ways that resemble real-world attacks. Security teams can point the system at an AI-enabled application to see how it responds under adversarial conditions. The platform then generates a vulnerability assessment and remediation guidance.

New attack surface

The launch reflects a wider shift in cyber security as companies try to secure software that uses generative AI models. Unlike conventional applications, LLM-based tools can be influenced by natural language prompts, hidden instructions, manipulated context windows and interactions between multiple agents. That creates routes for misuse that differ from older software flaws.

The agent is intended to work across applications built on different model providers and architectures, including OpenAI, Anthropic and open-source systems. It can also connect to existing security testing processes and CI/CD pipelines, allowing teams to run tests as part of software development and release cycles.

Ido Geffen, Chief Executive Officer and Co-Founder of Novee, linked the launch to the speed at which attackers now move.

"I've spent twenty years on the offensive side of cyber, inside government operations, protecting critical infrastructure, and now building AI systems that think like real attackers," Geffen said.

"What we see consistently is that attackers compress timelines dramatically. The window between vulnerability and exploitation can shrink to minutes. Defending against that requires continuous testing, not periodic assessments."

Research input

Novee says its internal research team played a central role in building the product, using methods drawn from investigations into serious AI-related vulnerabilities. That work is fed back into the testing agent so it can adapt to techniques used to discover and exploit weaknesses in AI systems.

One example the company cited involved a disclosed vulnerability affecting Cursor, the AI coding tool. According to Novee, the flaw allowed attackers to influence the context window of a coding agent and gain full remote code execution on a developer workstation. It added that other findings are under responsible disclosure with other vendors.

That points to a growing concern for software makers and enterprise buyers alike: AI tools are now being embedded into coding assistants, business workflows and customer interfaces, often with access to sensitive data or system permissions. As a result, flaws in prompt handling or agent logic can create knock-on effects beyond the application itself.

Gon Chalamish, Co-Founder and CPO of Novee, said existing security practices have not kept pace with that shift.

"AI applications introduce an entirely new attack surface, but most organizations are still testing them with tools designed for web applications and infrastructure," Chalamish said.

"Attackers are already adapting their techniques for AI systems. Security teams need a way to test those systems the same way adversaries attack them."

Company backdrop

Novee was founded by Ido Geffen, Gon Chalamish and Omer Ninburg, whom the company describes as leaders in offensive security. It says it has raised USD $51.5 million within four months of its inception from investors including YL Ventures, Canaan Partners and Zeev Ventures.

The AI red teaming product is currently available in beta. It is designed to provide ongoing testing rather than periodic assessments, reflecting the shorter time frame between the discovery of a flaw and attempted exploitation that security researchers say now characterises attacks on AI systems.