Backslash Security unveils security risks in AI-generated code

Wed, 1st May 2024

Backslash Security, an application security company, has unveiled significant security implications of AI-generated code through a GPT-4 developer simulation exercise. The initiative, designed and carried out by the Backslash Research Team, used LLM-generated code to uncover potential security blind spots.

Gartner's data shows that 63% of organisations are either piloting or have begun deploying AI code assistants. The appeal of AI-generated code lies in its ease of use and its potential to accelerate the delivery of new code. That speed, however, comes with security risks and the potential to introduce vulnerabilities.

In probing the security pitfalls of AI-generated code, the Backslash Research Team devised a series of tests using GPT-4. These exercises identified key security blind spots stemming from AI-generated code's reliance on third-party open-source software (OSS). Because Large Language Models (LLMs) are trained on static datasets frozen at a cutoff date, they do not incorporate patches released after that point. As a result, the OSS package recommendations they produce can pin older versions carrying security vulnerabilities that have since been fixed in newer releases.
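To make the stale-recommendation risk concrete, an LLM-suggested package pin can be sanity-checked against public package data before it is trusted. The sketch below is not from Backslash's research: the `requests==2.25.1` pin is an illustrative assumption, and the check simply compares it against the latest release on PyPI and queries the public OSV vulnerability database for advisories affecting that exact version.

```python
# Sketch: validate an LLM-suggested OSS package pin before trusting it.
# The package name and version below are illustrative assumptions.
import json
import urllib.request

def latest_pypi_version(package: str) -> str:
    """Fetch the newest released version of a package from PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

def known_vulnerabilities(package: str, version: str) -> list:
    """Query the OSV database for advisories affecting this exact version."""
    query = json.dumps({
        "package": {"name": package, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Suppose the model suggested the stale pin requests==2.25.1.
suggested = ("requests", "2.25.1")
print("latest release:", latest_pypi_version(suggested[0]))
for vuln in known_vulnerabilities(*suggested):
    print("advisory:", vuln["id"], "-", vuln.get("summary", ""))
```

A pin that lags the latest release is not automatically vulnerable, but any advisory returned for the exact version is a clear signal that the model's recommendation should not be used as-is.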

Another concern is the 'phantom packages' that can appear in LLM-generated code: indirect OSS dependencies that developers are often unaware of, which can pull outdated, vulnerable packages into production code. Furthermore, inconsistency in GPT-4's recommendations, which sometimes suggest vulnerable package versions, can lull developers into treating AI-generated code as fail-safe, creating serious security risks.
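Phantom packages are, in effect, the transitive dependencies a developer never asked for. The minimal sketch below shows one way they can be surfaced in a Python environment where the direct dependency is already installed; it is an illustration of the concept, not Backslash's detection method, and it deliberately ignores version constraints and environment markers.

```python
# Sketch: surface the transitive ("phantom") dependencies hiding behind
# a single direct dependency in the current Python environment.
import re
from importlib.metadata import requires, PackageNotFoundError

def transitive_deps(package: str, seen: set | None = None) -> set:
    """Recursively collect the dependency names declared by a package."""
    seen = set() if seen is None else seen
    try:
        declared = requires(package) or []
    except PackageNotFoundError:
        return seen  # dependency not installed in this environment
    for spec in declared:
        # Extract the bare project name from a requirement string such as
        # "urllib3 (<3,>=1.21.1) ; python_version >= '3'".
        match = re.match(r"[A-Za-z0-9._\-]+", spec)
        if not match:
            continue
        name = match.group(0)
        if name not in seen:
            seen.add(name)
            transitive_deps(name, seen)
    return seen

# A developer who asked an assistant for "requests" also pulls in all of these:
for dep in sorted(transitive_deps("requests")):
    print(dep)
```

Each name this prints is a package the developer never explicitly chose, and each one could be checked against a vulnerability database in the same way as a direct dependency.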

Scrutiny of these issues is becoming increasingly urgent as AI plays a growing role in producing code, and Backslash Security has built its platform to address them. Core capabilities targeting the OSS risks of AI-generated code include in-depth reachability analysis, which lets AppSec and product security teams identify and prioritise the threats that are actually exploitable, and the ability to detect and assess the risk level of 'phantom packages.'
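Reachability analysis asks not just whether a vulnerable package is present, but whether the vulnerable code is ever actually invoked. Backslash's implementation is proprietary; the toy sketch below, built around a hypothetical `badlib.unsafe_load` advisory, illustrates the core idea using Python's `ast` module.

```python
# Toy illustration of reachability: flag a known-vulnerable function only
# if the application actually imports and calls it. The module and function
# names in VULNERABLE are hypothetical advisory data.
import ast

VULNERABLE = {("badlib", "unsafe_load")}

def reachable_vulns(source: str) -> list[str]:
    """Report call sites in `source` that hit a known-vulnerable function."""
    tree = ast.parse(source)
    imported = {}  # local alias -> (module, function)
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module:
            for alias in node.names:
                imported[alias.asname or alias.name] = (node.module, alias.name)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            target = imported.get(node.func.id)
            if target in VULNERABLE:
                hits.append(f"line {node.lineno}: {'.'.join(target)}")
    return hits

app = "from badlib import unsafe_load\nconfig = unsafe_load(open('c.yml'))\n"
print(reachable_vulns(app))  # -> ["line 2: badlib.unsafe_load"]
```

The payoff of this kind of analysis is prioritisation: a vulnerable package that is installed but never called can be queued for routine upgrade, while one whose vulnerable function sits on an actual code path demands immediate attention.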

Shahar Man, Co-Founder and CEO of Backslash Security, stresses the importance of adapting security measures in response to evolving code creation methods. He acknowledges that while AI-generated code opens up many possibilities, it introduces new security challenges at a larger scale. Shahar explains, "Our research shows that securing open source code is more critical than ever before due to product security issues being introduced by AI-generated code that is associated with OSS."

Backslash Security's research illuminates the security implications of AI-generated code, particularly its reliance on open-source software and the risks of outdated or phantom packages. As organisations increasingly embrace AI in code development, addressing these security challenges becomes paramount. Backslash Security's platform offers essential capabilities to mitigate these risks, emphasising the importance of adapting security measures to safeguard against evolving threats in application security.
