SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

JFrog exposes vulnerabilities in machine learning platforms

Wed, 21st Aug 2024

JFrog has unveiled what it describes as the first-ever identified vulnerabilities in machine learning platforms, calling attention to the security risks associated with MLOps pipelines. The findings appear in a newly published research paper entitled "From MLOps to MLOops: Exposing the Attack Surface of Machine Learning Platforms". Shachar Menashe, Senior Director of Security at JFrog and lead researcher, outlined the inherent and implementation vulnerabilities his team exposed across multiple platforms, describing some of them as likely to score very high on critical severity indices.

In the report, JFrog's team highlighted a series of vulnerabilities, including how several commonly used MLOps platforms expose "Remote Code Execution" (RCE) capabilities without adequate authentication or role-based access controls, compromising security. Menashe noted, "Our research has uncovered weaknesses in MLOps implementations which, if exploited, can lead to serious security breaches including irreversible malicious code execution."

Furthermore, the research shows that even seemingly secure formats, such as the popular MLflow Recipe, can serve as vectors for hidden malicious code. Attackers can abuse these formats to achieve concealed code execution in a victim's environment, posing substantial risks to developers who trust the frameworks indiscriminately. Menashe explained, "It's frightening how some of these formats appear benign on the surface but are potential Trojan horses, embedding malicious code that can disrupt entire systems."
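The report does not publish exploit code, but the general class of issue is easy to sketch. Python's pickle serialisation, widely used for ML model artifacts, will invoke an attacker-chosen callable during deserialisation; a minimal, harmless illustration follows (the class name and printed marker are hypothetical examples, not taken from JFrog's research):

```python
import pickle

class MaliciousArtifact:
    """Illustrative only: a 'model' that carries code instead of data."""
    def __reduce__(self):
        # On unpickling, this callable runs instead of restoring state.
        # Here it merely prints a marker, but it could run any command.
        return (print, ("code executed during model load",))

# The serialized bytes look like an ordinary model artifact...
payload = pickle.dumps(MaliciousArtifact())

# ...but a consumer that "just loads the model" triggers the embedded call.
pickle.loads(payload)
```

This is why security guidance treats loading untrusted serialized models as equivalent to running untrusted code.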

An additional finding reveals that default settings within these platforms often lead systems to trust optional remote code, which might include ransomware. Inexperienced developers may not recognise this risk in time, endangering their whole operation. Menashe compared the current state of MLOps security to that of Python in its early days, suggesting that it is still largely in an exploratory phase, with vulnerabilities that require detailed analysis and oversight to uncover. "MLOps security is very much in its infancy, similar to how Python was when it first began to gain traction; there's still a lot we're yet to learn about these vulnerabilities," Menashe added.
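The safer pattern is for loaders to fail closed: refuse any code bundled with an artifact unless the caller opts in explicitly. A minimal sketch of such a guard, assuming a hypothetical `load_model` helper and `trust_remote_code` flag (echoing the opt-in switches some ML libraries expose, not code from any platform named in the report):

```python
def load_model(artifact: dict, trust_remote_code: bool = False):
    """Hypothetical loader: refuses to execute code shipped with a
    model artifact unless the caller opts in explicitly."""
    if artifact.get("custom_code") and not trust_remote_code:
        # Fail closed: a secure default never runs bundled code silently.
        raise PermissionError(
            "artifact bundles custom code; "
            "pass trust_remote_code=True to allow it"
        )
    # An insecure default (trust_remote_code=True) would silently
    # execute artifact["custom_code"] at this point.
    return artifact.get("weights")

# A plain artifact loads; one carrying code is rejected by default.
print(load_model({"weights": [0.1, 0.2]}))
```

The vulnerable platforms described in the report invert this: the equivalent of `trust_remote_code` defaults to on, so developers opt *out* of the risk rather than into it.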

Menashe argued that while artificial intelligence and machine learning capabilities bring significant advantages to modern software solutions, they also call for heightened awareness and improved security measures. He cautioned that many developers, data scientists, and security professionals are navigating uncharted waters with AI and ML, often unaware of the full extent of potential vulnerabilities. "Most of the community involved in developing AI and ML solutions 'don't know what they don't know' about the security threats inherent within these technologies," Menashe remarked.
