Software supply chain company JFrog revealed on Monday that it had discovered 22 software vulnerabilities across 15 machine learning-related open-source software projects. The results, presented in JFrog’s latest ML Bug Bonanza blog, shed light on the security challenges organizations face as they accelerate AI and ML adoption and highlight the need for more robust protections.
The blog post showcases the ten most severe server-side vulnerabilities and the techniques attackers are using to exploit them. According to the blog, those vulnerabilities would allow attackers to:
- Hijack ML models remotely
- Elevate ZenML Cloud privileges without authorization
- Infect Model ML clients
- Hijack ML database frameworks remotely
- Conduct prompt injection code execution on the Vanna.AI platform
- Exfiltrate and manipulate databases
- Hijack ML pipelines remotely
“These vulnerabilities allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines,” JFrog researchers said. “Exploitation of some of these vulnerabilities can have a big impact on the organization — especially given the inherent post-exploitation vectors present in ML such as backdooring models to be consumed by multiple clients.”
According to JFrog, the disconnect between ML development and traditional application security (AppSec) practices has contributed to these vulnerabilities. When ML developers fail to consider established AppSec practices, organizations lack the oversight necessary to eradicate vulnerabilities before ML models go live.
Another of JFrog’s recent studies supports this claim, suggesting that although organizations are aware of the security issues associated with AI models, they lack the ability to fix them. 57% of organizations say that the lack of integration between AI/ML security and existing security programs leaves potential blind spots. As such, only 39% feel confident in their ability to secure AI/ML models.
Organizations are particularly concerned about data exposure from large language models (58%), malicious code embedded within AI models (49%), and AI bias affecting decision-making processes (41%) due to inadequate ML security.
These findings highlight the need to align ML development with traditional AppSec practices. While AI and ML can offer enormous benefits for organizations, it’s crucial not to prioritize rapid development over security. Doing so could compromise ML models and put organizations at risk.
The opinions expressed in this post belongs to the individual contributors and do not necessarily reflect the views of Information Security Buzz.