JFrog, the liquid software company and creator of the JFrog Software Supply Chain Platform, has debuted JFrog ML, an MLOps solution within the JFrog Platform designed to enable development teams, data scientists, and ML engineers to quickly develop and deploy enterprise-ready AI applications at scale.
As enterprise AI initiatives face growing security, scalability, and management challenges, JFrog says its platform is now the only one that drives the secure delivery of machine learning technologies alongside all other application components in a single solution. JFrog ML is the first addition to the platform resulting from the QWAK.ai acquisition in 2024, the company said.
By uniting machine learning (ML) practices with traditional DevSecOps development processes, organizations can help ensure their models are seamlessly deployed, secured, and maintained, which is expected to enhance model performance and dependability in real-world, production applications.
The delivery of JFrog ML is an outgrowth of JFrog’s commitment to addressing the demand for more scalable, secure AI application delivery, and includes integrations with Hugging Face, AWS SageMaker, MLflow (developed by Databricks), and NVIDIA NIM.
Alon Lev, VP & GM of MLOps at JFrog, says that as the demand for AI-powered applications continues to grow rapidly, so do concerns around the ability to control and manage this new domain on all fronts, from MLOps to ML security. “In fact, our own team of security researchers were the first to find and help remediate new, zero-day malicious ML models in Hugging Face. JFrog ML combines a superior, straightforward, and hassle-free user experience for bringing models to production with the level of trust and provenance enterprises expect from JFrog, allowing customers to accelerate their AI initiatives with confidence.”
Developing ML models and making them production-ready is an extremely complex process, demanding a blend of technical expertise and a deep understanding of software delivery. Models require careful planning and testing to ensure reliability and efficiency in a live environment. Additionally, data scientists building models don’t work in isolation: they need data engineers to structure and prepare data, software engineers to deploy models as microservices, and DevSecOps teams to ensure smooth and secure integration into production.
JFrog ML helps overcome these often-crippling challenges with a structured framework designed to support the entire organization and ensure that models successfully get promoted out of experimental stages, the company explains.
“Building and maintaining robust ML workflows requires a complex infrastructure, from feature engineering to model deployment and monitoring. JFrog ML is designed to enable these capabilities by utilizing JFrog Artifactory as the model registry of choice and JFrog Xray for scanning and securing ML models, making it possible to enhance user efficiency by providing a unified platform experience for DevOps, DevSecOps, and MLOps,” added Yuval Fernbach, VP & CTO, JFrog ML. “As AI evolves, organizations can leverage JFrog ML to continuously adapt their infrastructure to support everything from traditional ML models to cutting-edge GenAI applications.”
By treating ML models as software packages from the start of development and converging ML model management and software development into a single source of truth, organizations can significantly reduce friction and errors between stages and teams. JFrog ML delivers AI development and deployment with full traceability, governance, and security, says JFrog.
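As a rough illustration of the “models as software packages” idea, the sketch below publishes a serialized model to an artifact repository the same way a build job would publish any other package. The host, repository name, model file, and token variable are all hypothetical, and the snippet is not taken from JFrog ML itself; it simply shows a generic Artifactory-style HTTP deploy pattern.

```python
"""Minimal sketch: publishing a trained model as a versioned artifact.

Assumptions (not from the article): an Artifactory-style repository at
ARTIFACTORY_URL, a repository named "ml-models-local", and an access token
in the JF_ACCESS_TOKEN environment variable. All names are illustrative.
"""
import hashlib
import os
import pathlib

import requests

ARTIFACTORY_URL = "https://example.jfrog.io/artifactory"  # hypothetical host
REPO = "ml-models-local"                                   # hypothetical repo
MODEL_FILE = pathlib.Path("churn_model.onnx")              # hypothetical model
VERSION = "1.4.2"


def publish_model(path: pathlib.Path, version: str) -> None:
    """Upload the serialized model like any other versioned package."""
    data = path.read_bytes()
    target = f"{ARTIFACTORY_URL}/{REPO}/churn-model/{version}/{path.name}"
    headers = {
        "Authorization": f"Bearer {os.environ['JF_ACCESS_TOKEN']}",
        # A checksum header lets the server verify the upload's integrity.
        "X-Checksum-Sha256": hashlib.sha256(data).hexdigest(),
    }
    resp = requests.put(target, data=data, headers=headers, timeout=60)
    resp.raise_for_status()
    print(f"Published {path.name} as version {version}")


if __name__ == "__main__":
    publish_model(MODEL_FILE, VERSION)
```

In a real pipeline, a step like this would typically run in CI after tests pass, so each model version stays traceable alongside the build that produced it.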
Key features include:
- A unified DevOps, DevSecOps and MLSecOps platform: JFrog ML, as part of the JFrog Platform, provides a holistic view of the entire software supply chain, from traditional software packages to LLMs and GenAI, streamlining AI pipelines and ensuring models are securely managed alongside other software artifacts.
- Secured ML models: Enables AI innovation while keeping companies secure; JFrog describes it as the only platform providing off-the-shelf, enterprise-grade security scanning for malicious or vulnerable models, whether generated in-house or brought in from open source (see the sketch after this list).
- A single AI system of record: Part of the JFrog Software Supply Chain Platform, JFrog ML manages ML models and datasets alongside other building blocks such as containers and Python packages, creating one place to enforce customizable security and compliance policies throughout the AI development process.
- Intuitive model serving to production: JFrog ML supercharges AI initiatives with simplified model development and deployment processes, helping data science and ML engineering teams accelerate model serving in production while dramatically improving security and simplifying model governance, rollback, and redeployment.
- Model training and quality monitoring: Complete dataset management and feature store support.
- Trusted ML environment: JFrog ML creates a reproducible artifact of every model built with the JFrog Platform, allowing for security scans and automated quality checks to ensure your models have been as rigorously vetted as your other software components.
- Support for NVIDIA NIM enterprise-grade AI models: The JFrog ML catalog will also include NIM-based models as part of its model library, allowing for one-click deployment.
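To make the scanning idea in the “Secured ML models” bullet concrete, here is a minimal, hypothetical sketch of gating an open source model before it is promoted: the model files are pulled locally with the huggingface_hub client and then handed to a scanner. Invoking the JFrog CLI’s `jf scan` command here is an assumption about how such a check might be wired up, not a description of JFrog ML’s actual workflow; substitute whichever scanner your pipeline actually uses.

```python
"""Minimal sketch: vetting an open source model before promotion.

Assumptions (not from the article): the huggingface_hub client is installed,
and a configured JFrog CLI is on PATH so its on-demand scan command can be
invoked. Model names and commands are illustrative only.
"""
import subprocess

from huggingface_hub import snapshot_download


def fetch_and_scan(repo_id: str) -> None:
    # Pull the model files locally, just as a build step would.
    local_dir = snapshot_download(repo_id=repo_id)

    # Hand the downloaded files to a scanner before anything is deployed.
    # "jf scan" is assumed here; swap in the scanner your pipeline uses.
    result = subprocess.run(["jf", "scan", local_dir], check=False)
    if result.returncode != 0:
        raise SystemExit(f"Scan flagged issues for {repo_id}; blocking promotion.")


if __name__ == "__main__":
    fetch_and_scan("distilbert-base-uncased")  # example public model
```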
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.