Microsoft has amended a recent civil lawsuit to name key developers of malicious tools designed to bypass AI safeguards, including those built into its Azure OpenAI Service.
The legal action targets four individuals—Arian Yadegarnia (Iran), Alan Krysiak (UK), Ricky Yuen (Hong Kong), and Phát Phùng Tấn (Vietnam)—who are part of a global cybercrime group, Storm-2139.
These actors exploited stolen credentials to access AI services, modify their capabilities, and resell access to malicious actors, enabling the creation of harmful content such as non-consensual intimate images.
Generating Illicit Content
Storm-2139 operates through three tiers: creators develop illicit tools, providers distribute them, and users generate violating content.
In December 2024, Microsoft’s Digital Crimes Unit (DCU) filed a lawsuit in Virginia against ten unidentified individuals. This led to the identification of several actors, including two in the U.S. Microsoft is preparing criminal referrals to law enforcement.
Following Microsoft's seizure of a website central to the group's operation and the broader legal action, Storm-2139 members reacted with infighting and speculation about one another's identities. Some members also doxed Microsoft's legal team, posting their personal information online, which led to attempted harassment.
Microsoft says it remains committed to preventing AI abuse. The company has reinforced AI safeguards, published policy recommendations for law enforcement, and outlined measures to combat intimate image abuse.
While cybercriminal disruptions take time, Microsoft says its actions aim to deter future AI misuse by publicly identifying and dismantling these operations. “With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated.”
Why Not Pay Legitimate Sources?
LLMJacking refers to a situation where a threat actor abuses stolen API access to GenAI services by selling that access to third parties, explains Elad Luz, Head of Research at Oasis Security. "One might wonder why these third parties don't simply pay legitimate sources for their GenAI API access. Surprisingly, the reason is usually not related to competitive pricing."
Luz says the third parties purchasing these GenAI services from bad actors often violate the terms of service, engaging in activities such as “AI Girlfriend” chats (erotic conversations), generating pornographic images, or producing harmful content. “This kind of content would typically be prohibited or would raise concerns with the service provider regarding the legitimate use of the organization registered for the API.”
When threat actors use stolen API access, Luz adds, their activities often go unnoticed because they represent a small fraction of the overall API usage, especially in comparison to the legitimate, high-volume use from the registered entities. These groups are essentially “a drop in the sea”—their usage is insignificant in the grand scheme of things. “Legitimate organizations are also making use of the API, so the overall activity from legitimate sources can mask the suspicious or illegal usage.”
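To make Luz's point concrete, the sketch below shows how a per-key baseline can surface this kind of "drop in the sea" abuse: a stolen key's burst of traffic is invisible in aggregate volume but stands out sharply against that key's own history. The log schema, key names, and threshold are illustrative assumptions, not drawn from any particular product.

```python
# A minimal sketch of per-key usage baselining. The record fields
# (api_key_id, hour, tokens) and the z-score threshold are illustrative.
from collections import defaultdict
from statistics import mean, pstdev

def flag_anomalous_keys(records, z_threshold=3.0):
    """Flag API keys whose hourly token usage spikes far above their own baseline."""
    usage = defaultdict(lambda: defaultdict(int))  # key_id -> hour -> tokens
    for rec in records:
        usage[rec["api_key_id"]][rec["hour"]] += rec["tokens"]

    flagged = []
    for key_id, by_hour in usage.items():
        series = list(by_hour.values())
        if len(series) < 2:
            continue
        mu, sigma = mean(series), pstdev(series)
        if sigma == 0:
            continue  # perfectly steady usage; nothing to flag
        worst = max(series)
        if (worst - mu) / sigma > z_threshold:
            flagged.append((key_id, worst, round(mu, 1)))
    return flagged

# Illustrative data: the stolen key's burst is small next to the tenant's
# aggregate traffic, but enormous relative to that key's own history.
logs = (
    [{"api_key_id": "prod-app", "hour": h, "tokens": 50_000} for h in range(24)]
    + [{"api_key_id": "ci-bot", "hour": h, "tokens": 200} for h in range(23)]
    + [{"api_key_id": "ci-bot", "hour": 23, "tokens": 40_000}]  # sudden spike
)
print(flag_anomalous_keys(logs))  # -> [('ci-bot', 40000, ...)]
```

Baselining per key, rather than per tenant, is what keeps legitimate high-volume traffic from masking a compromised low-volume credential.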
Adjust Safety Settings
Additionally, legitimate businesses registering for the service can adjust safety settings and filters to lift certain restrictions on the LLM (such as those related to harassment, hate speech, or explicit content). "This is true for the Microsoft OpenAI API as well, where bypassing these safeguards requires submitting a form and undergoing a review of both the organization and its intended use. This additional verification step makes it harder for threat actors to create accounts with the sole purpose of abusing the system," says Luz.
As a result, certain access keys become especially sought after, particularly those tied to organizations that have adjusted their filters to be more permissive. Non-legitimate groups are willing to pay a premium for this type of access.
In an era where AI safety is a high priority, Microsoft is taking action: it has tracked down the threat actors abusing stolen LLM access and is pursuing legal action against them.
Given this growing threat, Luz says it is crucial for businesses to invest in robust non-human identity security solutions. “Organizations must proactively secure service accounts, service principals, API keys, and other non-human identities that could serve as entry points for these types of attacks. As AI continues to play a larger role in our systems, ensuring the integrity and security of these non-human identities is essential to mitigating the risks posed by increasingly sophisticated threat actors.”
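As one way to follow Luz's advice, the sketch below loads a GenAI API key from a secrets vault at runtime rather than embedding it in code or config. It uses Azure Key Vault as the illustrative store via the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders, not real values.

```python
# A minimal sketch of vaulting a GenAI API key, assuming the
# azure-identity and azure-keyvault-secrets packages are installed.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves a managed identity in production and a
# developer login locally, so no human-readable secret ships with the app.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=credential,
)

# Fetch the key at runtime; granting the app's identity get-only access
# to this one secret keeps the non-human identity least-privilege.
api_key = client.get_secret("genai-api-key").value  # placeholder secret name
```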
Inventive Ways to Exploit AI
Patrick Tiquet, Vice President, Security & Architecture at Keeper Security, says that as AI solutions become increasingly integrated into business operations, bad actors are finding inventive new ways to exploit them.
“LLMJacking essentially hijacks a victim’s large language model (LLM) using stolen credentials and is a stark reminder that AI services are only as secure as the credentials and access controls protecting them. Storm-2139’s exploitation of exposed API keys to hijack GenAI services underscores the need for robust credential hygiene and continuous monitoring. Attackers not only resold unauthorized access but actively manipulated AI models to generate harmful content, bypassing built-in safety mechanisms.”
Tiquet says entities must recognize that generative AI platforms are valuable targets for malefactors, and security teams must enforce least-privilege access, implement strong authentication, and securely store API keys in a digital vault to prevent misuse.
“Regularly rotating credentials and monitoring AI-related activity for anomalies are critical defense measures, while automated threat detection can help identify unauthorized access before it escalates,” he adds.
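A rotation policy like the one Tiquet describes can be enforced with a simple age check. The sketch below, again using Azure Key Vault as an illustrative store, flags any secret that has not been re-set within a 90-day window; the vault URL and the 90-day threshold are assumptions for the example.

```python
# A minimal sketch of a credential-age check, assuming API keys are stored
# as Azure Key Vault secrets. Vault URL and 90-day policy are illustrative.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

MAX_AGE = timedelta(days=90)  # example rotation policy

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

now = datetime.now(timezone.utc)
for props in client.list_properties_of_secrets():
    last_set = props.updated_on or props.created_on
    if last_set and now - last_set > MAX_AGE:
        # In practice this would open a ticket or trigger rotation,
        # not just print a warning.
        print(f"ROTATE: '{props.name}' last set {last_set:%Y-%m-%d}")
```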
Information Security Buzz News Editor
Kirsten Doyle has been in the technology journalism and editing space for nearly 24 years, during which time she has developed a great love for all aspects of technology, as well as words themselves. Her experience spans B2B tech, with a lot of focus on cybersecurity, cloud, enterprise, digital transformation, and data centre. Her specialties are in news, thought leadership, features, white papers, and PR writing, and she is an experienced editor for both print and online publications.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.