Microsoft has initiated legal proceedings against a group it accuses of deliberately creating and deploying tools designed to bypass security measures in its cloud-based AI products.
In a lawsuit filed in December with the U.S. District Court for the Eastern District of Virginia, Microsoft alleges that a group of ten anonymous individuals developed software to exploit stolen credentials and gain unauthorized access to the Azure OpenAI Service. This fully managed platform incorporates technology from OpenAI, the company behind ChatGPT.
The complaint identifies the defendants only by the legal placeholder “Does” and accuses them of violating several federal laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering statutes. Microsoft claims the group used its software and servers without authorization to generate “offensive” and “illicit” content, though it has not provided specific examples of this material.
The tech giant is seeking an injunction, financial damages, and other legal remedies.
The complaint states that Microsoft uncovered the unauthorized activity in July 2024, discovering that credentials—specifically API keys—issued to customers of the Azure OpenAI Service were being exploited to create content that violated the platform’s acceptable use policy. An internal investigation revealed that these API keys had been stolen from paying customers.
“While the exact method by which the defendants obtained the API keys remains unclear, it appears they engaged in a systematic scheme to steal credentials from multiple Microsoft customers,” Microsoft wrote in its filing.
According to Microsoft, the defendants used these stolen API keys to operate a “hacking-as-a-service” platform. Central to this operation was a tool called “de3u,” which facilitated unauthorized access to the Azure OpenAI Service, enabling users to generate images via OpenAI’s DALL-E model without programming knowledge. Additionally, the software allegedly attempted to bypass Microsoft’s content filtering system by manipulating prompts.
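To see why stolen API keys were sufficient for this kind of abuse, it helps to look at how requests to the Azure OpenAI Service are typically authenticated: possession of a valid key is, by itself, proof of identity. The sketch below is illustrative only, not Microsoft's or the defendants' actual code; the resource name, deployment name, and API version are hypothetical placeholders.

```python
# Illustrative sketch: how an Azure OpenAI image-generation request is
# authenticated. A bearer of a valid "api-key" header is treated as the
# customer, which is why exfiltrated keys grant full access.
# RESOURCE, DEPLOYMENT, and the api-version value are hypothetical.
import json
import urllib.request

RESOURCE = "example-resource"    # hypothetical Azure resource name
DEPLOYMENT = "dall-e-3"          # hypothetical model deployment name
API_KEY = "<customer-api-key>"   # the credential the suit says was stolen

def build_image_request(prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an authenticated generation request."""
    url = (f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
           f"{DEPLOYMENT}/images/generations?api-version=2024-02-01")
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "api-key": API_KEY,   # the sole credential checked per request
        },
        method="POST",
    )

req = build_image_request("a watercolor of a lighthouse")
```

A front end like the alleged “de3u” tool would only need to wrap such requests behind a point-and-click interface, substituting stolen keys for the caller's own, for users to generate images with no programming knowledge.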
The de3u project code was previously hosted on GitHub, a subsidiary of Microsoft, but is no longer available.
“These actions allowed the defendants to reverse-engineer methods to circumvent Microsoft’s content moderation systems,” Microsoft claims. “They intentionally accessed protected systems without authorization, causing significant damage and loss as a result.”
In a recent blog post, Microsoft announced that the court had authorized the seizure of a website allegedly critical to the defendants’ operations. This measure is intended to aid the company in collecting evidence, understanding the monetization of these illicit services, and dismantling related technical infrastructure.
Microsoft also noted that it has implemented undisclosed countermeasures and enhanced security protocols within the Azure OpenAI Service to address the activity it identified.