A concerning security flaw has been identified in OpenAI’s ChatGPT API, allowing malicious actors to execute Reflective Distributed Denial of Service (DDoS) attacks on arbitrary websites. This vulnerability, assigned a high-severity CVSS score of 8.6, stems from improper handling of HTTP POST requests to the endpoint https://chatgpt.com/backend-api/attributions.
A Reflection Denial of Service attack leverages a potentially legitimate third-party component to redirect attack traffic toward a targeted victim.
The API lets users submit a list of hyperlinks via the urls parameter. However, due to poor input validation, the API neither checks for duplicate hyperlinks nor enforces a limit on the number of links submitted, enabling attackers to transmit thousands of hyperlinks in a single request.
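Proper server-side validation would close this gap before any outbound request is made. The sketch below is hypothetical (OpenAI’s actual backend code is not public; the `MAX_URLS` cap and `sanitize_urls` helper are illustrative names), showing how deduplication and a hard limit on the urls parameter could reject such requests:

```python
MAX_URLS = 10  # hypothetical cap; the vulnerable API enforces no limit


def sanitize_urls(urls):
    """Deduplicate submitted hyperlinks (preserving order), then enforce a cap.

    Raises ValueError if the request still contains too many distinct URLs.
    """
    # dict.fromkeys() preserves insertion order (Python 3.7+) while dropping duplicates
    deduped = list(dict.fromkeys(urls))
    if len(deduped) > MAX_URLS:
        raise ValueError(f"too many URLs: {len(deduped)} > {MAX_URLS}")
    return deduped
```

With validation like this in place, a request carrying thousands of copies of the same target URL collapses to a single entry, and an oversized list is rejected outright instead of fanning out into outbound traffic.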
Upon receiving the request, OpenAI’s servers, hosted on Microsoft Azure, initiate one HTTP request for each hyperlink, resulting in a massive number of simultaneous requests to the specified target website, overwhelming its resources and potentially causing downtime.
The ChatGPT crawler, operating across multiple Azure IP ranges, makes the issue worse by not limiting duplicate requests or the number of connections to the same domain.
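A crawler that limited its per-domain fan-out would blunt the amplification even if duplicate URLs slipped through. This is a hypothetical mitigation sketch (the `throttle_by_domain` helper and the per-domain budget are assumptions, not OpenAI’s implementation), grouping fetch targets by host and discarding requests beyond a budget:

```python
from collections import Counter
from urllib.parse import urlparse

MAX_PER_DOMAIN = 2  # hypothetical per-domain request budget


def throttle_by_domain(urls, max_per_domain=MAX_PER_DOMAIN):
    """Drop fetches beyond a per-domain budget so a single target
    cannot absorb the crawler's entire batch of requests."""
    seen = Counter()
    allowed = []
    for url in urls:
        host = urlparse(url).netloc
        if seen[host] < max_per_domain:
            seen[host] += 1
            allowed.append(url)
    return allowed
```

Under such a budget, a submission listing one victim domain thousands of times would generate at most a handful of connections to it, removing the reflection value of the endpoint.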
Amplification Potential
This flaw provides a substantial amplification factor for DDoS attacks, as attackers can exploit OpenAI’s infrastructure to target third-party websites. The defect impacts availability but does not affect the confidentiality or integrity of data.
A proof-of-concept script highlights the vulnerability by sending 50 HTTP requests from OpenAI servers to a test domain. Log files from the target server revealed simultaneous connection attempts from various Azure-based IP addresses, illustrating the potential damage this flaw could cause in real-world scenarios.
Disclosure Challenges
The vulnerability was discovered in early January 2025 and disclosed responsibly to OpenAI and Microsoft. However, multiple attempts to report the issue through official channels, including BugCrowd, email, and security contact forms, have yielded no meaningful response.
OpenAI’s security team and Microsoft’s Azure operations team have not acknowledged the defect, and no mitigation steps have been announced as of 10 January 2025.
A “Staggering” Potential for Financial Harm
“ChatGPT crawlers initiated via chatbots pose significant risks to businesses, including damage to reputation, data exploitation, and resource depletion through attacks such as DDoS and Denial-of-Wallet,” comments Elad Schulman, CEO and founder of GenAI security company Lasso.
Bad actors targeting GenAI chatbots can exploit them to drain a victim’s financial resources, particularly in the absence of necessary protections, he adds. By leveraging these techniques, malicious actors can easily exhaust a monthly budget for an LLM-based chatbot in a single day.
“For B2C enterprises, the potential financial harm at scale is staggering. With the rapid adoption and exponential growth of chatbot usage, it is crucial for companies to strengthen their cybersecurity measures when it comes to GenAI and implement the necessary guardrails to mitigate these risks and other emerging threats effectively,” Schulman concludes.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.