A glitch in open-source software used by OpenAI's widely used language model, ChatGPT, has led to a significant data leak. OpenAI confirmed that the bug caused ChatGPT to inadvertently expose paid users' payment details along with other users' conversation histories.
On March 20th, users attempting to subscribe to the paid service ChatGPT Plus encountered a problem in which email addresses of unrelated users surfaced in the payment form. It has since been discovered that the data leak was larger than first thought and compromised more data from premium subscribers.
What Really Happened?
According to the company's statement, a bug in the open-source library redis-py caused a caching problem that potentially allowed certain active users to view the last four digits and expiration date of another user's credit card, in addition to that user's name, email address, and payment address. Some users may also have seen portions of other users' chat histories.
Notably, caching issues causing users to view each other's data aren't a new phenomenon. For instance, on Christmas Day in 2015, Steam users encountered pages displaying other users' account information. It's ironic that OpenAI, which invests significant effort into exploring the security and safety implications of its AI, was impacted by a widely recognized class of security flaw.
The company estimates that the payment information leak may have affected approximately 1.2 percent of ChatGPT Plus users who utilized the service on March 20 between 4 AM and 1 PM EST.
What Was Exposed?
During the affected period, the bug sent some subscription confirmation emails to unintended users. The company confirmed that prior to the service disruption on Monday, certain active users may have viewed another user’s first and last name, email address, and payment address. It was highlighted that only the last four digits of their credit card number and the expiration date were disclosed.
Apologizing for the breach, the company announced that the bug had been fixed, and the chat service and chat history could be restored. The company revealed that the same bug could have caused the exposure of payment-related information for 1.2 percent of the ChatGPT Plus subscribers, who were active during a specific nine-hour window. This premium service offers GPT-4-grade responses.
The company believes that the number of users whose data was revealed to someone else is minimal. It has notified the affected users about the potential exposure of their payment information. It stated that users’ data is not at ongoing risk, and there is no need for further concern.
Incident Impacted Users Actively Using The App At The Time
OpenAI, the artificial intelligence research lab, recently reported that it accidentally exposed some users' payment information and chat history. The company explained that two scenarios could have led to this unauthorized exposure of user data. First, a ChatGPT Plus user could come across the payment information of another user who was actively using the service at the same time.
Second, some subscription confirmation emails were mistakenly sent to the wrong person, including the last four digits of a user's credit card number. OpenAI said that these scenarios could have occurred before March 20th, but it has no confirmation that they did. The company has since contacted users who may have been affected.
The reason behind this exposure was caching. OpenAI used Redis software for caching user information, and a canceled Redis request caused corrupted data to be returned for a different request.
Under normal circumstances, the app would identify the data as wrong and throw an error, but if the other person happened to be requesting the same type of data, the app would serve it to them instead. This is why users were seeing other users' payment information and chat history.
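The mechanism described above can be illustrated with a small, self-contained sketch. The classes and names below are hypothetical, not OpenAI's or redis-py's actual code; the sketch only demonstrates the general failure mode in which a request canceled mid-flight leaves its response unread on a shared connection, so the next request on that connection reads the wrong reply.

```python
from collections import deque

class FakeConnection:
    """Simulates one pooled connection to a cache server.
    The server answers requests in the order they were sent."""
    def __init__(self):
        self._responses = deque()

    def send(self, key):
        # The server eventually writes a reply for `key` onto the wire.
        self._responses.append(f"data-for-{key}")

    def read(self):
        # Reads the *next* reply on the wire, whatever request it answers.
        return self._responses.popleft()

def request(conn, key, cancelled=False):
    conn.send(key)
    if cancelled:
        # Canceled after sending but before reading: the reply is
        # left sitting unread on the shared connection.
        return None
    return conn.read()

conn = FakeConnection()
request(conn, "alice:billing", cancelled=True)  # canceled mid-flight
leaked = request(conn, "bob:billing")           # reuses the same connection
print(leaked)  # prints "data-for-alice:billing" — Bob gets Alice's data
```

Bob's request reads the stale reply queued by Alice's canceled request, which is the shape of mismatch the article describes.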
On March 20th, OpenAI made a change to its server that inadvertently caused a spike in canceled Redis requests, and with it a spike in unauthorized data exposure. OpenAI stated that the bug, which appeared in a specific version of the redis-py client library, has been fixed.
The company has also made changes to its software and practices to prevent similar incidents from happening again. These changes include adding “redundant checks” to ensure the data being served belongs to the user requesting it and reducing the likelihood of errors occurring under high loads.
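A "redundant check" of the kind mentioned above could look like the following sketch. The function and data layout are assumptions for illustration only (OpenAI has not published its implementation): the idea is simply that the serving layer verifies the owner recorded inside the cached payload against the requesting user, so a misrouted cache reply is rejected rather than displayed.

```python
def fetch_cached_profile(cache, user_id):
    """Fetch a cached record, but refuse to serve it unless the
    owner embedded in the payload matches the requester."""
    record = cache.get(f"profile:{user_id}")
    if record is None:
        return None
    # Redundant check: even if the cache layer misbehaves and returns
    # another user's record, do not serve it.
    if record.get("user_id") != user_id:
        raise RuntimeError("cache returned data for a different user")
    return record

# Simulate a corrupted cache: the key says alice, the payload says bob.
cache = {"profile:alice": {"user_id": "bob", "email": "bob@example.com"}}
try:
    fetch_cached_profile(cache, "alice")
    served = True
except RuntimeError:
    served = False  # the redundant check blocked the mismatched record
print("served:", served)
```

The check costs one comparison per request but turns a silent cross-user leak into a loud, loggable error.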
It is important to note that open-source software, while essential for the modern web, can pose its own challenges. Bugs in open-source software can affect a wide range of services and companies, and malicious actors can potentially target specific software to introduce an exploit. Therefore, it is crucial to have proper checks and safeguards in place to prevent such incidents from happening and to be prepared for them if they do.
Open-Source Library Bug Behind Data Leak
OpenAI recently published a post-mortem report disclosing that a Redis client open-source library flaw led to a data breach in its ChatGPT service. This resulted in the disclosure of chat queries and personal information of approximately 1.2% of ChatGPT Plus users.
OpenAI stated in a post-mortem report, “Following our detection of a bug in the redis-py open-source library used by our Redis client, we promptly contacted the Redis maintainers and provided them with a patch to address the issue.”
The company immediately contacted Redis maintainers and provided them with a patch to fix the bug. Upon investigation, OpenAI found that the same bug also caused the unintentional exposure of payment-related data of active ChatGPT Plus subscribers during a nine-hour window.
According to the post-mortem report, “Upon conducting a more extensive examination, we uncovered that the identical bug might have led to the inadvertent exposure of payment-related details for 1.2% of ChatGPT Plus subscribers who were using the service during a specific nine-hour timeframe.”
The compromised information comprises users’ names, email addresses, payment addresses, and the last four digits of their credit card numbers and expiration dates. OpenAI CEO Sam Altman has apologized for the leak and confirmed that a fix had been released and validated. He expressed his regret for the incident and the impact it had on users.
OpenAI has acknowledged that earlier this week, ChatGPT briefly exposed random users' conversation records. On Wednesday, Sam Altman, the CEO of OpenAI, tweeted, "We feel bad about this." Previous discussions with ChatGPT are archived and shown to the user as a running log of all their text inputs into the application. On Monday morning, several users discovered that the chat history feature was displaying strange old chats that appeared to belong to other users. On the same day, ChatGPT also experienced an outage.
OpenAI first kept quiet about the issue, but on Wednesday, Altman finally confirmed the data vulnerability. He stated in a tweet that "a small number of users were able to access the titles of other users' chat history." Altman attributed the privacy incident to a software issue in an undisclosed "open source library," although some people had speculated it was the result of a hack. The patch that OpenAI released has been verified, which is good news. However, it appears that users may have lost their conversation history for Monday, March 20.