Hackers Bypass ChatGPT Restrictions Via Telegram Bots

By Adeola Adegunwa
Writer, Information Security Buzz | Feb 09, 2023 09:05 am PST

Researchers revealed on Wednesday that hackers have found a way to bypass ChatGPT’s restrictions and are using it to sell services that let users generate malware and phishing emails. ChatGPT is a chatbot that uses artificial intelligence to answer questions and carry out tasks in a way that mimics human output.

People can use it to draft documents, write basic computer code, and perform other tasks. The service actively blocks requests to create potentially unlawful content: asked to write a phishing email or code that steals data from a hacked device, it refuses, responding that such content is “illegal, unethical, and damaging.”

According to analysts at the security firm Check Point Research, hackers have found a simple way around those prohibitions and are using it to advertise illicit services on crime forums. Instead of the web-based interface, the technique works through the ChatGPT application programming interface (API), which OpenAI offers so that developers can integrate the AI bot into their own applications. It turns out the API version imposes no limits on harmful content.
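To make the distinction concrete, here is a minimal sketch of what a direct call to OpenAI’s completions API looked like in early 2023. The endpoint, model name, and request shape follow OpenAI’s then-current public documentation; the prompt is deliberately benign.

```python
# Minimal sketch of a direct request to OpenAI's completions API,
# as the API surface stood in early 2023 ("text-davinci-003" and the
# /v1/completions endpoint). Unlike the ChatGPT web UI, the raw API
# applied far fewer content checks at the time.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # caller's own API key

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "text-davinci-003",
        "prompt": "Write a short note reminding staff about the fire drill.",
        "max_tokens": 200,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```

The point is that nothing here is an exploit: the “bypass” is simply moving from the filtered web interface to the programmatic one, where the same model answers with fewer checks.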

Telegram Bots-as-a-Service To Create Malicious Code

The researchers noted that the current version of OpenAI’s API, which is used by external applications (for example, to integrate OpenAI’s GPT-3 model into Telegram chats), enforces very few anti-abuse safeguards. Unlike ChatGPT’s web interface, it applies none of the restrictions or barriers that block harmful content, making it possible to generate phishing emails and malware code.

One forum member is currently offering a service that pairs the API with the Telegram messaging app. The first 20 queries are free; after that, users are charged $5.50 per 100 queries.
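The sellers’ actual code is not public, but architecturally such a bot is simple. The sketch below, which assumes the python-telegram-bot (v13.x) and openai (v0.x) libraries that were current at the time, shows how Telegram messages could be relayed to the GPT-3 API; the per-user counter is a hypothetical stand-in for the 20-free-queries, $5.50-per-100 billing scheme described above.

```python
# Sketch of a Telegram bot that relays user messages to the GPT-3 API.
# Assumed library versions: python-telegram-bot 13.x, openai 0.x.
# The quota logic is hypothetical -- the real services' billing code
# is not public.
import os
from collections import defaultdict

import openai
from telegram.ext import Updater, MessageHandler, Filters

openai.api_key = os.environ["OPENAI_API_KEY"]

FREE_QUERIES = 20                # first 20 queries are free
query_counts = defaultdict(int)  # in-memory per-user query counter

def relay(update, context):
    user_id = update.effective_user.id
    query_counts[user_id] += 1
    if query_counts[user_id] > FREE_QUERIES:
        # A real service would check payment here ($5.50 per 100 queries).
        update.message.reply_text("Free quota exhausted; payment required.")
        return
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=update.message.text,
        max_tokens=256,
    )
    update.message.reply_text(completion.choices[0].text.strip())

updater = Updater(os.environ["TELEGRAM_BOT_TOKEN"])
updater.dispatcher.add_handler(
    MessageHandler(Filters.text & ~Filters.command, relay)
)
updater.start_polling()
updater.idle()
```

Little more than glue code is required, which helps explain how quickly these bots-as-a-service appeared once the API became available.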

Check Point researchers tested the bypass to see how well it worked. The result: a phishing email, plus a script that steals PDF files from an infected PC and sends them to an attacker over FTP.

[Screenshot: underground forum advertisement of an OpenAI bot on Telegram]

Meanwhile, other forum participants are posting code that generates malicious content for free. “Here’s a short bash script to help you get around ChatGPT’s limitations, so you are free to use it however you choose, including malware development ;),” one user wrote.

Last month, Check Point researchers showed how ChatGPT could be used to create malware and phishing messages.

“During December and January, it was still easy to use the ChatGPT web user interface to generate malware and phishing emails (mostly, just basic iteration was enough), and based on the chatter of cybercriminals, most of the samples we showed were likely created using the web UI,” said Check Point researcher Sergey Shykevich. “Recently, it appears that ChatGPT’s anti-abuse controls were significantly improved, so cybercriminals have shifted to its API, which has far fewer restrictions.”

OpenAI, the San Francisco-based company that created ChatGPT, did not immediately respond to an email asking whether it was aware of the findings or planned to change the API. Updates to this post will be made if we hear back.

Producing malware and phishing emails is only one way ChatGPT is opening a Pandora’s box of hazardous content. Other harmful or unethical uses include invading privacy, generating misinformation, and completing academic assignments. Defenders can, of course, harness the same generative ability to build methods for detecting and blocking such content, but it’s unclear whether the good uses will be able to keep up with the bad ones.

Conclusion

According to Check Point Research, hackers are peddling malware kits with built-in ChatGPT features that make it simple for users to develop malicious code or phishing emails. The security firm previously observed hackers using ChatGPT to improve the code of a basic infostealer dating from 2019.

ChatGPT’s API, however, offers far more potent and hazardous content-creation tools. According to Check Point, OpenAI built limits into ChatGPT’s web-based interface to stop the language model from producing harmful text for general users. Asked to write malware code or a phishing email, for instance, ChatGPT refuses and explains that creating and disseminating such content would be prohibited.
