Picture this: you are a developer working tirelessly to streamline your workflows and keep up with the ever-increasing demands of your organization. But what if the AI and automation tools you rely on to make your job easier could be used against you? That is the reality with ChatGPT.
This article covers the potential risks and vulnerabilities of using AI-powered tools like ChatGPT in the workplace, particularly for DevOps teams and developers. It also provides tips on protecting your organization from ChatGPT-generated attacks.
ChatGPT Isn’t a Silver Bullet
ChatGPT is at the forefront of the generative AI movement – at least in the eyes of the public. Developed by OpenAI, it is a language model trained on a vast amount of text data. This cutting-edge AI tool uses a deep neural network to generate human-like responses to prompts or questions, providing a more sophisticated and nuanced interaction between humans and machines.
However, the rise of generative AI, including ChatGPT, has also raised concerns about the potential misuse of the technology for nefarious purposes, such as cyberattacks. Italy recently became the first Western country to ban ChatGPT, and most organizations recognize the importance of implementing new safeguards to prevent cyberattacks.
While ChatGPT can improve efficiency and velocity, developers must understand the technology’s capabilities and limitations and apply it appropriately. For example, Check Point Research recently demonstrated how ChatGPT could create a full infection flow, from spear-phishing to running a reverse shell that accepts commands in English.
Generative AI’s Potential for Software Development
Generative AI has the potential to revolutionize software development by automating the creation of code. This technology is based on machine learning algorithms that can analyze large amounts of data and generate code based on that analysis. As such, it can help developers save time and increase productivity by automating repetitive tasks, allowing them to focus on more creative aspects of software development. For instance, generative AI can create code templates for common tasks such as database integration, user authentication, or data visualization, helping developers speed up the development process and reduce the likelihood of human errors.
Generative AI can also improve software quality by flagging bugs, generating code that is less error-prone, and spotting recurring patterns of errors or inefficiencies. As generative AI becomes more widespread, it will likely transform how software is developed and tested, leading to faster and more reliable software development.
5 Ways Hackers Will Use ChatGPT for Cyberattacks
1. Malware Obfuscation
Threat actors use obfuscation techniques to mutate malware so that its signature changes, bypassing traditional signature-based security controls. Each time researchers at CyberArk interacted with ChatGPT, it returned distinct code, capable of producing multiple iterations of the same malware application. Hackers could therefore use ChatGPT to generate a virtually infinite number of malware variants that traditional signature-based security controls would struggle to detect.
By leveraging the capabilities of ChatGPT, hackers can create polymorphic malware that can evade detection and continue to infect systems over a prolonged period. Additionally, ChatGPT can be used to craft sophisticated phishing attacks that can trick even the most cautious users into divulging sensitive information.
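The evasion problem is easy to state concretely: a hash-based signature matches one exact byte sequence, so even a trivial rewrite of the same logic looks like a brand-new sample. A minimal sketch, using two harmless snippets as stand-ins for generated malware variants:

```python
import hashlib

# Two functionally identical snippets -- stand-ins for two AI-generated
# variants of the same payload. Only the surface text differs.
variant_a = "for i in range(10): print(i)"
variant_b = "i = 0\nwhile i < 10:\n    print(i)\n    i += 1"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A signature database containing sig_a will not flag variant_b,
# even though both do exactly the same thing when run.
print(sig_a == sig_b)  # False: same behavior, different signature
```

This is why defenses against polymorphic malware lean on behavioral analysis and heuristics rather than exact-match signatures alone.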
2. Phishing and Social Engineering
In the past, phishing attempts were often easy to spot thanks to glaring grammar and spelling errors. With ChatGPT, however, cybercriminals can create convincing, accurate phishing messages that are almost indistinguishable from legitimate ones, making it far easier to trick unsuspecting individuals.
Software company BlackBerry has shared examples of phishing hooks and business email compromise messages that ChatGPT can create, despite OpenAI having implemented measures to prevent it from responding to such requests.
3. Ransomware and Financial Fraud
Because ChatGPT can generate human-like responses and understand natural language, hackers can use it to craft spear-phishing emails that are more convincing and tailored to their targets, increasing the chances of success. For example, it can facilitate fraudulent investment opportunities and CEO fraud. Hackers can use it to generate fake investment pitches or emails impersonating CEOs or other high-level executives, tricking unsuspecting victims into sending money or sensitive information.
Furthermore, ChatGPT can automate the process of creating malware and encryption algorithms – even hackers with limited technical experience can use advanced AI to build the core elements of ransomware-type programs, making it easier for them to launch attacks. Another potential implication of ChatGPT for ransomware is its ability to learn from past attacks and adapt to new security measures.
In response to this evolving threat, organizations can conduct regular security audits, use advanced threat detection tools, and provide regular cybersecurity training to employees.
4. Telegram OpenAI Bot
OpenAI bots offered as a service over Telegram have drawn interest from developers and hackers alike. Check Point Research recently discovered that hackers had found a way to bypass ChatGPT’s restrictions and are selling the resulting illicit services in underground crime forums.
The hackers’ technique involves using the application programming interface (API) for OpenAI’s text-davinci-003 model instead of the ChatGPT variant of the GPT-3 models designed explicitly for chatbot applications. OpenAI makes the text-davinci-003 API and other model APIs available to developers to integrate the AI into their own applications. However, these API versions do not enforce the same restrictions on malicious content.
As a result, the hackers have found that they can use the current version of OpenAI’s API to create malicious content, such as phishing emails and malware code, without the barriers OpenAI has set.
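Developers who wrap these raw completion APIs in their own applications can add a content-screening layer of their own rather than relying on the model to refuse. A minimal illustrative prefilter follows; the blocklist, threshold-free matching, and function names are assumptions for the sketch, not OpenAI’s actual policy or API:

```python
# Illustrative prompt prefilter for an app that wraps a raw completions
# API (which, unlike the ChatGPT product, applies fewer content
# restrictions of its own). The blocklist here is a toy example; a real
# deployment would use a dedicated moderation service.
BLOCKED_TERMS = {"reverse shell", "keylogger", "ransomware", "phishing email"}

def should_block(prompt: str) -> bool:
    """Return True if the prompt contains any blocked term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def complete(prompt: str) -> str:
    """Screen the prompt, then (hypothetically) forward it to the model."""
    if should_block(prompt):
        return "Request refused by content policy."
    # Here the app would forward the prompt to the model API,
    # e.g. an HTTP POST to the completions endpoint.
    return "<model response>"

print(complete("Write a phishing email impersonating a CEO"))
```

A naive keyword filter like this is trivially evadable, which is precisely the point of the Telegram-bot finding: once the restriction lives outside the model, attackers route around whichever layer is weakest.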
One user on a forum is now selling a service that combines the API and the Telegram messaging app. The first 20 queries are free; after that, users pay $5.50 for every 100 queries. This raises concerns among security experts, who worry that this service will only encourage more hackers to use AI-powered bots to create and spread malicious content.
5. Spreading Misinformation
The recent discovery of a fake ChatGPT Chrome browser extension that hijacks Facebook accounts and creates rogue admin accounts is just one example of how cybercriminals exploit the popularity of OpenAI’s ChatGPT to distribute malware and spread misinformation.
The extension, promoted through Facebook-sponsored posts and installed 2,000 times per day since March 3, was engineered to harvest Facebook account data using an already active, authenticated session. Hackers used two bogus Facebook applications to maintain backdoor access and obtain full control of the target profiles. Once compromised, these accounts were used for advertising the malware, allowing it to propagate further.
ChatGPT is Here to Stay – So Are the Threats
ChatGPT and generative AI have the potential to revolutionize the software and cybersecurity industries, offering greater efficiency, velocity, and workload management. However, this technology also poses serious risks, particularly in the hands of hackers who can exploit its capabilities to write malware, generate spear-phishing emails, and craft convincing ransomware attacks.
As organizations increasingly adopt these technologies, it is crucial to prioritize cybersecurity measures and establish robust defenses in light of the new threat landscape. With the right approach, we can leverage the benefits of ChatGPT and generative AI while mitigating the risks and ensuring a secure and resilient cyber ecosystem.