New Rise In ChatGPT Scams By Fraudsters Reported

By Adeola Adegunwa
Writer, Informationsecuritybuzz | Mar 09, 2023 01:19 pm PST

Since the release of ChatGPT, the cybersecurity company Darktrace has warned that it has observed a rise in criminals using artificial intelligence to craft more intricate schemes to defraud employees and break into organizations.

The Cambridge-based company said that AI was further enabling “hacktivist” cyberattacks that employ ransomware to extort money from businesses. The company recorded a 92% decline in operating earnings for the half-year ending in December.

The firm said that since the release of the wildly successful Microsoft-backed AI tool ChatGPT in November of last year, it had observed the emergence of more convincing and sophisticated scams.

Darktrace reported that while the number of email attacks among its clientele has remained constant since ChatGPT’s launch, those relying on tricking victims into clicking malicious links have decreased. In contrast, linguistic complexity, including text volume, punctuation, and sentence length, has increased.

This suggests that fraudsters are shifting their attention to trickier social engineering schemes that exploit user trust. Darktrace said the phenomenon had so far changed the tactics of the existing cohort of fraudsters rather than produced a new wave of cybercriminals.
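Darktrace has not published its exact methodology, but the linguistic signals it names, text volume, punctuation, and sentence length, are straightforward to measure. The short Python sketch below is a hypothetical illustration of those three features only; it is not Darktrace's detection logic, and the sample message is invented.

```python
import re

def complexity_features(message: str) -> dict:
    """Measure the three simple linguistic signals named above for one message."""
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", message) if s.strip()]
    words = message.split()
    return {
        "text_volume_chars": len(message),
        "punctuation_marks": sum(ch in ",.;:!?'\"()-" for ch in message),
        "avg_sentence_length_words": len(words) / max(len(sentences), 1),
    }

# Invented example message; longer, more elaborate text scores higher
# on all three features.
print(complexity_features(
    "Dear colleague, please review the attached invoice carefully. "
    "Kindly confirm receipt by end of day; finance requires your approval."
))
```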

Fraudsters Adopting ChatGPT Scams

An anti-virus software firm is warning that cybercriminals are employing the artificial intelligence chatbot ChatGPT to swiftly produce emails or social media postings to trick the public into falling for fraud.

Following Microsoft’s multibillion-dollar investment in January in ChatGPT maker OpenAI, the chatbot is slated to be integrated across Microsoft applications, including Word, PowerPoint, and Outlook.

The chatbot has generated much interest for its ability to produce stories, poems, and answers to users’ questions, but it also appears to have a sinister side.

While Kevin Roundy, senior technical director at Norton, was intrigued by the possibilities of chatbots like ChatGPT, he was equally concerned about their potential for abuse by hackers. “We’ve shown that ChatGPT can be used to quickly and easily create convincing threats,” he said. “We know that hackers adapt quickly to the newest technology.”

These dangers included the ability of cybercriminals to quickly produce convincing email or social media phishing lures, making it difficult for individuals to distinguish authentic content from fake.

Moreover, ChatGPT can produce code. According to Roundy, while the chatbot makes life easier for developers by allowing them to write and translate source code, it could equally make life easier for hackers by letting them create scams faster and make those scams harder to detect.

With ChatGPT, cybercriminals can also build phony chatbots that imitate people or trusted organizations such as banks or government agencies, tricking victims into handing over personal information that can then be used to access sensitive data, steal money, or commit fraud.

Tips To Avoid Phishing Scams Related To ChatGPT

Keep in mind that ChatGPT itself, as an AI language model, does not send phishing messages. Unfortunately, fraudsters are exploiting the ChatGPT name to deceive consumers into disclosing personal information or clicking on dangerous links. Here are some safety suggestions:

  • Be wary of unsolicited emails or texts that request personal information, such as passwords or bank account numbers, under the guise of being from ChatGPT. ChatGPT will never request this information.
  • Thoroughly verify the email or website address to ensure it is valid. Fraudsters may set up fake websites or email addresses that resemble ChatGPT’s official ones (see the sketch after this list).
  • Don’t open attachments or click links in shady emails or communications. These might have viruses or malware in them.
  • To further protect your device from different malware and viruses, use anti-virus software and keep it updated.
  • If you suspect you have received a phishing email or message, do not reply to it or click any links. Instead, report it to the proper authorities, such as the IT department at your workplace or the Anti-Phishing Working Group.
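The advice in the second bullet about verifying addresses can be partly automated. The sketch below is a minimal, hypothetical Python example of checking a link's hostname against an allowlist of legitimate domains; the allowlist and the example URLs are assumptions for illustration, not an authoritative list.

```python
from urllib.parse import urlparse

# Illustrative allowlist; confirm the real set of legitimate domains
# independently before relying on a check like this.
LEGITIMATE_DOMAINS = {"openai.com", "chat.openai.com"}

def looks_legitimate(url: str) -> bool:
    """Return True only if the URL's hostname is an exact match for,
    or a subdomain of, a domain on the allowlist."""
    hostname = (urlparse(url).hostname or "").lower()
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in LEGITIMATE_DOMAINS
    )

# Lookalike domains commonly used in phishing fail the check:
print(looks_legitimate("https://chat.openai.com/auth/login"))   # True
print(looks_legitimate("https://chat-openai.com.example.net"))  # False
print(looks_legitimate("https://openai.com.login-verify.ru"))   # False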

Conclusion

Since ChatGPT’s launch, Darktrace has warned of a rise in criminals using AI to craft more convincing schemes to defraud employees and break into organizations, as well as “hacktivist” ransomware attacks aimed at extorting money from businesses; the company itself recorded a 92% decline in operating earnings for the half-year ending in December.

While the volume of email attacks among Darktrace’s clients has remained constant since ChatGPT’s launch, attacks that rely on tricking victims into clicking malicious links have decreased, and the linguistic complexity of phishing messages, including text volume, punctuation, and sentence length, has increased. This suggests fraudsters are shifting toward trickier social engineering schemes that exploit user trust. Darktrace’s assessment is that, so far, ChatGPT has changed the tactics of the existing cohort of fraudsters rather than produced a new wave of cybercriminals.
