CEO Reacts As Europol Reveals That Criminals Are Using AI For Malicious Purposes, And Not Just For Deepfakes

By   ISBuzz Team
Writer , Information Security Buzz | Nov 23, 2020 05:40 am PST

Cybercriminals will use AI in multiple ways: AI systems themselves are a weakness, since they increase the potential attack surface, while other forms of AI, such as deepfakes, are being weaponised to attack.

A new report from Europol warns that new screening technology will be needed to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.


Ilia Kolochenko, Founder and CEO
November 23, 2020 1:43 pm

Cybercriminals have been leveraging Machine Learning (ML) and Artificial Intelligence (AI) for years already. Thanks to the growing abundance of Machine Learning frameworks and data-processing capacity available at very affordable prices, Machine Learning has become omnipresent and easily accessible even to small cyber gangs.

At ImmuniWeb, we have started to see proposals on the Dark Web related to implementation and maintenance of Machine Learning models for a wide spectrum of criminal purposes, spanning from improving phishing campaigns and identity theft to smart WAF bypass and exploitation of web-based vulnerabilities undetectable by automated scanners.

Cybercriminals will likely outstrip cybersecurity companies in the practical usage of ML/AI in the near future. Most of the outcomes, however, are unlikely to bring substantial changes or novel major risks, given that ML/AI is narrowly applied to accelerate, amplify and enhance existing attack vectors and techniques.

