Hackers already use automated software to carry out large-scale attacks. As the artificial intelligence industry works on next-generation systems, it may not be long before hackers use AI to deploy ransomware against targets worldwide.
AI Development Is Considered a Priority by Governments and the Industry
Artificial intelligence has become a highly competitive industry that is expanding rapidly thanks to investments by high-tech corporations and national governments worldwide. Deploying state-of-the-art agents in various fields promises both financial savings and relief from tasks currently performed by humans. While most individual AI agents will be used to automate rudimentary tasks, it is also possible to design intelligent systems capable of making critical decisions with a serious impact on the world.
Hackers Can Take Advantage of AI Development
It would not take long for hackers and criminal collectives to adopt the technology as well. Security experts speculate that once AI reaches consumer adoption, it would not be hard to turn the agents to malicious tasks. At the moment criminals rely on automated software, usually modular frameworks or scripts, that is modified and instructed to strike a specific set of predefined targets. To achieve the highest infection ratio, the criminals must define parameters such as the attack type and the end goals.
An artificial intelligence agent could automate this work. Several processes can be offloaded from the human operators:
- Selection ‒ Intelligent automation can evaluate which targets are most likely to be compromised. Until now, hackers have had to comb through networks manually using scripts and tools.
- Infiltration ‒ AI can automate the technical aspects of an intrusion into a designated computer or network. The agents can apply machine learning and related methods to work their way past the security measures put in place by administrators.
- Evasion ‒ Using advanced techniques, the malicious AI can hide the infection by manipulating the system and disabling active security components.
- Sabotage ‒ Once the agents have infected the systems, they can be used to deploy malware of all kinds, including advanced forms of ransomware.
At the moment most criminal organizations use well-known ransomware families and modify their source code to produce new samples. Recently the Dharma ransomware with the .arena extension caused many infections through a massive email spam campaign. In a similar fashion, an AI will be able to craft its own custom malware, potentially from scratch, by applying advanced machine learning algorithms.
Ransomware Deployment by AI Possible
Ransomware constitutes one of the most alarming threats to computer security in general. ENISA’s report for 2016 states that it showed the biggest growth across all tracked characteristics: number of attack campaigns, number of victims, average ransom paid, sophistication of infection methods, damage and criminal turnover. In the last few years the majority of security incidents appear to have originated from advanced ransomware samples.
A prospective malicious AI could coordinate hacker attacks on an unprecedented scale, as it could harness the resources of large botnets in an automated way. Security experts speculate that the bigger danger would probably be the creation of new samples by the artificial intelligence itself. Such systems could analyze the weak spots in human-created viruses and generate advanced forms of ransomware that severely impact the intended targets.
Unfortunately it would not be difficult for a reasonably advanced AI system to acquire the required information. Cybersecurity, one of the most dynamic fields in IT, depends on collaboration and cooperation between experts worldwide. As a result, a large part of the research is public, and it is relatively easy to obtain detailed information on how whole computer networks can be infiltrated.
How To Prevent Potential AI Ransomware Abuse
To prevent such scenarios from happening, computer scientists, government institutions and the industry as a whole must come up with a way to prevent the malicious use of artificial intelligence technologies. Fortunately, development has not reached this stage of maturity, and large-scale attacks are still within the realm of science fiction. However, that is likely to change in the coming years.
One of the possible ways of preventing AI security abuse is to implement built-in protocols, rooted in the “consciousness” of the agents, that forbid them from harming other systems. The exact definitions could be laid out in a standard issued by an organization or group such as the IEEE, similar to the way Internet technologies are governed. In practice, however, any developer with the required skills and source code can build an AI agent and make it fully operational according to their own needs. This scenario mirrors the present day ‒ both security specialists and criminals have access to the same technology. Big companies such as Facebook already use AI to combat criminals.
At the moment it is impossible to tell how artificial intelligence will develop and whether criminals will be able to use it with malicious intent. In any case, those who ride this new wave will, by definition, have an advantage over the other party. In a positive scenario, the deployment of intelligent agents in cybersecurity applications and services will provide effective and adequate protection against incoming ransomware and other related threats.
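To make the defensive side of that scenario concrete, the toy sketch below shows one very simple heuristic that automated protection tools can build on: watching a folder for a sudden burst of file modifications whose contents look encrypted (high Shannon entropy), which is a common symptom of ransomware at work. This is an illustrative assumption, not a description of any product mentioned in this article; the watched path, thresholds and polling interval are arbitrary placeholders, and a real system would combine many more signals.

```python
# A minimal, illustrative sketch of a heuristic ransomware detector: it polls a
# directory, computes the Shannon entropy of recently modified files, and raises
# an alert when many files suddenly look like encrypted data. All paths and
# thresholds are placeholder assumptions, not tuned or production-ready values.
import math
import os
import time

WATCH_DIR = "/path/to/watched/folder"   # placeholder path, adjust as needed
ENTROPY_THRESHOLD = 7.5                 # bits/byte; encrypted data approaches 8.0
BURST_THRESHOLD = 20                    # suspicious count of high-entropy changes
POLL_INTERVAL = 10                      # seconds between scans


def shannon_entropy(data: bytes) -> float:
    """Return the Shannon entropy of a byte string in bits per byte."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    entropy = 0.0
    for count in counts:
        if count:
            p = count / len(data)
            entropy -= p * math.log2(p)
    return entropy


def scan_once(last_scan: float) -> int:
    """Count files modified since last_scan whose content looks encrypted."""
    suspicious = 0
    for root, _dirs, files in os.walk(WATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getmtime(path) <= last_scan:
                    continue
                with open(path, "rb") as handle:
                    sample = handle.read(65536)  # sample the first 64 KiB
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if shannon_entropy(sample) >= ENTROPY_THRESHOLD:
                suspicious += 1
    return suspicious


if __name__ == "__main__":
    last_scan = time.time()
    while True:
        time.sleep(POLL_INTERVAL)
        count = scan_once(last_scan)
        last_scan = time.time()
        if count >= BURST_THRESHOLD:
            print(f"ALERT: {count} recently modified files look encrypted")
```

Even a crude signal like this hints at why defenders expect intelligent agents to help: the same monitoring loop can feed richer machine learning models that weigh many behavioral indicators instead of a single entropy threshold.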
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.