As we mark the second anniversary of ChatGPT's release, businesses across all sectors have adopted generative AI tools to create content of all kinds. But many are discovering that these tools' capabilities go far beyond writing blog posts or creating stunning images.
These tools can think, ideate, and offer advice and recommendations for a wide range of business issues based on analysis of massive amounts of data.
The opportunities are nearly limitless, and many apply directly to the world of cybersecurity. For instance, can AI think and plot like a hacker? Can it build defenses against increasingly clever threat actors?
The short answer is yes.
The Offensive and Defensive Implications of AI
When we think about AI and cybersecurity, we tend to think of the growing number of AI-powered hacks and deepfakes being perpetrated and making headlines around the world. In fact, 75% of security pros say they've witnessed an increase in cyberattacks, with 85% of those attacks powered by generative AI.
But AI can also have a positive impact on cybersecurity, just as it is transforming industries like healthcare and education and unlocking unprecedented insights and efficiencies.
As we consider how AI might be used to help thwart cyberattacks, it’s important to understand how criminal hackers think.
Inside the Hacker Mindset
First of all, the term hacker isn't inherently negative. Hackers are simply people who focus on understanding how systems work, identifying vulnerabilities, and devising clever ways those vulnerabilities can be exploited. Ethical hackers use that knowledge to help fix the vulnerabilities; criminal hackers use it for their own benefit, and our misfortune.
Hackers are curious, and they don't think the way the rest of us tend to. They don't think in straight lines; they pick up on connections most of us are likely to miss. Where most of us struggle to think outside the box, hackers thrive, approaching problems sideways, upside down, and inside out.
So how can we beat criminal hackers at their own game and leverage AI to help identify and minimize the risks posed by malicious actors?
Building Up AI-Powered Defenses
There are a number of ways AI tools can be leveraged to help in the battle against cyberattacks:
- Advanced threat detection. AI tools can identify patterns and anomalies that humans are unlikely to spot, detecting risks before they become reality (a simplified sketch follows this list). For example, Honeywell developed an AI-driven platform that can quickly analyze enormous amounts of data and flag unusual patterns that indicate a potential cyberthreat.
- Fact-checkers on steroids. AI tools like Full Fact, ClaimBuster, and Chequeado are high-powered systems that cross-reference claims against massive databases of verified information and flag potential misinformation in real time. Keep in mind that much of the generally available online data can't be trusted; it has already been poisoned with massive amounts of misinformation. The verified databases these systems rely on are different.
- Deepfake detectors. While they’re not quite there yet, tools like Microsoft’s Video Authenticator and Deeptrace are being used to analyze pixel patterns and other cues to help spot AI-generated videos before they’re able to manipulate viewers and do harm. AI-detection filters are continuing to improve.
- Invisible watermarks. Companies are developing invisible watermarks for various types of digital content, from images to videos to text, that give each piece of content its own unique ID card and make its authenticity easier to verify (a simplified text-watermarking sketch also follows this list).
- Predictive security measures. AI can predict likely vulnerabilities based on historical data, giving defenders the chance to address them before any damage occurs.
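To make the threat-detection idea concrete, here is a minimal, illustrative sketch of anomaly detection using scikit-learn's IsolationForest. It is not any vendor's actual product; the feature set (data volume, failed logins, session length) and the synthetic "normal" traffic are assumptions made purely for illustration.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per session: [MB transferred, failed logins, session minutes]
normal_traffic = rng.normal(loc=[50, 1, 30], scale=[15, 1, 10], size=(500, 3))

# Train on historical "normal" activity so the model learns what routine looks like
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new events: predict() returns 1 for normal-looking events, -1 for anomalies
new_events = np.array([
    [55, 0, 28],    # resembles routine activity
    [900, 40, 2],   # huge transfer, many failed logins, very short session
])
print(model.predict(new_events))  # e.g. [ 1 -1 ]
```

In practice, the features would come from real logs and the flagged events would feed an analyst's queue, but the core idea is the same: learn a baseline of normal behavior and surface whatever deviates from it.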
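And to illustrate the invisible-watermark idea for text, here is a simplified, hypothetical sketch that hides an identifier inside zero-width Unicode characters. Real watermarking schemes are far more robust and tamper-resistant; the function names and encoding here are made up for illustration only.

```python
# Hide an ID in text using zero-width characters (invisible when rendered)
ZERO = "\u200b"  # zero-width space  -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag, encoded as invisible bits, to the end of the text."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    hidden = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + hidden

def extract_watermark(text: str) -> str:
    """Recover the hidden tag by reading back the zero-width characters."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    if not bits:
        return ""
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="replace")

marked = embed_watermark("Quarterly report, final draft.", "org:acme/id:42")
print(extract_watermark(marked))  # org:acme/id:42
```

The point is the "ID card" concept the bullet describes: the content carries a machine-readable mark that readers never see but verification tools can check.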
That's really just the tip of the iceberg; new opportunities to harness AI's power to think like a hacker are continually emerging. Keep in mind, though, that technology isn't the only tool for protecting against cyberattacks. People are your front line of defense. Don't overlook their potential.
People Power as Your First and Last Lines of Defense
As tech-based defenses grow stronger, cybercriminals increasingly target the human layer. It's incumbent on security teams to strengthen that layer and make people a strong, resilient defense against cyber threats. To get the most from your team, provide ongoing training, promote awareness, and ensure easy access to information. Encourage a security-focused culture by sharing effective strategies and learning from past experience, and foster an open, transparent environment where successes and failures can be freely discussed and analyzed. This approach can prove more effective than even the most advanced technical controls.
Can AI think like a hacker? Absolutely, and its capabilities keep evolving. Instead of viewing AI solely as a malicious force intent on compromising your data and systems, consider how the technology can be used to outsmart attackers. But don't rely on AI alone: equip and prepare your team to play a significant role in your cybersecurity efforts as well.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.