As we strive to “Secure Our World” this Cybersecurity Awareness Month, a few irrepressible haunts keep rearing their ugly heads. Here are some of the most malicious monsters hiding under our proverbial cybersecurity beds and what we need to know to stay safe this season.
AI-generated misinformation
From a fake social media Tom Cruise (old news) to a more recent, and more serious, slew of political spoofs, audio and visual fakes are being weaponized by anyone with access to cheap Artificial Intelligence (AI). Here are some real-life frights:
- A fabricated video of Moldova’s pro-Western president backing a Russian-friendly political party.
- Spoofed audio of the leader of Slovakia’s liberal party discussing raising the price of beer.
- A deepfaked video of a lawmaker in conservative, Muslim-majority Bangladesh wearing a revealing swimsuit.
AI-generated misinformation is so alarming because AI regulation is still nascent, and there are no watermarks denoting what is real and what is not (at least not across the board, and at least not yet). Without careful, critical investigation, people could be misled en masse. And as the technology improves, it is getting harder to see what is behind the mask.
A recent CSIS report, Artificial Intelligence and National Security: The Importance of the AI Ecosystem, discusses national security issues that could help keep these problems at bay: operationalizing AI, using AI ethically, international approaches to AI in national security, and more.
AI-generated phishing
Phishing is now cannier than ever, thanks to generative AI. Previously, attackers had to know a language well, write it convincingly, or get exceptionally lucky with an online translator to craft a credible phishing email in a tongue that wasn’t their own.
Today, all they have to do is prompt a generative AI model like ChatGPT to produce convincing copy in any language. This not only improves an individual attack’s chances of “success” but opens up whole markets that were once beyond (and therefore safe from) the reach of criminal phishing operations.
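Since flawless prose no longer signals a legitimate sender, technical authentication signals matter more than ever. Below is a minimal, illustrative Python sketch, using only the standard library’s email module, that flags messages whose SPF, DKIM, or DMARC checks did not pass. The helper name, sample message, and header values are hypothetical; real Authentication-Results headers vary by mail provider.

```python
# Minimal sketch: with AI erasing phishing's classic language "tells,"
# sender-authentication results become one of the few dependable signals.
# The sample message below is hypothetical.
from email import message_from_string

def failed_auth_checks(raw_message: str) -> list[str]:
    """Return the sender-authentication checks that did not pass."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    return [check for check in ("spf", "dkim", "dmarc")
            if f"{check}=pass" not in results]

raw = """\
From: ceo@example.com
Authentication-Results: mx.example.net; spf=pass; dkim=fail; dmarc=fail
Subject: Urgent: wire transfer needed today

Please send the payment to the new vendor account immediately.
"""
print(failed_auth_checks(raw))  # ['dkim', 'dmarc'] -- treat with suspicion
```

A real mail gateway already performs these checks; the point is that recipients and filters should weight them more heavily now that polished language is cheap.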
AI-powered voice spoofing
The internet is rife with relatively new AI voice changers (“absolutely free!”), and it’s only a matter of time before one of those uncannily contrived robocalls reaches each of us. When we pick up the phone, we may very well be speaking with an AI chatbot, enhanced with a human-sounding, AI-generated “voice,” reading from an AI-generated script.
Attackers use voice cloning throughout the attack lifecycle: for initial access, lateral movement, and, unsurprisingly, privilege escalation. Imagine getting a call in your boss’s voice requesting that you grant the new system administrator write access instead of read-only. Wouldn’t you do it?
That’s what most voice scam cybercriminals are counting on.
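The strongest counter here is procedural rather than technological: never act on the inbound call itself. The hypothetical Python sketch below (the directory contents and function names are illustrative assumptions, not any real product’s API) gates a sensitive change behind a callback to a number pulled from an authoritative directory.

```python
# Hypothetical sketch of an out-of-band callback gate for sensitive
# requests made by phone. All names and data here are illustrative.
from dataclasses import dataclass

# Callback numbers come from an authoritative directory,
# never from the suspicious call itself.
DIRECTORY = {"alice.boss": "+1-555-0100"}

@dataclass
class ChangeRequest:
    requester: str   # who the caller claims to be
    action: str      # e.g. "grant write access to the new sysadmin"

def callback_number(req: ChangeRequest) -> str | None:
    """Return the directory number to verify against, or None if the
    claimed requester is unknown (deny by default)."""
    return DIRECTORY.get(req.requester)

req = ChangeRequest("alice.boss", "grant write access to the new sysadmin")
number = callback_number(req)
if number is None:
    print("Unknown requester: request denied.")
else:
    print(f"Hang up, then call {number} to confirm: {req.action!r}")
```

However convincing the cloned voice, it cannot answer a phone number it doesn’t control.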
Business Email Compromise (BEC) attacks
In the FBI’s 2023 Internet Crime Report, BEC was blamed for $2.9 billion in adjusted losses, roughly 49 times the losses attributed to ransomware, the usual “villain” of cybersecurity stories.
A BEC attack is a financially motivated spear-phishing attack in which criminals target employees to trick them into sending money to fraudulent accounts. Usually posing as a fellow employee or even the victim’s boss, these fraudsters have only benefited from AI’s powerful deepfake video and audio capabilities.
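One long-standing BEC tell survives the AI upgrade: the display name impersonates a known executive while the underlying address sits outside the company domain. Below is a small, illustrative Python sketch of that heuristic; the executive list and domain are assumptions for the example, not real data.

```python
# Hypothetical sketch of a classic BEC heuristic: executive display
# name, external sending address. Names and domains are illustrative.
from email.utils import parseaddr

EXECUTIVES = {"Pat Smith", "Alex Jones"}   # assumed list of senior staff
COMPANY_DOMAIN = "example.com"             # assumed corporate domain

def looks_like_bec(from_header: str) -> bool:
    """Flag mail whose display name matches an executive but whose
    address is not on the corporate domain."""
    display_name, address = parseaddr(from_header)
    impersonates_exec = display_name in EXECUTIVES
    external = not address.lower().endswith("@" + COMPANY_DOMAIN)
    return impersonates_exec and external

print(looks_like_bec('"Pat Smith" <pat.smith@freemail.example>'))  # True
print(looks_like_bec('"Pat Smith" <pat.smith@example.com>'))       # False
```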
AI: Partnering for Protection
It’s time to beat well-equipped attackers at their own game. Two can play at the AI gambit, and it’s about time threat actors got a scare for once.
To this end, global companies have been developing trusted AI systems, asking questions like, “How can AI help us make better decisions at critical moments? How do we build AI systems we can rely on?” AI is already being used to extract additional value from Big Data, and the security field is poised to see the same gains.
For example, Thales has teamed up with the French Alternative Energies and Atomic Energy Commission (CEA) to advance research in the field of generative AI. Their goal? To deliver trusted, sovereign AI solutions. Says Bertrand Tavernier, CTO for Thales’s Secure Communications and Information Systems business:
“This partnership with the CEA’s AI teams will combine the power of their research with our work at Thales’s AI accelerator, which brings together the Group’s technological expertise and deep knowledge of the defence and security sectors. Our customers — governments, armed forces, critical infrastructure operators — need trusted, sovereign generative AI solutions for their critical missions.”
Although creepier uses for AI abound, Thales’s prowess in AI research gives defenders little to fear. The company is Europe’s top applicant in the field of AI for critical applications and has successfully infused AI into over one hundred of its solutions, boasting:
- More than 600 AI experts
- Roughly 100 AI doctoral students
- A top-tier network of academic, industrial, and entrepreneurial partners
Those interested in boosting their chances of survival this Cybersecurity Awareness Month would do well to study AI’s frightening applications, cast a critical eye on the online media they encounter, and investigate the growing lineup of AI-powered defenses.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.