YouTube was forced to release a statement last week warning users that fraudulent artificial intelligence (AI)-generated videos depicting its CEO, Neal Mohan, announcing changes to monetization were in circulation. The deepfake videos were sent as private videos to the platform's content creators in a cynical attempt to scam them, install malware, and steal credentials.
In its statement, the YouTube team acknowledged the existence of the videos, reiterated that YouTube would never attempt to contact users or share information via a private video, and provided some guidance.
Video Nasty
Targeted users received an email that looked like it was from YouTube and claimed a private video had been shared. The footage showcased a realistic deepfake of Mohan, convincingly mimicking his appearance, voice, and mannerisms.
The video then asked targeted users to click a link leading to a page where they were required to confirm their understanding and acceptance of updated YouTube Partner Program (YPP) terms by signing into their account to continue monetizing their content. The page did nothing of the sort; it was, in fact, designed to fraudulently harvest users' credentials.
AI Advancements Changing the Game
Sadly, social media scams are nothing new. Other large platforms, such as LinkedIn and Facebook, have seen users targeted over the years. The concerning difference in recent attacks, however, is the use of AI technology to make them more prevalent, compelling, and convincing.
The recently released inaugural AI Safety Report highlighted worrying evidence of the prevalence of AI-generated content online. It cited a UK study in which 43% of people aged 16 and over said they had seen at least one deepfake (a video, voice imitation, or image) online in the previous six months.
One authentication measure designed to counter AI-generated fake content is ‘watermarking,’ a process that involves embedding a digital signature into the content at the time of creation. Although watermarking techniques have proven effective in helping people determine the origin and authenticity of digital media, sophisticated adversaries are becoming more adept at removing them.
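To make the idea concrete, the snippet below is a deliberately minimal sketch of one naive watermarking technique: hiding a signature in the least significant bits (LSBs) of an image's pixel values. The function names and the choice of LSB embedding are illustrative assumptions for this article, not how any particular platform or provenance standard actually watermarks content; production schemes rely on far more robust methods such as cryptographically signed content credentials.

```python
def embed_watermark(pixels, signature):
    """Hide signature bytes in the least significant bit of each pixel value."""
    # Expand the signature into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in signature for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the signature")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        # Clear the pixel's lowest bit, then set it to the signature bit.
        out[idx] = (out[idx] & ~1) | bit
    return out


def extract_watermark(pixels, length):
    """Recover `length` bytes of signature from the pixels' low-order bits."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        data.append(byte)
    return bytes(data)


# Embed a two-byte signature into a toy 64-pixel grayscale "image".
marked = embed_watermark([100] * 64, b"AI")
print(extract_watermark(marked, 2))  # b'AI'
```

The sketch also makes the article's caveat visible: because the mark lives in the lowest-order bits, any re-encoding or rounding of the pixel values silently destroys it, which is one reason adversaries can often strip naive watermarks.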
Comments Section
In the wake of the YouTube incident, notable industry figures have provided expert analysis.
Nicole Carignan, Senior Vice President and Field CISO at Darktrace, believes that with attackers' increasing adoption of new techniques, traditional threat-prevention methods are no longer sufficient to keep users safe. She advocates for organizations to “leverage AI-powered tools that can provide granular real-time environment visibility and alerting to augment security teams. Where appropriate, organizations should get ahead of new threats by integrating machine-driven response, either in autonomous or human-in-the-loop modes, to accelerate security team response.”
The view that traditional tools are increasingly inadequate against these contemporary AI-driven attacks is echoed by J Stephen Kowski, Field CTO at SlashNext Email Security. He observes that “Generative AI and LLMs are enabling attackers to create more convincing phishing emails, deepfakes, and automated attack scripts at scale. These technologies allow cybercriminals to personalize social engineering attempts and rapidly adapt their tactics, making traditional defenses less effective.”
Welcome to The Future
As deepfake technology continues to evolve and AI-generated content becomes more difficult to distinguish from authentic content, individuals, organizations, and governments need to equip themselves with tools to combat this evolving threat effectively. Raising awareness through educational initiatives and sharing informative articles about deepfakes are two great places to start.
Adam Parlett is a cybersecurity marketing professional who has been working as a project manager at Bora for over two years. A Sociology graduate from the University of York, Adam enjoys the challenge of finding new and interesting ways to engage audiences with complex cybersecurity ideas and products.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.