By 2027, AI agents are expected to cut the time needed to exploit account exposures by 50%, according to Gartner’s new report, “Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity.”
Jeremy D’Hoinne, VP Analyst at Gartner, says account takeover (ATO) remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered in a slew of ways, including data breaches, phishing, social engineering, and malware. “Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms.”
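To make the mechanics of that bot-driven credential stuffing concrete, here is a minimal detection sketch that flags a source IP failing logins against many distinct usernames inside a short window. It assumes login events arrive as (timestamp, IP, username, success) records; the thresholds and field names are illustrative, not product guidance.

```python
# Minimal credential-stuffing detector sketch. Assumes login events are
# available as (timestamp, source_ip, username, success) tuples; the
# thresholds are illustrative only.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300      # sliding window per source IP
MAX_DISTINCT_USERS = 20   # many distinct usernames from one IP is suspicious
MAX_FAILURES = 50         # raw failed-attempt ceiling inside the window

events_by_ip = defaultdict(deque)  # ip -> deque of (ts, username, success)

def record_login(ts: float, ip: str, username: str, success: bool) -> bool:
    """Record one login attempt; return True if the IP looks like a stuffing bot."""
    window = events_by_ip[ip]
    window.append((ts, username, success))
    # Drop events that have aged out of the sliding window.
    while window and ts - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    failures = [e for e in window if not e[2]]
    distinct_users = {e[1] for e in failures}
    return len(distinct_users) >= MAX_DISTINCT_USERS or len(failures) >= MAX_FAILURES

# Example: a bot spraying leaked credentials across many accounts.
now = time.time()
for i in range(60):
    flagged = record_login(now + i, "203.0.113.7", f"user{i}", success=False)
print("flagged:", flagged)  # True once the distinct-user threshold is crossed
```

Production defenses would layer signals like this with IP reputation, device fingerprinting, and breached-credential checks rather than relying on rate thresholds alone.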
According to the analysts, AI agents will facilitate automation across more aspects of ATO, from deepfake-powered social engineering to fully automated credential abuse. In response, vendors will need to develop products for web, app, API, and voice channels to detect, monitor, and classify interactions involving AI agents.
“In the face of this evolving threat, security leaders should expedite the move toward passwordless phishing-resistant MFA,” comments Akif Khan, VP Analyst at Gartner. “For customer use cases in which users may have a choice of authentication options, educate and incentivize users to migrate from passwords to multidevice passkeys where appropriate.”
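To see why passkeys resist phishing where passwords do not, consider the toy sketch below. It is not the WebAuthn protocol itself, only its core idea: the authenticator signs the server’s random challenge together with the origin it is actually talking to, so an assertion captured on a lookalike domain never verifies on the real site. It assumes the Python cryptography package; all names and origins are hypothetical.

```python
# Toy illustration of origin-bound, passkey-style challenge-response.
# NOT the WebAuthn protocol, just its core idea. Requires 'cryptography'.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Registration: the device keeps the private key; the site stores the public key.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """Client side: sign the challenge bound to the origin the browser sees."""
    return private_key.sign(challenge + origin.encode(), ec.ECDSA(hashes.SHA256()))

def verify_assertion(signature: bytes, challenge: bytes, expected_origin: str) -> bool:
    """Server side: verification fails unless the signed origin matches ours."""
    try:
        public_key.verify(signature, challenge + expected_origin.encode(),
                          ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)
good = sign_assertion(challenge, "https://bank.example")
bad = sign_assertion(challenge, "https://bank-example.phish")  # relayed by a phishing page
print(verify_assertion(good, challenge, "https://bank.example"))  # True
print(verify_assertion(bad, challenge, "https://bank.example"))   # False
```

Because nothing reusable, neither a password nor a one-time code, ever leaves the device, there is nothing for a phishing page to harvest and replay.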
The Rise of Social Engineering Attacks
Technology-driven social engineering is also expected to become a major threat. The analysts forecast that by 2028, 40% of social engineering attacks will target the broader workforce in addition to executives.
Malicious actors are increasingly combining social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to fool employees during calls. Detecting deepfakes remains a major challenge, particularly across the many attack surfaces of real-time voice and video communications.
Although only a few high-profile cases have emerged so far, Gartner says these incidents have demonstrated the credibility of the threat and led to significant financial losses for affected entities.
“Organizations will have to stay abreast of the market, and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques,” adds Manuel Acosta, Sr. Director Analyst at Gartner. “Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step.”
Faster, Cheaper, More Convincing
James Scobey, Chief Information Security Officer at Keeper Security, says deepfakes are a particular concern because AI models make these attack methods faster, cheaper, and more convincing. As attackers grow more sophisticated, stronger and more dynamic identity verification methods, such as multi-factor authentication (MFA) and biometrics, will be vital to defending against these increasingly nuanced threats; MFA in particular is essential for preventing account takeovers.
“Generative AI will play a dual role in the identity threat landscape this year. On one side, it will empower attackers to create more sophisticated deepfakes – whether through text, voice or visual manipulation – that can convincingly mimic real individuals. These AI-driven impersonations are poised to undermine traditional security measures, such as voice biometrics or facial recognition, which have long been staples in identity verification. Employees will, more and more frequently, receive video and voice calls from senior leaders in their organization, telling them to grant access to protected resources rapidly. As these deepfakes become harder to differentiate from reality, they will be used to bypass even the most sophisticated security systems,” Scobey adds.
Lowering the Skill Barrier to Entry
The challenge now is that AI lowers the skill barrier to entry and speeds up the production of higher-quality fakes, says Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace. “Since deepfakes are getting harder to detect, it is imperative to turn to AI-augmented tools for detection, as people alone cannot be the last line of defense.”
To combat emerging challenges from AI-driven attacks, she says organizations should leverage AI-powered tools that provide granular, real-time visibility into the environment and alerting that augments security teams. “Where appropriate, organizations should get ahead of new threats by integrating machine-driven response, either in autonomous or human-in-the-loop modes, to accelerate security team response. Through this approach, the adoption of AI technologies—such as solutions with anomaly-based detection capabilities that can detect and respond to never-before-seen threats—can be instrumental in keeping organizations secure.”
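As a concrete illustration of the anomaly-based detection Carignan describes, the sketch below trains scikit-learn’s IsolationForest on a baseline of simulated session telemetry, then flags departures from it. The feature set (login hour, upload volume, distinct hosts contacted) is an illustrative stand-in for real telemetry, not a prescription.

```python
# Minimal anomaly-detection sketch: learn a baseline of normal behavior,
# then flag never-before-seen patterns. Features are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: daytime logins, modest data volumes, few internal hosts touched.
normal = np.column_stack([
    rng.normal(11, 2, 500),    # login hour of day
    rng.normal(50, 15, 500),   # MB uploaded per session
    rng.normal(3, 1, 500),     # distinct hosts contacted
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new activity: one typical session and one 3 a.m. bulk exfiltration.
sessions = np.array([
    [10.5, 45.0, 3.0],
    [3.0, 900.0, 40.0],
])
for session, verdict in zip(sessions, model.predict(sessions)):
    print(session, "anomalous" if verdict == -1 else "normal")
```

The design point is the one she makes: because the model learns what is normal for this environment rather than matching known signatures, it can surface attacks no one has catalogued yet.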
AI is set to have a major impact on security teams and will alter existing operations, continues Carignan. “However, when applied responsibly and with the right programmatic approach, AI will help upskill the cyber workforce, rather than deskill it. The use of AI will enable security leaders and teams to apply bespoke security strategy implementations aligned to their risk concerns and priorities, as well as free up the cybersecurity workforce to pivot to areas that are more difficult and complex.”
The Cyber-Arms Race
Cybersecurity has always been an arms race: bad actors use the latest technologies to exploit new victims in innovative ways, while defenders, often armed with the same new technologies, try to stay a step ahead of threats before and as they emerge, says Andrew Bolster, Senior R&D Manager at Black Duck.
“AI tooling is just such a technology, and both sides are applying it to identify, target, and execute (or detect, defend, and protect people from) these threats. This recent family of masquerading attacks is blurring the line between ‘vulnerability engineering’ and ‘social engineering’, playing on the human element to get around even the most rigorous security controls that operators put in place,” Bolster adds.
As with any social engineering attack, it is best to lean on strong security practices, even if it comes at an apparent cost of time, Bolster says, adding that it’s better to let an invoice be paid late than to compromise the company because your boss messaged you on LinkedIn ‘because they forgot their work phone’.
Overwhelming Human Teams
Automation is absolutely driving AI adoption in security, particularly for threat detection, email security, and real-time analysis of user behaviors that would overwhelm human teams, adds J Stephen Kowski, Field CTO at SlashNext. “The most successful implementations automate the identification of advanced phishing, business email compromise, and account takeover attempts—detecting threats before they reach users and eliminating the need for manual investigation of every suspicious message.”
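One of the simplest BEC checks that can be automated along the lines Kowski describes is catching display-name spoofing, where an executive’s name fronts an external mailbox. The sketch below uses only Python’s standard library; the executive roster and corporate domain are hypothetical.

```python
# Sketch of one automated BEC check: flag messages whose display name
# matches a known executive while the address is external.
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"
EXECUTIVES = {"jane doe", "john smith"}  # known senior-leader display names

def looks_like_display_name_spoof(from_header: str) -> bool:
    """True when an executive's name fronts a non-corporate address."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EXECUTIVES and domain != INTERNAL_DOMAIN

print(looks_like_display_name_spoof('"Jane Doe" <jane.doe@example.com>'))  # False
print(looks_like_display_name_spoof('"Jane Doe" <ceo.urgent@gmail.com>'))  # True
```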
The main worry isn’t that AI is after our jobs, Kowski concludes, but that entities are implementing AI solutions without proper oversight. This creates blind spots where security teams believe threats are being caught, while sophisticated attacks slip through security nets that aren’t continuously learning from new threat patterns.
Information Security Buzz News Editor
Kirsten Doyle has been in the technology journalism and editing space for nearly 24 years, during which time she has developed a great love for all aspects of technology, as well as words themselves. Her experience spans B2B tech, with a lot of focus on cybersecurity, cloud, enterprise, digital transformation, and data centre. Her specialties are in news, thought leadership, features, white papers, and PR writing, and she is an experienced editor for both print and online publications.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.