Last month, employees at the UK-based engineering firm Arup were tricked by a deepfake video call impersonating the company’s CFO into transferring $25 million to cybercriminals. This isn’t an anomaly. It’s further proof that social engineering has become cybersecurity’s most costly problem.
Today, more cybercriminals are launching AI-powered social engineering attacks targeting finance teams and executives, particularly those in vendor-facing roles with access to funds and the authority to modify payment details and approve wire transfers. Whether it’s an AI-generated phishing campaign, a fraudulent invoice slipped into the payment process, or a deepfake impersonation, the success rate of these attacks is frightening: experts, including those at the security firm KnowBe4, report that social engineering accounts for approximately 70-90% of all successful cyberattacks.
A huge factor behind the success of these attacks is the use of Generative AI (Gen AI). Another is that many businesses still view social engineering through an email-only lens. For years, phishing and business email compromise (BEC) have been the most prominent forms of social engineering, with BEC first emerging around 2010 and officially recognized as a distinct threat by the FBI in 2013.
Then, in 2024, everything shifted. That’s when easy-to-use AI tools and models became widely available. These tools lowered the bar, allowing malicious actors with little to no coding skill to launch sophisticated social engineering attacks targeting an organization’s entire payment process, workflows, and decision-making chains.
Here is what finance executives are facing today.
Deepfake Attacks & Executive Impersonation
In what seems like the blink of an eye, cybercriminals have embraced advanced techniques like Gen AI to clone voices and produce deepfake videos: lifelike recreations of company executives that can be undetectable to the human eye. A February 2025 report found that 68% of the deepfake content analyzed was nearly indistinguishable from genuine media. This helps explain why Deloitte predicts fraud losses in the U.S. will soar to $40 billion by 2027, up from the $12.3 billion stolen in 2023.
Exploitation of Trust
If you examine real-life social engineering attacks, you may pick up on a common theme—criminals tend to impersonate senior executives. That’s because victims are more likely to trust leadership and, as a result, bypass standard review protocols for what appears to be an urgent or high-priority request.
Pressure and Urgency
If impersonating a senior executive isn’t enough, attackers often tap into human psychology by adding urgency to their request. This false sense of urgency impairs the victim’s ability to think critically while adding a layer of anxiety that makes it far harder to spot inconsistencies.
Weak Vendor Verification
Attackers impersonate trusted entities such as vendors, suppliers, or company executives, crafting fraudulent invoices that slip past weak vendor verification systems: systems that depend on manual review, lack adequate identity verification, and often run on outdated vendor data. These invoices use fake email addresses, or spoof legitimate ones, to appear authentic. It’s also common for criminals to steal email login credentials, which lets them send invoices from an actual employee’s account and bypass traditional authentication with ease.
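To make the gap concrete, consider what even a basic automated cross-check against the vendor master file would catch. The sketch below is illustrative only: the record fields and the VendorRecord and check_invoice names are hypothetical, not any product’s API.

```python
# Minimal sketch of an invoice sanity check against a vendor master file.
# All names (VendorRecord, check_invoice) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VendorRecord:
    name: str
    email_domain: str   # domain registered during vendor onboarding
    iban: str           # bank account verified at onboarding

def check_invoice(vendor: VendorRecord, sender_email: str, invoice_iban: str) -> list[str]:
    """Return a list of red flags; an empty list means the basic checks passed."""
    flags = []
    # Spoofed or look-alike sender domains are a common BEC tactic.
    if sender_email.rsplit("@", 1)[-1].lower() != vendor.email_domain.lower():
        flags.append("sender domain does not match vendor record")
    # A changed bank account on an otherwise routine invoice is the classic
    # payment-diversion signal and should trigger out-of-band verification.
    if invoice_iban != vendor.iban:
        flags.append("bank account differs from verified account on file")
    return flags

acme = VendorRecord("Acme Ltd", "acme.com", "GB33BUKB20201555555555")
print(check_invoice(acme, "billing@acrne.com", "GB94BARC10201530093459"))
# -> both flags fire: look-alike domain ("acrne.com") and a new bank account
```

Note that this only covers the easy cases; it does nothing against a legitimate but compromised employee account, which is why the AI-driven measures below matter.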
Strengthening Fraud Prevention with AI-Powered Security
So, where does this leave businesses and finance teams that are clearly overmatched? To combat these threats, businesses must implement AI-driven fraud prevention solutions that leverage Behavioral AI to analyze transaction patterns, detect anomalies, and stop fraud before it occurs. Key capabilities include:
Comprehensive, AI-Driven Fraud Detection
Fraud is no longer just an email issue. Cybercriminals exploit weaknesses across email, payments, and vendor interactions. Businesses need security solutions that integrate these data points to detect irregular patterns in real time—monitoring workflows, approvals, and behavioral changes to stop fraud before funds are transferred.
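As one sketch of the anomaly-detection piece, the example below trains scikit-learn’s Isolation Forest on synthetic transaction features. The feature choices and contamination rate are illustrative assumptions; a real deployment would fold in email and vendor signals as well.

```python
# Minimal sketch of transaction anomaly detection with an Isolation Forest
# (scikit-learn). Features and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history: [amount_usd, hour_of_day, days_since_vendor_last_paid]
normal = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),   # typical invoice amounts
    rng.normal(14, 2, 1_000),          # payments run mid-afternoon
    rng.normal(30, 7, 1_000),          # roughly monthly vendor cadence
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large, after-hours payment to a vendor already paid yesterday:
suspicious = np.array([[48_000, 23, 1]])
print(model.predict(suspicious))            # -> [-1], flagged as an outlier
print(model.decision_function(suspicious))  # lower score = more anomalous
```

The design point is that no single feature here is damning on its own; it is the combination across payment timing, size, and vendor cadence that surfaces the fraud.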
Proactive Monitoring of High-Risk Roles
Finance teams, executives, and vendor managers are prime targets. Security teams must continuously track behavioral shifts and unusual transactions (unexpected login locations, altered typing patterns, frequent changes to payee bank accounts) alongside emerging deepfake threats.
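One simple way to operationalize this is a per-user baseline of known-good attributes that accrues risk points on deviation. The fields, weights, and score_event helper below are assumptions for illustration, not a vetted scoring scheme.

```python
# Minimal sketch of per-user baseline tracking for high-risk roles.
# Event fields and risk weights are illustrative assumptions.
from collections import defaultdict

baselines = defaultdict(lambda: {"countries": set(), "payee_accounts": set()})

def score_event(user: str, country: str, payee_account: str) -> int:
    """Accumulate simple risk points for deviations from the user's history."""
    base, risk = baselines[user], 0
    if base["countries"] and country not in base["countries"]:
        risk += 2   # login from a never-before-seen country
    if base["payee_accounts"] and payee_account not in base["payee_accounts"]:
        risk += 3   # payment to a never-before-seen account
    base["countries"].add(country)
    base["payee_accounts"].add(payee_account)
    return risk

score_event("cfo", "GB", "GB33BUKB20201555555555")       # seeds baseline -> 0
print(score_event("cfo", "NG", "LT121000011101001000"))  # new country + new account -> 5
```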
Holistic Verification Beyond Email
Outdated manual verification isn’t enough. To prevent unauthorized transactions, businesses must validate all payment requests using multi-factor authentication, vendor verification, and AI-based fraud scoring.
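To make the idea concrete, here is a minimal sketch of risk-based step-up verification, assuming a fraud score already produced by an upstream model; the thresholds and check names are placeholders, not recommendations.

```python
# Minimal sketch of risk-based payment verification: low-risk requests pass
# with baseline controls, higher-risk ones require step-up checks.
# Scores and cut-offs are illustrative assumptions.

def required_checks(fraud_score: float, amount_usd: float) -> list[str]:
    """Map a model's fraud score plus amount to the verification steps to enforce."""
    checks = ["multi-factor authentication"]  # baseline for every payment
    if fraud_score > 0.3 or amount_usd > 10_000:
        checks.append("callback to vendor on the number held on file")
    if fraud_score > 0.7 or amount_usd > 100_000:
        checks.append("second approver outside the requester's chain")
    return checks

print(required_checks(fraud_score=0.75, amount_usd=25_000))
# -> MFA, vendor callback, and an independent second approver
```

Routing verification effort by risk keeps friction low on routine payments while forcing exactly the out-of-band checks that defeat a spoofed email or deepfaked call.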
Real-Time Alerts & Adaptive Threat Detection
Fraud prevention must be proactive, not reactive. AI-powered systems should deliver real-time alerts, risk-based authentication, and adaptive anomaly detection to neutralize threats before they cause damage. By leveraging AI and behavioral analysis, businesses can stay ahead of evolving fraud tactics and safeguard their financial operations.
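As one possible shape for “adaptive,” the sketch below keeps an exponentially weighted estimate of each account’s normal payment size and alerts when a transaction lands several standard deviations out. The decay factor and threshold are illustrative assumptions.

```python
# Minimal sketch of adaptive alerting: an exponentially weighted mean/variance
# tracks an account's "normal" and flags transactions several sigma out.
# Decay factor and threshold are assumptions, not tuned values.
import math

class AdaptiveAlert:
    def __init__(self, decay: float = 0.05, threshold_sigma: float = 4.0):
        self.decay, self.threshold = decay, threshold_sigma
        self.mean, self.var = 0.0, 1.0

    def observe(self, amount: float) -> bool:
        """Return True if the amount should raise a real-time alert."""
        z = abs(amount - self.mean) / math.sqrt(self.var)
        alert = z > self.threshold
        # Update the baseline so "normal" adapts as behavior drifts.
        self.mean += self.decay * (amount - self.mean)
        self.var += self.decay * ((amount - self.mean) ** 2 - self.var)
        return alert

monitor = AdaptiveAlert()
for amt in [5_000, 5_200, 4_800, 5_100]:
    monitor.observe(amt)        # seeds the baseline (cold-start alerts ignored)
print(monitor.observe(48_000))  # -> True: alert before the funds move
```

Because the baseline itself keeps moving, the detector adapts to legitimate drift in payment behavior instead of relying on a static rule an attacker can learn and sidestep.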
Final Thoughts
AI-powered fraud is not a future threat; it’s happening now, and businesses leaning on traditional security methods are already behind. With deepfake attacks growing in sophistication and scale, closing the gap and getting the upper hand must be the priority for businesses. That means transitioning from reactive security approaches to proactive fraud prevention that stops AI-powered social engineering and protects a business’s most valuable asset: its money.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.