Why Deepfake Technology Isn’t Just Confined To The Political Sphere

By Labhesh Patel
CTO and Chief Scientist, Jumio | Nov 06, 2020 03:11 am PST

The 2020 U.S. presidential election once again brought to light the emerging threat of deepfakes, a concern that has also been expressed by the World Economic Forum. Prior to Election Day, the organisation stated that deepfakes could have an “unprecedented impact on this election.” Election results aside, over the last few years, deepfakes have infiltrated many areas of society, from pornography to politics. While some are used in more sinister ways than others, the political ramifications of deepfake videos are deeply concerning.

A deepfake uses AI, typically deep learning techniques such as autoencoders or generative adversarial networks, to superimpose a person’s likeness onto existing footage, closely replicating both their face and voice. Essentially, a deepfake can impersonate a real person, making them appear to say words they have never spoken. Worryingly, the number of deepfake videos online has grown by 330% since July 2019 to nearly 50,000. While deepfake technology is something that, for now, the public and businesses may have only read about in other industries, there is a very real possibility that fraudsters will weaponise it for their own gain, bringing this modern-day threat to their own doorsteps.

Political disinformation: a historical norm

Before looking at the potential for deepfakes to infiltrate other areas, it is worth briefly revisiting the roots of visual disinformation in political history. For many years, politicians and leaders have used disinformation to deceive and manipulate the masses towards a specific outcome. In the 20th-century Soviet Union under Joseph Stalin, doctored photographs, an analogue precursor to today’s deepfakes, were widely used to misinform the public. Stalin went further, fabricating a veneer of civil society, staging trials, elections and trade unions, to support his own narrative in the heavily censored Soviet media.

In more recent times, widely shared deepfake videos of Barack Obama marked the onset of mass public awareness of deepfakes and their potential to spread misinformation on a substantial scale. Going beyond a mere hoax, the use of the technology came under serious scrutiny. Looking at the proliferation of fake political news on social media today, we can see parallels between Stalin’s methods and modern deception. The difference now is that this level of disinformation is no longer the preserve of those in power; the public has access to it too. Ordinary members of the population with the ability to use this technology could now sway the public vote through the proliferation of deepfakes on social media.

With the 2020 presidential election, there was concern that convincing deepfakes could distort public perception far more than conventional misinformation. The fear was that they could be used to fabricate false intelligence, damage the public’s view of law enforcement, or undermine confidence in the validity of the election results.

The broader implications

Beyond their long-term impact on how societies operate, deepfakes do more than hoodwink individuals into believing certain views. If weaponised by fraudsters, they can also have direct monetary ramifications for both individuals and organisations.

Wherever substantial amounts of capital are held, repeated attempts to defraud the gatekeeper should be expected. Take the financial services sector as an example. Banks store vast amounts of customer data, and a breach of this information, or of customer assets, can have detrimental effects on everyone involved. When data is breached, consumers can lose assets if cybercriminals access their accounts and drain their funds. The consumer also loses faith in the institution and is unlikely to recommend the bank to a friend or colleague. But arguably far worse is the impact on the organisation itself: it runs the risk of having to replace customer funds, incurring regulatory penalties and losing public trust in its service, any of which can lead to the demise of a company.

It’s no wonder that banks and other organisations alike are now on the lookout for robust cybersecurity solutions to minimise the probability of a breach. For example, many banks require a government-issued ID and a selfie to establish a person’s digital identity when a new account is created online. But this is where the threat of deepfake technology makes an appearance: a criminal could use the technology to create a spoofed video and bypass the selfie requirement when opening new online accounts.
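To make the exposure concrete, here is a minimal sketch of the selfie-to-ID matching step using the open-source face_recognition library. The file names are hypothetical placeholders, and the sketch deliberately performs only a static face comparison, which is precisely the kind of check a replayed deepfake frame could defeat.

```python
# Minimal sketch: match a new-account selfie against the photo on a
# government-issued ID, using the open-source face_recognition library.
# File names are placeholders; real onboarding flows add many more checks.
import face_recognition

id_image = face_recognition.load_image_file("id_document_photo.jpg")
selfie_image = face_recognition.load_image_file("account_selfie.jpg")

# Encode the first face found in each image as a 128-dimensional vector.
id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face found in one of the images")

# compare_faces returns [True] if the two faces fall within tolerance.
same_person = face_recognition.compare_faces(
    [id_encodings[0]], selfie_encodings[0], tolerance=0.6)[0]

print("Match" if same_person else "No match")
# Note: a static comparison like this verifies the face, not that a live
# person is present, and so offers no defence against a deepfake video.
```

The weakness is apparent: nothing in this flow asks whether the camera is pointed at a living person, which is the gap liveness detection is designed to close.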

Arming against the threat

Deepfakes will only continue to become more advanced. The technology can now not only impersonate real people, but also generate entirely synthetic people who look and sound real yet are completely fabricated. These so-called “fake faces” have already been identified in bot campaigns originating from China and Russia.

With deepfakes already able to bypass security measures, and before even considering the damage “fake faces” could cause, it’s clear that more sophisticated identity verification solutions are required. The answer is to invest in solutions with embedded liveness detection that can spot spoofing attacks.

Liveness detection typically requires the user to perform an action: making eye movements, nodding their head, or repeating words or numbers. In some cases deepfakes can circumvent these challenges, so it’s important to choose a solution that can reliably discern real videos and selfies from deepfakes masquerading as legitimate ones.
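As a rough illustration of such an active challenge, here is a minimal sketch of a blink check built on OpenCV’s bundled Haar cascades. This is an illustrative assumption, not a production design: commercial liveness systems rely on far more robust signals such as 3D depth, skin texture and motion analysis, and the frame counts and detector thresholds below are arbitrary.

```python
# Minimal sketch of an active liveness challenge: ask the user to blink
# and watch the webcam for eyes disappearing and reappearing.
# Uses only OpenCV's bundled Haar cascades; thresholds are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_challenge(num_frames=150):
    """Prompt the user to blink; return True if a blink is observed."""
    cap = cv2.VideoCapture(0)  # default webcam
    eyes_were_open = False
    blink_seen = False
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]
            eyes = eye_cascade.detectMultiScale(roi, 1.3, 5)
            if len(eyes) >= 2:
                eyes_were_open = True
            elif eyes_were_open:
                # Eyes were visible earlier but not now: treat as a blink.
                blink_seen = True
    cap.release()
    return blink_seen

if __name__ == "__main__":
    print("Live user detected" if blink_challenge() else "Possible spoof")
```

Even a simple challenge like this raises the bar for a pre-recorded deepfake, but as noted above, real-time face-swapping tools can respond to prompts, which is why stronger solutions combine active challenges with passive anti-spoofing analysis.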

Deepfakes are becoming so ingrained in our society that we may soon find them adding us as friends on Facebook, following us on Twitter or speaking on YouTube as members of the public. As this threat becomes more and more prolific, it is vital that companies deploy the best liveness detection available to protect themselves from this rapidly advancing technology.
