Deepfake Videos: When Good Tech Goes Bad

By ISBuzz Team, Writer, Information Security Buzz | Nov 08, 2019 06:11 am PST

More than a decade ago, the leading UK investigative journalist Nick Davies published Flat Earth News, an exposé of how the mass media had abdicated its responsibility to the truth. Newsroom pressure to publish more stories, faster than the competition, had, Davies argued, turned journalists into mere “churnalists”, rushing out articles too quickly ever to verify what they were reporting.

Shocking as Davies’ revelations seemed back in 2008, they look tame by today’s standards. We now live in a post-truth world of Fake News and ‘alternative facts’, one where activists no longer merely manipulate the news agenda with PR but use advanced technology to fake images and footage. A particularly troubling aspect of these ‘deepfake’ videos is their use of artificial intelligence to fabricate people saying or doing things with almost undetectable accuracy.

The result is that publishers risk running completely erroneous stories – as inaccurate as stating that the world is flat – with little or no ability to check their source material and confirm whether it is genuine. The rise of unchecked fakery has serious implications for our liberal democracy and our ability to understand what’s truly going on in the world. And while technology has an important role in defeating deepfake videos, we all have a responsibility to change the way we engage with the ‘facts’ we encounter online.

Faking the news

The technology to manipulate imagery has come a long way since Stalin had people airbrushed out of history. Creating convincing yet fake digital content no longer requires advanced skills or a well-resourced (mis)information bureau. Anyone with a degree of technical proficiency can create content that will fool even the experts. 

Take the faked footage of Nancy Pelosi earlier this year, which was doctored to make her look incoherent and was viewed two and a half million times before Facebook took it down. This story shows how social media is giving new life to the old aphorism that “a lie can go halfway around the world before the truth has a chance to put its boots on”. 

The propagation of lies and misinformation is immeasurably enhanced by platforms like Twitter and Facebook that enable virality. What’s more, the incentives for creating fake content now favour malicious actors, with clear economic and political advantages for disseminating false footage. Put simply, the more shocking or extreme the content, the more people will share it and the longer they will stay on the platform. 

Meanwhile, counterfeiters can manipulate the very tools being developed to detect and mitigate deepfake content. Just as the security industry inadvertently supplies software that can be misused for cybercrime, so we risk the emergence of a parallel media industry – one focused on obfuscation and lies. 

Combating the counterfeiters 

None of this is inevitable. There are plenty of advanced tools for detecting faked content, including machine learning algorithms that analyse an individual’s distinctive style of speech and movement, known as a “softbiometric signature”. Researchers from UC Berkeley and the University of Southern California used this technique to identify deepfakes – including face swaps and ‘digital puppets’ – with at least 92% accuracy.
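To give a flavour of how a softbiometric check might work, here is a minimal sketch, assuming each clip has already been reduced to a fixed-length vector of mannerism features (for instance, correlations between facial action units and head movements – the published research uses its own feature set and pipeline, which this does not reproduce). It trains scikit-learn’s one-class SVM as a novelty detector on authentic footage of one speaker, then flags clips that don’t match:

```python
# Hypothetical sketch: flag clips whose mannerisms deviate from a known
# speaker's "softbiometric signature". Assumes clips have already been
# pre-processed into fixed-length feature vectors; feature extraction
# is not shown, and the placeholder data below is purely illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_signature(real_clip_features: np.ndarray) -> OneClassSVM:
    """Train a novelty detector on feature vectors from authentic clips."""
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    model.fit(real_clip_features)
    return model

def is_suspect(model: OneClassSVM, clip_features: np.ndarray) -> bool:
    """A clip predicted as an outlier (-1) doesn't match the signature."""
    return model.predict(clip_features.reshape(1, -1))[0] == -1

# Usage with placeholder data: 200 authentic clips, 190 features each.
rng = np.random.default_rng(0)
authentic = rng.normal(size=(200, 190))
detector = fit_signature(authentic)
print(is_suspect(detector, rng.normal(size=190)))           # likely False
print(is_suspect(detector, rng.normal(loc=3.0, size=190)))  # likely True
```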

Technologies such as AI, machine learning and generative adversarial networks are, of course, crucial in the fight against deepfakes, but it is just as important that we all learn to think critically about the content we view.

Sadly (but necessarily) we all need to get better at questioning the provenance of videos, articles and imagery. In many cases, this can be as simple as never sharing content that we haven’t actually read or watched ourselves – something that six in ten of us do. But we also need to interrogate what we consume, for example by investigating metadata. If you watch a video titled “Boris Johnson in Sri Lanka, June 2014”, it’s well worth checking that he actually was there that month.
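As a rough illustration of what “investigating metadata” can mean, here is a hedged sketch that reads a video file’s embedded creation date using ffprobe, the inspection tool that ships with FFmpeg. The file name is hypothetical, and container tags are trivially stripped or forged, so a plausible date proves nothing on its own – but a mismatch with the claimed date is a red flag worth digging into:

```python
# Hedged sketch: read a video's container-level metadata with ffprobe
# (ships with FFmpeg). Tags are easy to strip or forge, so treat the
# result as one clue among many, not proof of provenance.
import json
import subprocess
from typing import Optional

def creation_time(path: str) -> Optional[str]:
    """Return the container-level creation_time tag, if present."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    fmt = json.loads(result.stdout).get("format", {})
    return fmt.get("tags", {}).get("creation_time")

# Hypothetical usage: a tag like "2014-06-12T09:30:00Z" would at least
# be consistent with the video's claimed date; None or a very different
# date calls for more digging.
print(creation_time("johnson_sri_lanka.mp4"))
```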

Similarly, snippets can be misleading; it’s always worth watching extended or complete segments, as the Lincoln Memorial video incident shows. And always resist the temptation to pile on when something goes viral: taking the time to investigate content properly can save you from serious embarrassment or even prosecution for defamation. 

Sophisticated as deepfake technology has become, there are still some telltale signs that the vigilant viewer can use to identify footage that has been manipulated. These include infrequent or entirely missing eye-blinking; odd-looking lighting and shadows; discoloration, blurriness and distortion; and a failure to sync sound and video perfectly (“read my lips”).
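The blink heuristic, at least, is easy to make concrete. Below is a minimal sketch that counts blinks using dlib’s 68-point facial landmarks and the well-known eye aspect ratio (EAR); the landmark model file is a separate download, and the threshold and frame counts are rough assumptions rather than tuned values:

```python
# Rough sketch: count blinks via the eye aspect ratio (EAR). Unusually
# few blinks in long footage is one weak signal of manipulation.
# Assumes dlib plus its 68-point landmark model
# (shape_predictor_68_face_landmarks.dat, a separate download).
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply on a blink."""
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path, ear_threshold=0.2, min_frames=2):
    cap = cv2.VideoCapture(video_path)
    blinks, below = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            # Landmarks 36-41 are the left eye, 42-47 the right eye.
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]
            ear = (eye_aspect_ratio(pts[:6]) + eye_aspect_ratio(pts[6:])) / 2
            if ear < ear_threshold:
                below += 1            # eyes look closed this frame
            else:
                if below >= min_frames:
                    blinks += 1       # a completed blink
                below = 0
    cap.release()
    return blinks  # real speakers blink roughly 15-20 times a minute

print(count_blinks("suspect_clip.mp4"))  # hypothetical file name
```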

Unite for truth 

It may be everybody’s responsibility to check content before we share it widely on social platforms, but it will take much more than individual effort if we are to stamp out the scourge of deepfakes. 

At the moment, the odds are stacked firmly in the fakers’ favour. As digital forensics expert Hany Farid points out, for every person working on detecting misleading content there are 100 creating it.

Improved regulation of media platforms and other publishers will be key, with meaningful sanctions for those with a record of creating and disseminating fake content. Regulators are beginning to grope towards a solution, for example with the proposed DEEPFAKES Accountability Act in the US. Problematic as that legislation may be, it at least shows that lawmakers are aware of the problem and are committed to tackling it.

To counter the threat of deepfakes effectively, we need to see much better data sharing so that regulators and researchers can fully understand the nature of the challenge, build better solutions and craft truly effective regulations. 

This battle won’t be won quickly or easily, but it’s one that we all need to fight. Everyone can do their bit by remaining vigilant to the threat of faked content. If we train ourselves to think critically about everything we consume online, we can all help to stem the spread of counterfeit content.

So the next time you see a video that shocks or surprises you, do a little background digging and watch out for the signs of fakery before you share it – and tell your friends and followers whenever you find content that’s as fake as the claim that the world is flat.
