DeepFakeNess

By Professor John Walker
Visiting Professor, Nottingham Trent University (NTU) | Oct 11, 2019 02:17 am PST

On the 8th of October the BBC ran the concluding episode of the drama ‘Capture’, starring the impeccable Holliday Grainger (see Fig 1) – a drama which introduced the armchair viewer to the world of the Intelligence Services, the fight against terror, and the manipulation of images to subvert the power of prima facie evidence. Sharp-eyed viewers will also have noticed the use of DeepFake imagery in this final episode – the question is, what is DeepFake, how is it used, and what are the overall implications?

Fig 1 – Capture

What is DeepFake – How is it Used: DeepFake is a conjoined methodology and application based on neural-network AI/DL (Artificial Intelligence/Deep Learning). It takes as input multiple scans of a presented facial aspect, performs multi-faceted analysis, and produces a quality end-product which may be overlaid on the original image within the video under conversion.
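As an illustration of that overlay stage, here is a minimal sketch assuming OpenCV in Python: it scans each frame for a face region, swaps in a synthesised face, and blends the result back into the video. The synthesise_face() helper, the file names source.mp4 and deepfake_out.mp4, and the Haar-cascade detector are all illustrative assumptions – real DeepFake tooling relies on learned face encoders/decoders rather than this placeholder.

```python
# Hedged sketch of the frame-by-frame overlay step: detect a face, replace it
# with a synthesised face, blend it back into the frame, and write the video out.
import cv2
import numpy as np

def synthesise_face(face_region):
    """Hypothetical stand-in for the trained face-generation model.
    Here it simply returns a blurred copy so the script runs end-to-end."""
    return cv2.GaussianBlur(face_region, (21, 21), 0)

# Standard OpenCV frontal-face detector (assumed available with opencv-python)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("source.mp4")   # the video under conversion (assumed name)
out = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        fake = synthesise_face(frame[y:y + h, x:x + w])
        # Blend the generated face over the original region (Poisson cloning)
        centre = (x + w // 2, y + h // 2)
        mask = 255 * np.ones(fake.shape[:2], dtype=np.uint8)
        frame = cv2.seamlessClone(fake, frame, mask, centre, cv2.NORMAL_CLONE)
    if out is None:
        h_f, w_f = frame.shape[:2]
        out = cv2.VideoWriter("deepfake_out.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), 25, (w_f, h_f))
    out.write(frame)
cap.release()
if out is not None:
    out.release()
```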

Such is the power of this DeepFake technology that all the handcrafting is removed from the human operator, with the AI/DL doing the legwork to considerable accuracy. However, the power of DeepFake does not stop there. Two key components work hand-in-hand within this technology: the ‘Generator’ and the ‘Discriminator’, the AI/DL components which make up the ‘GAN’ (Generative Adversarial Network). These two powerful and intelligent components are employed to move toward the manifestation of falsified perfection: the ‘Generator’ produces the end-product, while the ‘Discriminator’ conducts what is ostensibly a QA check. Should the ‘Discriminator’ assess the end-product as falling short of quality expectations, the AI/DL will learn from the failure and feed the acquired knowledge back into the engine. Augment this with the ability to build in lip-synced artificial voices, and the end-product can be a very realistic falsified representation of truth which will fool most viewers.
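To make the Generator/Discriminator interplay concrete, below is a minimal GAN sketch assuming PyTorch and toy 32x32 image crops. The layer sizes, the LATENT dimension, and the train_step() helper are illustrative assumptions rather than the internals of any particular DeepFake application; the point is the adversarial loop in which the Discriminator’s ‘QA’ verdict becomes the learning signal for the Generator.

```python
# Minimal GAN sketch (PyTorch assumed) illustrating the adversarial loop
# described above. Toy 32x32, 3-channel images; sizes are illustrative.
import torch
import torch.nn as nn

LATENT = 100  # size of the random noise vector fed to the Generator (assumed)

class Generator(nn.Module):
    """Produces a candidate (fake) image from random noise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # 3-channel 32x32 output
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores an image as real (1) or generated (0) -- the 'QA' role."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 1, 8, 1, 0), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x).view(-1)

def train_step(gen, disc, real_batch, g_opt, d_opt, loss=nn.BCELoss()):
    """One adversarial round: the Discriminator learns to reject fakes,
    then the Generator learns from that 'failure' signal."""
    b = real_batch.size(0)
    noise = torch.randn(b, LATENT, 1, 1)
    fake = gen(noise)

    # Discriminator update: push real images toward 1, generated ones toward 0
    d_opt.zero_grad()
    d_loss = (loss(disc(real_batch), torch.ones(b)) +
              loss(disc(fake.detach()), torch.zeros(b)))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the Discriminator score its fakes as real
    g_opt.zero_grad()
    g_loss = loss(disc(fake), torch.ones(b))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```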

In the Cyber Security and Digital Forensics courses which I run in the Middle East, I have for some time been employing GAN-generated images to support class-work – and below at Fig 2 is one such example which is floating around the Internet as I write.

Fig 2 – GAN-Generated Image

The worrying factor here, of course, is that this is open-source technology which is not that hard to locate and acquire from the families of DeepFake tool/application producers, so the risk grows with an unknown and expanding end-user base – and of course, to what end?

Implications: The overall implications of technology such as DeepFake ultimately come down to trust. We have all come to terms with the advent of Fake News, and have seen the implications in real time where it has been implied that the outcomes of elections have been driven by outside powers to leverage a desirable result. We have seen the world of geopolitics fall under the hand of unscrupulous dictators who wish to mask reality with fiction. But here we see the very facet of reality being replaced with, well, what looks like reality! The question of course is, the next time a high-profile (or even low-profile) trial presents first-hand CCTV video as best evidence, is that artifact to be trusted? Should it be challenged and discredited as being potentially manipulated? We thus enter yet another era in which we may no longer comment ‘Seeing is Believing’ – a picture can tell a thousand words or utter a million lies!
