UCL Deepfake Threat Report, Industry Expert Reaction

By ISBuzz Team, Writer, Information Security Buzz | Aug 04, 2020 09:03 am PST

UCL has published a report showing that fake audio or video content ranks as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism. Industry experts have shared their views on how organisations can get ahead of this threat.

Joe Bloemendaal, Head of Strategy | August 4, 2020 5:06 pm

Deepfake technology is one of the biggest threats to our lives online right now, and UCL's report shows that deepfakes are no longer limited to dark corners of the internet. The technology has already been used to impersonate politicians, business leaders and A-listers, hitting the UK political scene in last year's general election. Now we can expect to see deepfakes playing a major role in financial crime, as fraudsters try to use deepfaked identities to open new accounts or access existing ones.

In a case of fighting fire with fire, banks and fintechs are turning to AI technologies to combat the threat. AI can be trained to spot even the deepest of fakes, thanks to facial recognition and behavioural biometrics. If a deepfake is suspected, verification systems can instead turn to a unique digital 'fingerprint': a record built up from how a person types and the sites they visit. This can then be used to verify a user's identity or stop unauthorised access to devices or documents.
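To illustrate the kind of signal a behavioural 'fingerprint' can rely on, here is a minimal sketch of a keystroke-timing check. It is purely illustrative, not any vendor's production system: the timing values, tolerance and matching rule are assumptions for the example.

```python
# Toy behavioural-biometric check based on keystroke rhythm.
# Illustrative only; real systems combine many more signals.
from statistics import mean

def typing_profile(sessions):
    """Enrol a user: average inter-key interval (ms) across known-genuine sessions."""
    return mean(mean(session) for session in sessions)

def matches_profile(profile_ms, new_session, tolerance_ms=40.0):
    """Return True if a new session's typing rhythm is close to the enrolled profile."""
    return abs(mean(new_session) - profile_ms) <= tolerance_ms

# Enrolment data: inter-key intervals (ms) from sessions known to be the genuine user.
enrolled = typing_profile([[110, 95, 130, 120], [105, 100, 125, 115]])

# Verification: a session with a very different rhythm fails the check and can be escalated.
print(matches_profile(enrolled, [112, 98, 127, 118]))  # True  -> consistent with the user
print(matches_profile(enrolled, [40, 42, 41, 43]))     # False -> step up verification
```

In practice such a check would be one factor among many, alongside facial recognition and device or browsing signals, rather than a standalone decision.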

Incorporating advanced machine learning takes this one step further, constantly capturing the latest fraudulent applications and learning how to spot new fakes. With fraudsters emboldened by advances in AI and automation, adopting the latest biometric and AI technologies is critical to keeping them out. Only then can banks and financial institutions get ahead of the threat.
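The continuous-learning idea can be sketched with an incrementally updated classifier. The snippet below is a minimal illustration under assumed, hypothetical features (for example image-forensics or device-risk scores); it is not the approach any particular bank uses.

```python
# Sketch of incremental learning over newly confirmed fraudulent applications,
# using scikit-learn's partial_fit so the model updates without full retraining.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss", random_state=0)

# Initial labelled batch: 1 = fraudulent/deepfaked application, 0 = genuine.
X_first = np.array([[0.9, 0.2], [0.1, 0.8], [0.85, 0.3], [0.2, 0.9]])
y_first = np.array([1, 0, 1, 0])
clf.partial_fit(X_first, y_first, classes=[0, 1])

# As new confirmed fraud cases arrive, fold them in incrementally.
X_new = np.array([[0.7, 0.4], [0.15, 0.95]])
y_new = np.array([1, 0])
clf.partial_fit(X_new, y_new)

# Score a fresh application; a positive prediction would be flagged for review.
print(clf.predict([[0.8, 0.25]]))
```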

