Will Meta’s AI Supercomputer Combat A New Breed Of Cyberfraud? Expert Reaction

Following the news that Meta plans to develop the “world’s most powerful AI supercomputer”, many are asking whether the language translation and image recognition it promises will really be able to spot fraudsters, fight spoofs and ensure the safety of users in the Metaverse.

Expert Comments

January 25, 2022
Alexey Khitrov
CEO
ID R&D


Fraud, fake news and malicious actors. There are many security risks posed by the metaverse, or by any environment where people hide behind avatars. We can’t be sure people are who they say they are unless the metaverse provider can stop bad actors from signing up; if it can’t, we’ll likely see misinformation infiltrate the metaverse too. People signing up under fake identities, hacking into your account and impersonating you, or young people easily working around age restrictions are all security risks.

To weed out the fakes from the real people, metaverse providers will need strong identity verification, both during sign-up and continuously as the platform is used. This can be achieved by deploying facial recognition technologies. Facial recognition has been at the heart of Metaverse announcements; Meta's rebrand came alongside the news that it will end the arguably unethical use of its facial recognition software on Facebook. With the eyes of the regulatory world upon them, any such technologies must have clear opt-in and opt-out features.
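To make that two-stage idea concrete, here is a minimal illustrative sketch in Python of enrolment at sign-up followed by periodic re-verification during a session. Every name in it (compute_face_embedding, MATCH_THRESHOLD and so on) is a hypothetical placeholder rather than any real metaverse or vendor API, and the embedding stub merely stands in for a real face-recognition model.

```python
# Sketch only: enrol a reference face at sign-up, then re-verify during the session.
# All names and thresholds are illustrative assumptions, not a real platform API.
from dataclasses import dataclass
from math import sqrt
from typing import List

MATCH_THRESHOLD = 0.8  # assumed similarity cut-off for a positive match


@dataclass
class EnrolledUser:
    user_id: str
    reference_embedding: List[float]  # stored once, at sign-up


def compute_face_embedding(image_bytes: bytes) -> List[float]:
    """Placeholder for a real face-embedding model (e.g. a CNN feature vector)."""
    # A deployed system would run a neural network here; this stub just maps
    # a few bytes into a small numeric vector so the example is runnable.
    return [((b * (i + 1)) % 255) / 255.0 for i, b in enumerate(image_bytes[:8])]


def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def enroll(user_id: str, signup_selfie: bytes) -> EnrolledUser:
    """Run once at sign-up: keep a reference embedding for later checks."""
    return EnrolledUser(user_id, compute_face_embedding(signup_selfie))


def verify_session(user: EnrolledUser, live_frame: bytes) -> bool:
    """Run periodically during a session: does the current face still match?"""
    score = cosine_similarity(user.reference_embedding,
                              compute_face_embedding(live_frame))
    return score >= MATCH_THRESHOLD


if __name__ == "__main__":
    alice = enroll("alice", b"signup-selfie-bytes")
    # Re-presenting the enrolment image reproduces the stored embedding -> True.
    print(verify_session(alice, b"signup-selfie-bytes"))
    # With a real embedding model, a different person's frame would score low
    # and the session check would fail.
```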

Biometrics are the key here. Biometrics provide value through the security they bring, which includes matching the person behind the avatar to a genuine identification document. Next, liveness technologies check that a real face is present in front of the camera and not spoofed in paper, mask or electronic form. This should go hand in hand with a non-intrusive, frictionless user experience that makes people’s lives easier (avoiding PINs, passwords and multi-step security checks). Finally, transparency must give control of the biometrics to their rightful owners: the people themselves.
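As a rough illustration of that onboarding flow, the sketch below pairs a selfie-to-document match with a liveness (presentation-attack) check before an account is approved. It assumes nothing about any real SDK: the helper names, thresholds and stub scores are invented purely for demonstration.

```python
# Hypothetical onboarding check: document match + liveness must both pass.
# Stubs stand in for the face matcher and liveness model a real system would use.
from dataclasses import dataclass


@dataclass
class OnboardingResult:
    document_match: bool
    is_live: bool

    @property
    def approved(self) -> bool:
        # Only link the avatar to a verified identity when both checks pass.
        return self.document_match and self.is_live


def match_selfie_to_document(selfie: bytes, document_photo: bytes) -> float:
    """Placeholder face-to-document comparison; returns a similarity in [0, 1]."""
    return 1.0 if selfie == document_photo else 0.0  # stub, not a real matcher


def liveness_score(selfie: bytes) -> float:
    """Placeholder passive-liveness model: a high score means a real face at the
    camera, a low score suggests a printed photo, mask or screen replay."""
    return 0.95  # stub value; a deployed model would score the actual frame


def onboard(selfie: bytes, document_photo: bytes,
            match_threshold: float = 0.9,
            liveness_threshold: float = 0.9) -> OnboardingResult:
    return OnboardingResult(
        document_match=match_selfie_to_document(selfie, document_photo) >= match_threshold,
        is_live=liveness_score(selfie) >= liveness_threshold,
    )


if __name__ == "__main__":
    result = onboard(b"selfie", b"selfie")
    print(result.approved)  # True only when both document match and liveness pass
```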

Today, all companies, and especially those in Big Tech, are challenged to build trust through stronger security while delivering convenience too. Facial recognition provides fast and highly reliable authentication for verifying that people are who they say they are, while liveness detection prevents fraud. Today’s security-conscious society demands that companies work harder than ever to verify identity and authenticate the liveness of a photo or voice, as fraudsters diversify their methods to spoof systems.
