Cybercriminals are now perfecting deepfakes to impersonate people and steal money or anything else of value. The technology has improved to the point where it is difficult to tell the difference between a fraud and a friend.
While an attacker can use deepfake techniques to convincingly emulate an individual's likeness, it is still difficult to digitally impersonate someone's voice without fairly obvious imperfections. Deepfake audio and video cannot currently be rendered in real time unless the attacker has substantial computing resources and a large volume of high-quality audio and video source material to train machine learning algorithms. And while deepfakes can be convincing to other humans, they are unable to pass physical or passive biometric verification. Coupling strong liveness detection with the collection of passive and physical biometric signals to verify a user's identity therefore largely mitigates the current risks in banking transactions.
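The layered-defense idea can be sketched in code. The following is a minimal illustration, not a real vendor API: the signal names, score ranges, and thresholds are all assumptions chosen to show why a deepfake that defeats one check (say, facial match) still fails overall verification.

```python
from dataclasses import dataclass

# Hypothetical signal scores in [0.0, 1.0]; names and thresholds are
# illustrative assumptions, not a production biometric system.
@dataclass
class VerificationSignals:
    liveness: float          # active liveness check (e.g. challenge-response)
    facial_match: float      # physical biometric: face vs. enrolled template
    behavioral_match: float  # passive biometric: typing/interaction patterns

def verify_user(signals: VerificationSignals,
                liveness_threshold: float = 0.9,
                biometric_threshold: float = 0.8) -> bool:
    """Require strong liveness AND both biometric factors to pass.

    A deepfake that fools a single signal (e.g. facial match) is still
    rejected unless it also defeats liveness detection and the passive
    behavioral biometric.
    """
    return (signals.liveness >= liveness_threshold
            and signals.facial_match >= biometric_threshold
            and signals.behavioral_match >= biometric_threshold)

# A replayed deepfake video may score well on facial match but fail liveness.
replayed_deepfake = VerificationSignals(liveness=0.2, facial_match=0.95,
                                        behavioral_match=0.1)
genuine_user = VerificationSignals(liveness=0.97, facial_match=0.92,
                                   behavioral_match=0.88)
print(verify_user(replayed_deepfake))  # False
print(verify_user(genuine_user))       # True
```

The key design point is conjunction: every factor must pass, so an attacker's cost grows with each independent signal they must simultaneously spoof in real time.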