Deepfakes are virtual masks used to impersonate people. They are increasingly widespread on the Internet, and their creation is becoming easier with advances in Deep Learning and Artificial Intelligence. What are the risks? What does the future hold? In this article, we look at these questions in detail.
What is a deepfake?
The term deepfake refers not only to the content thus created, but also to the technologies used to create it. It is a contraction of “Deep Learning” and “Fake”, referring to fake content made deeply believable. It usually takes the form of a video or audio recording created or modified using artificial intelligence. In 2014, researcher Ian Goodfellow invented the technique behind deepfakes: the GAN (Generative Adversarial Network). This technology uses two algorithms that train each other: one tries to produce counterfeits that are as convincing as possible, while the other tries to detect the fakes. In this way, the two algorithms improve together over time through their respective training.
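The adversarial principle described above can be illustrated with a deliberately tiny sketch: a one-dimensional “generator” learns to imitate a target distribution while a logistic “discriminator” learns to tell real samples from generated ones, each side improving against the other. This is a toy illustration of the GAN idea, not a deepfake system; all parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must imitate: samples from N(4, 1).
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b with z ~ N(0, 1); discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters (start far from the real mean)
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    # Hand-derived gradients of -[log D(real) + log(1 - D(fake))]
    grad_w = np.mean(-(1 - s_real) * real + s_fake * fake)
    grad_c = np.mean(-(1 - s_real) + s_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    dx = -(1 - s_fake) * w          # gradient of -log D(G(z)) w.r.t. each sample
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, the generator's output distribution drifts toward the real one.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

Real deepfake generators follow the same adversarial loop, only with deep convolutional networks and images or audio in place of these two scalar-parameter models.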
Since autumn 2017, deepfakes have become increasingly widespread. In 2019, Deeptrace researchers counted around 15,000 deepfake videos online, compared with fewer than 8,000 a year earlier. Audio deepfakes are still relatively uncommon, as their creation requires significant hardware resources.
What are the risks associated with virtual masks?
Deepfakes can be used for malicious purposes, such as manipulation, disinformation, humiliation, defamation and blackmail. For example, a deepfake can be used to create a video in which a person appears to say or do something they never said or did. Deepfakes can also be used to fool facial recognition systems and to impersonate someone else. This can occur, for example, during remote customer onboarding procedures, when a cybercriminal attempts to pass themselves off as another person.
Masks attempting to reproduce other people’s faces, once physical, are modernizing and becoming virtual: simpler, quicker to make, and harder to detect. The threat is real, and realism will only improve with time and technological advances. This could cause significant damage to the public image and privacy of those targeted.
Detecting and preventing deepfakes is therefore a major cybersecurity challenge for organizations and public authorities.
What does the future hold for cybersecurity?
Deepfakes are a growing problem, especially on social networks, where they reach the greatest number of people and can spread quickly and easily. To counter this threat, some solutions apply a filter to videos to prevent their exploitation by deepfake-generating software. Facebook’s FAIR laboratory is working on such a “de-identification” project.
In the field of customer onboarding, the ANSSI (Agence Nationale de la Sécurité des Systèmes d’Information) has published the PVID framework (Prestataire de Vérification d’Identité à Distance – Remote Identity Verification Provider) to counter this threat and guarantee identities even at a distance. The requirements of the standard counter presentation attacks (pre-recorded videos, photos) and deepfakes. In particular, onboarding journeys must incorporate a unique random “challenge”: the user is asked to perform a specific action that cannot be foreseen beforehand, making the journey “non-replayable” and thus thwarting these threats.
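The challenge mechanism described above can be sketched in a few lines: the server draws an unpredictable action and a one-time code from a cryptographically secure random source, so that a pre-recorded video cannot match the request. The action list, field names and code format below are purely illustrative assumptions, not part of the PVID standard.

```python
import secrets

# Hypothetical liveness actions; real PVID providers define their own catalogue.
ACTIONS = [
    "turn your head to the left",
    "blink twice",
    "read the number shown on screen aloud",
    "smile",
    "bring the phone closer to your face",
]

def issue_challenge():
    """Build a single-use, unpredictable challenge for a verification session."""
    action = secrets.choice(ACTIONS)                 # unpredictable action
    spoken_code = f"{secrets.randbelow(10**6):06d}"  # one-time code to read aloud
    nonce = secrets.token_hex(16)                    # session identifier, accepted only once
    return {"action": action, "code": spoken_code, "nonce": nonce}

challenge = issue_challenge()
```

Because each challenge is generated with `secrets` (a CSPRNG) and tied to a single-use nonce checked server-side, replaying a previously recorded response yields a mismatch and the attempt is rejected.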