How can we spot a deepfake?
In our highly digitalised societies, we are constantly exposed to a wide variety of content. Whether it’s for information, education or entertainment, we are always reading texts and viewing images and videos online. But have you ever wondered if what you’re looking at is truly authentic and honest? How can you be sure that the person you see in an image is actually in a real situation? How can you be sure that the person you see speaking on a video is who they claim to be, and that the content has not been edited for the purposes of manipulation? In the age of artificial intelligence and machine learning, these questions are far from trivial. For several years now, the concept of deepfakes (a portmanteau of “deep learning” and “fake”) has been stirring up fears.
Distinguishing truth from fiction
“The question of forgery has long preoccupied mankind. The emergence of deepfakes brings us face to face with the digital version of counterfeiting,” comments Mohamed Ourdane, Head of Cybersecurity at DEEP. AI and deep learning technologies make it possible to create content from scratch that distorts reality, portraying people in almost any situation without their consent.
Image processing tools such as Photoshop have long been used to retouch reality, usually for aesthetic reasons. With AI, we are now seeing images and videos created from scratch featuring famous people – the Pope, political leaders, celebrities – without their involvement or consent. Today, there are tools that can very accurately reproduce a person’s voice. Avatars, highly realistic simulations of existing human beings, can even interact with you. “Recent technological developments and easy access to significant computing power mean that anyone can generate extremely convincing fakes quite easily. In the future, it will become increasingly difficult to distinguish the real from the fake,” comments Cu D. Nguyen, Data Science and Security Expert at DEEP.
New risks
These developments give rise to new risks. In the future, we could all find ourselves faced with fakes created for malicious purposes. “Intrinsically, a technology is neither good nor bad,” says Mohamed Ourdane. “It all depends on how you use it. However, when we talk about deepfakes, it’s easy to come up with any number of ways to use the technology for malicious purposes: spreading false information, compromising democracy or even scamming someone in order to steal data as part of a cyberattack.”
In recent years, for example, there has been much talk of CEO fraud: someone posing as a company director sends an e-mail asking an employee in the accounts department to make an urgent transfer (to the scammers, of course). Such attempts are all the more convincing if the request is made by phone, using software that simulates the voice of the CEO in question. Taking this a step further, we could even imagine the request being made by video call, using an avatar that accurately reproduces the CEO’s appearance and voice.
Understanding these new dangers
While the risks are real and numerous, we need to consider ways of mitigating them. How can we combat the growing number of fakes? The teams in charge of cybersecurity at DEEP have been tackling this problem for several years, supporting research and development programmes in collaboration with the University of Luxembourg, in particular its Interdisciplinary Centre for Security, Reliability and Trust (SnT).
• Providing proof of authenticity and identity
“Faced with the risks associated with the proliferation of digital fakes, there are several options available to us,” says Mohamed Ourdane. “These include associating the creation of content with an identity by integrating location data and timestamps, thus guaranteeing the authenticity and integrity of content throughout its lifecycle.” This can be seen in DEEP’s SKYTRUST project, which was carried out in partnership with the European Space Agency (ESA).
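The idea of binding content to an identity, a timestamp and location data can be sketched with standard cryptographic primitives. The example below is a minimal illustration, not the SKYTRUST implementation (whose design is not described in the article): it assumes a shared HMAC key for simplicity, whereas a real provenance system would use asymmetric signatures (e.g. Ed25519) anchored in a PKI or a trusted timestamping service. All names and values are hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret for demonstration only; a real system
# would use asymmetric keys so that anyone can verify, but only the
# author can sign.
SECRET_KEY = b"demo-signing-key"

def sign_content(content: bytes, author: str, location: str) -> dict:
    """Bind content to an identity, a timestamp and a location."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "author": author,
        "location": location,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(content: bytes, record: dict) -> bool:
    """Check that neither the content nor its metadata was altered."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

video = b"...raw video bytes..."
record = sign_content(video, author="newsroom@example.lu", location="49.6116,6.1319")
assert verify_content(video, record)             # untouched content passes
assert not verify_content(video + b"x", record)  # any edit breaks the check
```

Because the signature covers both the content hash and the metadata, tampering with the video, the timestamp or the claimed author invalidates the record, which is what "integrity throughout the lifecycle" amounts to in practice.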
• Using technology to detect fakes
“Emerging technologies such as artificial intelligence and machine learning are not just useful for creating fake content: they can also be used to detect it, by analysing content for the subtle flaws that generation leaves behind. We are working on such detection tools through a research and development project in partnership with SnT,” comments Cu D. Nguyen.
For example, by analysing facial contours, blinking patterns and voice inflections, it is possible to detect anomalies and help identify AI-generated fakes. “The algorithms used by content producers are becoming increasingly sophisticated and effective, making fakes ever harder to spot. We must constantly adapt in order to distinguish truth from fiction. It's a real technological arms race.”
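The blink-analysis cue mentioned above can be illustrated with a toy detector. This is a deliberately simplified sketch, not one of DEEP's or SnT's tools: it assumes that a face-tracking stage has already produced a per-frame eye-aspect ratio (EAR), and the threshold and "normal" blink-rate range are illustrative values, not established constants.

```python
from typing import List

BLINK_THRESHOLD = 0.2   # assumed EAR below which the eye counts as closed
NORMAL_RANGE = (8, 30)  # assumed plausible blinks per minute for a real speaker

def count_blinks(ear_series: List[float]) -> int:
    """Count closed-eye episodes in a series of per-frame eye-aspect ratios."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < BLINK_THRESHOLD and not closed:
            blinks += 1       # eye just transitioned from open to closed
            closed = True
        elif ear >= BLINK_THRESHOLD:
            closed = False    # eye is open again
    return blinks

def looks_suspicious(ear_series: List[float], fps: float) -> bool:
    """Flag a clip whose blink rate falls outside the plausible human range."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return not (NORMAL_RANGE[0] <= rate <= NORMAL_RANGE[1])

# A 10-second clip (30 frames at 3 fps) in which the subject never blinks:
static_eyes = [0.35] * 30
print(looks_suspicious(static_eyes, fps=3))  # True: a total absence of blinks is a red flag
```

Early deepfake generators were trained mostly on open-eyed photographs and often produced faces that blinked rarely or not at all, which is why this simple statistic once worked; as the quote notes, each such cue eventually gets fixed by the next generation of models, hence the arms race.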
• Raising awareness to combat the spread of counterfeits
Beyond proving authenticity and identity and improving the capacity to detect AI-generated content, the final challenge lies in combating the spread of these fakes. “We must educate the public and raise awareness of these new risks. More than ever, we need to strengthen everyone's critical thinking, especially among the younger generation, who are particularly fond of the online world,” says Mohamed Ourdane.
DEEP has been at the forefront of these issues for several years, working in collaboration with the University of Luxembourg. The aim is to provide security solutions for users of technological and telecommunications resources.
This topic will be discussed at an event organised by DEEP on 23 May.