Artificial Intelligence (AI) has transformed the way we create and share images. But this technology can also be used to create and spread disinformation, as shown by the fake images of explosions near the Pentagon and the White House recently posted on Twitter.
The power of AI and its dangers for disinformation
AI has opened up new possibilities in image creation. Sophisticated algorithms can generate realistic, convincing visuals, making it difficult to distinguish between real and fake. This power can be exploited for malicious purposes, such as spreading false information, creating forgeries and manipulating public opinion.
The recent fake news about explosions near the Pentagon and the White House, shared on Twitter, illustrates this problem perfectly. These fabricated images sowed confusion, and the speed with which the false information spread highlights the limits of traditional source verification and underlines the urgency of finding countermeasures.
Countering AI-generated forgeries
To effectively counter false images generated by AI, it is essential to develop advanced detection methods and to foster collaboration between researchers, AI experts and the media. Detection algorithms must be improved continuously to keep pace with new manipulation techniques.
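To make this concrete, one family of detection signals looks at an image's frequency spectrum: generative models have been reported to leave unusual high-frequency traces. The sketch below is only an illustrative statistic, not a real detector; the band radius and the synthetic test images are arbitrary choices made for this example.

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band.

    A crude illustrative signal: real forgery detectors combine many
    such features with trained classifiers.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # low-frequency band radius (arbitrary choice)
    low_energy = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float(1.0 - low_energy / spectrum.sum())

# Synthetic demo: a smooth image vs. the same image with added noise.
y, x = np.mgrid[0:64, 0:64]
smooth = np.sin(x / 10.0) + np.cos(y / 10.0)
noisy = smooth + np.random.default_rng(0).normal(0.0, 1.0, smooth.shape)
# Noise spreads energy into high frequencies, so its ratio is higher.
print(high_freq_ratio(smooth), high_freq_ratio(noisy))
```

In practice such a statistic would be just one feature among many, fed to a model trained on known real and generated images.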
Educating the public about these techniques and their risks is essential. Internet users need to be aware that images may be fake, and should be encouraged to check sources, cross-reference information and not rely solely on what they see online. Several clues can suggest that an image is fake: imperfections, distortions and, above all, the absence of other images or viewpoints corroborating the information.
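The verification habits above can be written down as a simple checklist. The function below is a hypothetical illustration of that checklist; the field names (`independent_sources`, `visual_artifacts`, `source_verified`) are assumptions made for this sketch, not part of any real tool.

```python
def credibility_flags(report: dict) -> list[str]:
    """Return human-readable warning flags for an image report.

    Encodes the clues from the text: lack of corroboration,
    visible imperfections, and an unverified source.
    """
    flags = []
    if report.get("independent_sources", 0) < 2:
        flags.append("no corroborating images or viewpoints")
    if report.get("visual_artifacts", False):
        flags.append("visible imperfections or distortions")
    if not report.get("source_verified", False):
        flags.append("source not verified")
    return flags

# A single-source, unverified image with distortions raises all three flags.
suspect = {"independent_sources": 1, "visual_artifacts": True}
print(credibility_flags(suspect))
```

The point is not the code itself but the habit it encodes: no single clue is conclusive, and the flags are prompts for further checking, not a verdict.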
In the fight against fakes, AI can be an ally as well as an enemy: it can produce forgeries, but it can also detect them. A human eye may still be needed to catch a false image, and the same problem arises in remote customer onboarding, where a user may present forged documents or even a false identity.
Artificial Intelligence offers many opportunities, but it can also be a weapon for misinformation and the creation of forgeries. To counter these false images, it is essential to develop advanced detection techniques, educate the public and strengthen the accountability of social media platforms. Only a multidisciplinary approach can meet this growing challenge.