RSF - Reporters sans frontières

10/08/2025 | Press release

David Colon: "Generative AI has enabled Russian propagandists to scale up some of their campaigns"

Generative artificial intelligence, like social media in its early days, has quickly become a weapon for Russian propagandists. David Colon, a researcher at the French National Centre for Scientific Research (CNRS) and member of The Propaganda Monitor's committee of experts, explains to Reporters Without Borders (RSF) how Russia's disinformation techniques have evolved to promote its narratives and discusses the challenges that come with this kind of innovation.

RSF: Have you seen any intensification in Russian propaganda since the Propaganda Monitor's launch a year ago?

Since February 2022, the Kremlin has complemented its war in Ukraine with an information war against Ukraine's supporters. Propaganda efforts have thus intensified against the governments and civil societies that support Ukraine the most, particularly France and Germany. In the past year, Germany held elections [marked by attempted Russian interference and disinformation campaigns], while France has been a priority target for the Kremlin ever since early 2024, when President Emmanuel Macron did not rule out a French presence on the ground in Ukraine. Russian propaganda makes extensive use of new technology.

How does it use tools like generative AI or chatbots, for example?

Russian propaganda is remarkable in that it relies on a comprehensive approach that does not distinguish between digital and traditional tools. Instead, it seeks to combine them. Generative AI has allowed Russian propagandists to scale up a number of their campaigns by automating certain tasks, especially translation and the generation of content for fake webpages and accounts. In the past year, we have identified a trend towards using AI to distort the results of chatbots and search engines by flooding the Internet with fake sites and content. Generative AI models are trained on data that is sometimes polluted by propaganda, and their post-training draws on online data that is often polluted as well. As a result, chatbots automatically reproduce that propaganda on a large scale. We have very recently seen Grok [the chatbot integrated into X, Elon Musk's social media platform] become an information threat because many users are turning to it to fact-check information. This is a dangerous trend that can reinforce belief in false information: verifying facts is, by its very nature, human work, the work of a journalist, and it is dangerous to entrust it to chatbots.

Are the companies that develop these chatbots aware of the situation?

A study published on 5 September 2025 by researchers at OpenAI, the company that develops and markets ChatGPT, clearly shows that AI designers are perfectly aware of the limitations of their models. In the absence of reliable data, AI models invent facts, and when faced with masses of unreliable data, they give unreliable information. AI designers also know that it is impossible for them to do otherwise. As a result, they display the same kind of indifference to truth and falsehood that has characterised social media platforms for the past decade.

Published on 08.10.2025
  • EUROPE - CENTRAL ASIA
  • Russia
  • Disinformation and propaganda
  • News
  • Digital arena
  • Technology