Russia, AI and the Future of Disinformation Warfare



As generative AI technologies rapidly evolve, their implications for global information security are becoming more acute. This paper explores how Russian state-affiliated and state-aligned actors are discussing, conceptualising and framing AI within their online communications. Drawing on original analysis of communications from Russian-linked online channels, the paper investigates how actors in the Russian influence ecosystem perceive the role of AI in information warfare and what their narratives reveal about evolving threat trajectories.

The report finds that a diverse range of Russian actors are actively engaged in conversations about AI. These actors are not only discussing the use of AI tools to automate and amplify content, but also exploring the role of AI as a narrative device, boasting of its effectiveness, warning of its dangers and framing it as both a strategic asset and a potential threat.

The analysis reveals a growing focus on AI as both an opportunity and a threat among a range of Russian actors, from those affiliated with groups such as Wagner to pro-Russian hacktivist collectives and online influencers. AI is often portrayed as a powerful tool for information manipulation, capable of generating persuasive content, amplifying messaging and overwhelming adversaries with sheer volume. At the same time, many actors express significant anxiety about Western dominance over AI development, suggesting that these technologies could be used to subvert Russian public opinion, erode autonomy and destabilise the domestic information environment. Concerns about surveillance, deepfakes (digitally altered videos or images that misrepresent a person as doing or saying something they did not) and algorithmic bias feature prominently in this discourse.

The observed conversations are not confined to abstract speculation. The paper documents how state-affiliated and state-aligned actors are actively debating the implications of AI, sharing practical knowledge, critiquing disinformation practices and recruiting individuals with relevant technical skills. These insights point to an evolving culture of adaptation within Russian influence networks, where AI is increasingly seen as a central component of future-facing information operations.

While the paper does not assess the inner workings of senior intelligence planning, it offers a unique actor-level perspective on how AI is entering the strategic imagination of Russian influence networks. These insights highlight the importance of not only tracking how AI might be operationalised in future disinformation efforts, but also understanding the ways in which it is already shaping how these actors think, communicate and position themselves within digital ecosystems.

Project sponsor

  • European Media and Information Fund

    The sole responsibility for any content supported by the European Media and Information Fund lies with the author(s) and it may not necessarily reflect the positions of the EMIF and the Fund Partners, the Calouste Gulbenkian Foundation and the European University Institute.

WRITTEN BY

Claudia Wallner

Research Fellow


Dr Simon Copeland

RUSI Associate Fellow


Dr Antonio Giustozzi

Senior Research Fellow

Terrorism and Conflict


