How do authoritarian states such as Russia shape AI development and regulation to advance their ambitions for innovation and global AI leadership, and how do they embed their political agendas within these systems and rules?
At the BASEES 2026 Conference in Birmingham, Dr. Florence Ertel, Dr. Anna Ryzhova, and Dr. Maxim Alyukov addressed this question in the panel "Authoritarian AI: How Large Language Models (LLMs) Amplify Russian Propaganda," chaired by Prof. Dr. Florian Toepfl and held as part of the Authoritarian AI project funded by the Bayerisches Forschungsinstitut für Digitale Transformation.
Florence Ertel shared initial findings on how Russia positions itself in the geopolitical AI innovation race and how it employs LLMs as instruments of geopolitical influence in the post-Soviet region. Her research highlights a gap between Russia's AI ambitions and its actual capabilities, noting that little is known about the practical deployment and impact of Russian AI technologies beyond Russia itself. She observed that Russia's generative AI development is state-driven, aimed at reinforcing ideological narratives, strengthening the country's geopolitical status, and achieving independence from foreign technologies, with strong presidential support for domestic LLMs.
In her presentation, Anna Ryzhova argued that AI ethics offers Russia an opportunity to assert global leadership without achieving technological supremacy. While Russian regulations nominally promote openness, foreign LLMs are governed primarily through infrastructural control rather than direct censorship. Ryzhova explained that Russia positions itself as an ethical trendsetter while regulating indirectly, particularly in response to common strategies for circumventing restrictions on access to foreign LLMs, so that control operates through infrastructure rather than overt restriction.
Maxim Alyukov, presenting research co-authored with Mykola Makhortykh, Alexandr Voronovici, and Maryna Sydorova, examined the vulnerability of AI to disinformation in their paper “LLMs grooming or data voids? LLM-powered chatbots, information gaps, and Kremlin disinformation.” The study found that the greatest risk comes not from foreign manipulation but from the uneven quality of online information. The authors argue that addressing this challenge requires expanding the availability of reliable content on underreported topics to reduce the risk of disinformation via AI-powered tools.
