From Death Lament to Digital Seduction: Lullabies for a Post-Truth World — Russia’s AI Propaganda Bullets

How artificial intelligence is being used to soften culture, simulate authority, and prepare minds for influence — and why that matters for democracy

Dec 30, 2025 | HYBRID THREATS, AI & TECHNOLOGY, ANALYSIS

By Jabir Deralla
and the CIVIL Hybrid Threats Monitoring Team

The most dangerous propaganda circulating in Europe today does not look political. It looks beautiful. It arrives as short videos of glowing faces in folkloric dress. As melancholic folk songs stripped of their stories. As children’s melodies floating through digital landscapes. As familiar political voices saying things they never said.

Much of this content is AI-generated or AI-assisted. And much of it originates from, or is amplified by, Russian influence ecosystems. These artifacts rarely contain direct disinformation. Instead, they perform a more strategic function: they reshape emotional orientation. They soften audiences, normalize unreality, aestheticize memory, and lower cognitive defenses — preparing psychological conditions in which later propaganda becomes easier to accept.

This is not narrative delivery. It is pre-narrative conditioning. This article examines how artificial intelligence is being used not to spread narratives, but to prepare minds for them. It builds on and continues the earlier article, When Beauty Becomes a Weapon: AI Aesthetics in Russia’s Influence Operations (Civil.Today, 23 December 2025).

The Aesthetic Entry Point: Culture Without Memory and Without Moral Judgement

The emotional power of these videos begins with a deliberate cultural choice.

One widely circulated clip opens with the unmistakable first verses of Чёрный ворон (Black Raven), one of the most recognizable Russian folk laments, which likely originated in the early 19th century during the conflicts in the Caucasus. The song tells the story of a wounded soldier lying on the battlefield, calmly addressing the raven circling above him — a symbol of death, inevitability, and the fragile line between life and surrender. For generations, Black Raven has carried the weight of loss, war, and fatalistic dignity. Its emotional authority is deep, collective, and immediate.

In the video, however, this authority is not preserved — it is extracted, stripped of historical context and moral judgement.

After several authentic verses, the song dissolves into a softened, modernized vocal line stripped of narrative and historical specificity. The soldier disappears. The battlefield vanishes. What remains is atmosphere: warmth, soulfulness, inner certainty, emotional reassurance. A death lament becomes a comfort loop. Tragedy is aestheticized. Memory is emptied and repackaged.

A similar mechanism operates in other AI-generated clips circulating on social platforms. In one such video, the audio draws on the melody of Маленькая ёлочка (Little Christmas Tree), also known as Маленькой ёлочке холодно зимой (The little Christmas tree is cold in winter), a widely known Russian children’s song from the 1930s associated with early childhood, warmth, and innocence. Unlike Black Raven, this song carries no association with war or death. Its emotional charge is different — safety, nostalgia, and pre-political comfort.

Yet the function is the same.

Whether invoking a wartime lament or a children’s holiday song, these clips activate culturally embedded emotional memory while stripping it of context, history, and responsibility. AI-driven visuals — idealized faces, translucent skin, folkloric garments polished into perfection — align seamlessly with softened, modernized audio. Whether fully generated or heavily AI-assisted, the result is a synthetic intimacy: culturally resonant but historically weightless.

The viewer is not asked to remember, reflect, or question. They are invited to feel — calmly, warmly, safely.

This is where AI aesthetics become propaganda bullets.

By invoking Black Raven, a clip taps into a reservoir of collective wartime memory while simultaneously neutralizing it. Suffering is acknowledged only long enough to lend authenticity; then it is dissolved into beauty and reassurance. This is not remembrance — it is narrative laundering.

By invoking Little Christmas Tree, another clip draws on childhood innocence and emotional safety, lowering defenses even further.

Different emotional keys. The same strategic effect.

These aesthetic products do not replace propaganda. They precede it. They prepare the emotional terrain, lower defensive reflexes, and reintroduce cultural symbols without responsibility or context.

When tradition is made beautiful but hollow, it becomes portable.
And once it becomes portable, it becomes usable — not as culture, but as influence.

From Emotional Conditioning to Synthetic Authority

But aesthetic seduction is only the first layer. Once emotional resistance is lowered, a second and more dangerous operation becomes possible: the construction of synthetic trust. AI is now being used to manufacture something far more valuable than beauty or mood: authority itself.

A detailed October 2025 investigation by BBC Monitoring documents how Russia-aligned bot networks have begun using artificial intelligence not only to generate content, but to fabricate credibility — by imitating trusted media, respected institutions, and recognizable public figures. The aim is no longer simply to misinform. It is to erode the very conditions under which truth is recognized.

One such operation, known as Matryoshka, deployed at least 336 AI-assisted or AI-generated videos in the months leading up to Moldova’s 2025 parliamentary elections. These videos targeted different social groups with tailored narratives attacking the country’s pro-EU government. Crucially, they were packaged to look like legitimate journalism and public communication — mimicking the style and branding of outlets such as Euronews, Le Point, La Tribune, and even France’s state disinformation watchdog, Viginum.

In several cases, the manipulation went further.

AI was used to alter real video footage of academics and experts, grafting fabricated statements onto their voices and faces. A French professor was made to appear to denounce Moldova’s president. American and European researchers were portrayed as calling for the lifting of sanctions on Russia or endorsing Kremlin territorial claims.

None of them had said any of this. Yet the videos were credible enough to circulate, provoke reaction, and require debunking.

This is not classic propaganda. There is no rallying slogan, no ideological sermon, no direct call to believe. What is being targeted instead is something more fragile and more fundamental: the infrastructure of trust.

By flooding the information space with plausible but fabricated authority, these operations blur the line between authentic and artificial, between real expertise and synthetic performance. The effect is not necessarily persuasion, but confusion. Not conviction, but fatigue. Not belief, but a quiet loosening of epistemic certainty — the sense that nothing can be fully verified, and therefore everything is negotiable.

Once that condition is established, emotional comfort becomes more persuasive than factual accuracy. Familiarity becomes more powerful than evidence. Aesthetic reassurance becomes more attractive than truth.

This is why these AI operations matter even when they fail, even when they are exposed, even when they are debunked. Their success is not measured in clicks or conversions, but in corrosion. They do not tell people what to think. They reshape what people feel able to know.

Culture is softened into mood. Authority is flattened into simulation. Truth is thinned into atmosphere.

And once truth becomes atmospheric rather than structural, influence no longer needs to argue. It only needs to flow.

 


This analysis was prepared by Jabir Deralla (pen name of Xhabir M. Deralla) in cooperation with the CIVIL Hybrid Threats Monitoring Team, with analytical support from AI tools (OpenAI / ChatGPT) used for research assistance and language refinement.


Images:
– feature image: author’s compilation of AI-generated / AI-assisted content from social platforms, used for analysis;
– portrait: “Cyborg Portrait,” by Xhabir M. Deralla — created with AI tools (ChatGPT & DALL·E).


This analysis builds on and continues the earlier article: When Beauty Becomes a Weapon: AI Aesthetics in Russia’s Influence Operations (Civil.Today, 23 December 2025).



Truth Matters. Democracy Depends on It