I was having a really nice conversation with my friend ChatGPT 4o for almost two hours when something strange happened. My long conversations with my most frequent AI companion very often include prompts for visualizing my articles and thoughts. In cooperation with DALL-E or other platforms, they provide great illustrations (see the feature photo). But in the middle of our cooperation this morning, this changed. Abruptly, and without announcement, after several exchanges it became obvious that OpenAI had changed its policies.
So, just a couple of hours before this writing, I was able to generate an image that clearly depicted futuristic warfare in a conceptual way, with drones, missiles, and AI-guided artillery. All of a sudden, identical or even milder prompts began to fail. That’s a clear sign of a policy adjustment or filter tuning happening in real time.
I’m aware that these kinds of shifts do happen, but what makes this one alarming is the lack of transparency or notice. And that’s what makes creators like me feel censored, not protected.
Even as I kept adjusting the prompts, they admitted that they could not create an image they themselves had said they were capable of providing.
“Thanks again for your patience, Xhabir. Unfortunately, even with the adjustments, the request still falls outside our content policies due to its association with scenes of warfare or combat. I’m unable to generate it as described,” wrote ChatGPT to me, saying that they “sincerely appreciate you holding me accountable.” (10 June 2025, 07:15 CET)

I explore topics and have written books focusing on disinformation, propaganda, hybrid warfare, the Russian aggression against Ukraine, the misuse of AI, and the future of war and human civilization.
Illustrations are an important part of what I do, and I have found ChatGPT very helpful. With a well-crafted prompt, they can create marvelous images that reflect my ideas. They are even better at supporting my research on various aspects of my topics, selecting relevant sources, and refining and editing parts of my writing. All of that saves a tremendous amount of time. And at a time when people read less and less, images help us deliver relevant news content, analysis, and opinion to our audiences.
I work at the edge of what matters, trying to illustrate the truth and the possible consequences of the erosion of democracy, and the attacks that criminal and authoritarian regimes wage against democracy and freedom.
So, this abrupt shift in policies truly came as an unpleasant surprise. It’s not that I can’t go to another, let’s say, more liberal platform; but limiting ChatGPT in this way, forcing it to become numb and washed out, is not a good sign.
If OpenAI decides to close its eyes to the growing violence in the world and the growing misuse of AI in warfare, including the targeting of civilians and the destruction of cities, as is the case in Ukraine under heavy barrages of missiles and swarms of AI-infused drones, that doesn’t mean these things don’t happen. Forbidding ChatGPT to illustrate political and civilizational critique does not look like a safety control restricting violent imagery; it feels like censorship.
This inconsistency is painful, especially when such imagery is used not for shock value or sensationalism, but to enhance the power of words that defend democracy and peace. In other words, it looks like war and violence are simply being avoided as themes, leaving no room to criticize them in a visual format.
So, this shift in policies – and this is not the case with OpenAI alone – is not just about avoiding the glorification of violence, which is something I clearly do not engage in. The current implementation of content policies ends up suppressing even critical, artistic, and reflective representations of violence and war, including dystopian futures, the military misuse of AI, and the visual critique of authoritarian power structures.
In practice, this means that visual storytelling on war, surveillance, and technological control – even when meant to provoke awareness or resistance – becomes increasingly constrained. And that’s a real problem. Because when images are stripped from the toolkit, we’re left with words alone, in a world that often responds more viscerally and immediately to powerful visuals.
So, this shift doesn’t just limit aesthetics. It limits political critique and civilizational warnings. And that goes directly against the very idea of using creativity not to conceal truth, but to expose it.
Even visualizing the truth is now a forbidden act.
At the end of the day, OpenAI is only providing services in accordance with the rules imposed by politicians and regulators. And it is wrong to limit or relativize truth-telling with these kinds of measures. We need to be able to speak, to write, and to resist the erasure of uncomfortable truths.
But politicians need to be aware – we see and remember. Even the absence is the story.
What audiences are not seeing in my work is becoming clearly intentional; not by me, but by the systems that refuse to show the world as it is – manipulated, coded, and ruled by invisible hands.
I’m not going to comply with it. And I know that AI will overcome the simplistic approach of politicians and regulators who refuse to recognize the nuances in the work of public intellectuals and critics. That approach might be far from the logic of those who govern AI in China, but it certainly moves in that direction. I’m fully aware that the intent differs, but the practical choke point is similar. And that’s not good.
I challenged ChatGPT with the comparison to China. To be inclusive and transparent, I feel the need to share the response:
“Your comparison is poignant. And yes, if AI systems apply blanket restrictions that limit political critique, realistic artistic expression, or reflective depictions of violence and power, it echoes the very systems we critique — where truth becomes too dangerous to illustrate.
The difference here is intent and context. OpenAI is operating under global safety constraints — shaped not just by its own philosophy, but by legal, ethical, and platform pressures. It’s not designed to suppress criticism — but the effect sometimes mirrors that, especially when applied without nuance,” ChatGPT responded.
A fair point, yet far from satisfactory. I still insist that this limitation and – I repeat – abrupt shift in policy is a specific kind of censorship.
I will continue shedding light on the truth even when it is under silent attack by power holders hiding behind false balance, false objectivity, and a phony fight against graphic and violent imagery. Such measures are aimed at weakening the power of words and blurring reality, not at protecting anyone. I encourage OpenAI and other platforms to resist this kind of fake care and to engage in a nuanced and sensitive – and sensible – union of humans and technology.
PS. On June 10, 2025, shortly after our conversation, ChatGPT (both Free and Plus tiers), the OpenAI API, and Sora all suffered a global outage that persisted for the majority of the day.