By Xhabir Deralla
Have you noticed a sudden plunge in the visibility of your personal or institutional profile on social media? The “crisis” lasts for days or weeks—then, just as abruptly, your reach begins to recover, as if nothing had happened.
This is not an accident. Nor is it simply a loss of interest in the topics you share. It may, in fact, be the opposite. You may have drawn attention—not from your audience, but from systems of power operating within the platform itself. In the evolving landscape of information warfare, censorship is no longer the most effective way to silence voices. It is too visible, too controversial, and often counterproductive. Instead, a quieter and more sophisticated method is gaining ground: algorithmic suppression.
Content is not deleted. Accounts are not banned. No formal violations are announced. Yet visibility collapses. Posts fail to reach their audiences, engagement drops sharply, and entire media outlets can be gradually pushed to the margins of the digital public square. This phenomenon reflects a structural vulnerability in how platforms such as Facebook and X operate, and in how these systems can be exploited.
The logic of the algorithm
Modern social media platforms prioritize content through complex algorithms designed to optimize user experience, safety, and engagement. These systems rely heavily on behavioral signals: likes, shares, comments, and, critically, user reports.
At scale, automation is unavoidable. Platforms owned by Meta process vast amounts of content, and decisions are often triggered by patterns rather than context. A sudden increase in reports or negative engagement can signal a potential violation, prompting the system to limit distribution as a precaution.
In theory, this protects users. In practice, it creates an opening for manipulation by coordinated networks, propaganda ecosystems, and authoritarian power structures.
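To make the mechanism concrete, the following is a minimal sketch, in Python, of how such a precautionary rule might look. Everything in it, the names, the five-times-baseline trigger, the demotion factor, is a hypothetical illustration, not any platform's actual code.

```python
# A minimal, hypothetical sketch of the precautionary logic described
# above. All names and thresholds here are illustrative assumptions,
# not any platform's actual implementation.

from dataclasses import dataclass

@dataclass
class PostSignals:
    views: int                   # impressions in the current window
    reports: int                 # user reports in the same window
    baseline_report_rate: float  # the account's historical reports per view

def distribution_multiplier(s: PostSignals, spike_factor: float = 5.0) -> float:
    """Return a reach multiplier in [0, 1]; 1.0 means normal distribution."""
    if s.views == 0:
        return 1.0
    current_rate = s.reports / s.views
    # Reports arriving well above the account's historical baseline are
    # treated as a potential violation, and reach is throttled "as a
    # precaution" -- without any human ever examining the content.
    if current_rate > spike_factor * s.baseline_report_rate:
        return 0.1  # quietly demote to a fraction of the usual audience
    return 1.0
```

Nothing in such a rule considers what a post actually says; a spike in the counts alone is enough to trigger the demotion.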
Coordination and the weaponization of algorithms
Organized groups can exploit these mechanisms through coordinated reporting or engagement campaigns. By generating a rapid influx of complaints against specific content or pages, they can simulate harmful behavior, even when none exists.
The system, unable to distinguish between genuine concern and orchestrated action, reacts automatically. The result is not removal, but restriction: reduced reach, lower ranking in feeds, and exclusion from recommendation systems.
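Under the same hypothetical rule, this blindness is easy to demonstrate: orchestrated reports and genuine ones differ only in volume, and volume is all the system sees. The numbers below are invented for illustration.

```python
# A toy demonstration of the indistinguishability problem described above.
# The threshold mirrors the hypothetical sketch in the previous section;
# all values are invented for illustration.

def is_demoted(views: int, reports: int,
               baseline_rate: float = 0.0002, spike_factor: float = 5.0) -> bool:
    """True if the report rate crosses the precautionary threshold."""
    return views > 0 and (reports / views) > spike_factor * baseline_rate

# A handful of genuine reports on an ordinary post changes nothing...
print(is_demoted(views=10_000, reports=4))    # False -> normal reach
# ...but a coordinated burst of reports trips the same rule. The system
# cannot tell the difference: it never looks at intent, only at counts.
print(is_demoted(views=10_000, reports=300))  # True -> quietly restricted
```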
This form of “soft moderation” operates below the threshold of visibility. It leaves no clear trace, no public record, and often no explanation to those affected. In effect, platform governance becomes a vector of influence.
From suppression to amplification: different models, same effect
While some platforms enable suppression through automated moderation, others shape the information space through different means: less about restricting visibility and more about selectively amplifying it.
Since Elon Musk's acquisition of X, concerns have grown over the platform's evolving content dynamics. Changes in moderation policies, the reinstatement of previously banned accounts, and algorithmic shifts have contributed to an environment where certain narratives, particularly polarizing or unverified ones, can gain disproportionate reach.
In this model, the system does not directly silence opposing voices. Instead, it reshapes visibility by amplifying content that aligns with ideologically driven currents, effectively suppressing facts and progressive viewpoints.
A similar logic can be observed in platforms built with explicit political alignment, such as Truth Social, associated with Donald Trump. Here, the architecture itself encourages narrative homogeneity, reinforcing specific viewpoints while marginalizing dissenting perspectives. These models differ in structure, but converge in effect. They distort visibility—whether by suppressing some voices or amplifying others. The outcome is a de facto suppression of opposing viewpoints.
A hybrid tactic for the digital age
These practices align with broader patterns of hybrid warfare: actions that remain below the threshold of open conflict, are difficult to attribute, and exploit existing systems rather than attacking them directly.
Algorithmic suppression and its counterpart, algorithmic amplification, follow a model that is low-cost, scalable, and deniable: it requires coordination rather than advanced technology, works across platforms and regions, and is indistinguishable from organic user behavior or platform design choices.
It is particularly effective in polarized environments, where motivated groups are ready to mobilize quickly, and where digital platforms serve as primary channels of information. In such contexts, independent media, investigative outlets, and civil society actors become prime targets, not because they are vulnerable, but because they are influential.
The consequences for the public sphere
The impact extends far beyond individual accounts or organizations.
First, it distorts visibility. Coordinated manipulation, or platform-level bias of the kind seen on Truth Social, can reshape trends and steer attention. What appears popular or relevant may, in fact, be artificially amplified or artificially diminished.
Second, it erodes trust. When audiences notice inconsistencies, such as content disappearing from feeds, engagement behaving unpredictably, or extreme narratives gaining sudden traction, they may lose confidence not only in platforms, but in information itself.
Third, it weakens democratic resilience. Access to diverse, reliable sources is essential for informed public debate. When such sources are systematically pushed out of visibility, or drowned out by amplified noise, the information environment becomes more fragile and more susceptible to manipulation.
An unregulated blind spot
Current regulatory approaches remain largely focused on content: what should or should not be allowed online. Yet the real threat increasingly lies in the manipulation of distribution—how content is ranked, surfaced, suppressed, or amplified through algorithmic systems.
State-aligned or politically motivated networks often do not rely on illegal content, nor do they necessarily violate community standards. Instead, they exploit, or embody, the mechanics of distribution, creating a critical blind spot. As a result, human rights organizations may be pushed out of the flow of traffic altogether, while radical networks and individuals, by skillfully gaming these systems, not only evade regulation but also achieve incomparably greater visibility than those who expose the erosion of democracy, human rights violations, or war crimes.
Transparency obligations rarely cover distribution dynamics in sufficient detail. Appeals mechanisms are often limited to content removal, not reach reduction or algorithmic amplification. As a result, affected actors and the public in general have little visibility into how information is prioritized or suppressed.
The new reality and the European response
Control over information no longer depends solely on what is published, but on what is seen, and what is made to dominate attention. Algorithmic suppression does not silence by force. Algorithmic amplification does not persuade by argument. They reshape reality by structuring visibility itself. And in an era of hybrid threats, that may be the most powerful tool of all.
These phenomena have contributed to serious debates and initiatives in Europe. The question of digital sovereignty—reducing dependence on platforms dominated by Silicon Valley—has moved out of the margins and closer to the center of political discussions on the Old Continent. European initiatives increasingly explore alternative infrastructures, regulatory frameworks, and public-interest platforms that could ensure greater transparency, accountability, and resilience.
These issues have also been raised in international forums, including the Defending Democracy conferences held in Skopje in 2024 and 2025, organized by CIVIL – Center for Freedom and partners within the Defending Democracy Global Initiative, such as Youth4Media Network, Media Dialogue, New European People’s Forum, Centro Studi Internazionali, and Jean Monnet Association. Participants highlighted that dependence on global platforms is not only a technological or economic issue, but a matter of democracy and security.
Playing the game, and losing it
Meanwhile, finding effective solutions, and a proper response to practices that quietly serve power structures, is no easy task. Algorithmic suppression and amplification have long ceased to be merely technical issues, yet they remain largely ignored where it matters most.
Instead of resisting, much of the media has chosen accommodation—learning to “appease” the algorithm. Entire companies have emerged as consultants, offering services and coaching on how to adapt content to trends. Titles grow softer, language more agreeable, and content less disruptive. Edges that should cut are carefully blunted. Instead of creating trends, media are increasingly becoming servants to them.
So the game continues. Except it is no longer journalism playing the system. It is the system playing journalism.
And in trying to play the game, the media are losing it.
Editorial judgment is outsourced. Responsibility is diluted. Visibility becomes both the currency—and the trap. Worse still, editorial responsibility is surrendered to systems that neither understand truth nor value it.
In doing so, democratic, professional, investigative, and independent actors do not merely adapt—they concede. They yield to structural forces that shape visibility, influence perception, and ultimately define the boundaries of democratic reality.
To forces that do not report reality—but define it.
© Xhabir Deralla, 2026. Licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
