By Pavel Havlíček | The Balkan Forum
One of the most important debates today is how best to combat disinformation and misinformation in the information space, and which tools to use to limit their spread. The issue regularly provokes (moral) panic about manipulation in a rapidly changing public space, including an online domain oversaturated with information.
A significant part of the debate concerns social media platforms, since they now serve as an essential venue for social interaction and the exchange of views. Much of the public discussion has moved online to platforms such as Facebook, Twitter, Instagram, YouTube or TikTok and other applications.
While these platforms have benefitted from the attention of billions of users and commercialised their online presence, largely through advertising, they have overwhelmingly underestimated the security threats, foreign malign operations, often conducted by bots and trolls, and the radicalisation spreading in their ecosystems, despite all of this clearly violating their own community rules.
This became most visible when the European Commission introduced its Code of Practice on Disinformation in October 2018, which was supposed to motivate the platforms to invest in the integrity of their own services and the protection of end users and their rights. However, the policy rested on the principle of self-regulation: it was to be implemented, and at the same time evaluated, by the social media actors themselves, without the direct involvement of European regulators.
Unsurprisingly, the results were more than mixed, which pushed the European Commission to amend its approach and adopt a much stricter form of regulation, the so-called Digital Services Act (DSA), which legally obliges the platforms to implement a new set of rules and report the results of their activities to the Commission.
Role of DSA
Based on the previous experience and the platforms' lack of engagement after 2018, the DSA enshrined several key principles that should ensure that the spread of disinformation and illegal content online comes under stricter EU oversight.
One of them is transparency, which should make users of social media platforms more aware of their rights and options, including when reporting illegal, or legal but harmful (disinformation), content online. Transparency should also open up the platforms' inner workings, notably the algorithms that determine who sees what and when, something users rarely understand. Finally, the transparency principle requires the platforms to report on systemic risks and to allow external audits.
Another key issue has been user empowerment: ensuring that European rules and norms, not the platforms' own interpretation of them, prevail. With the tables turned and the EU's will enforced over the common European digital space, the platforms must now respect the right of appeal to an independent arbiter and, ultimately, the authority of national courts over their decisions about what may happen online. In practice this means, for example, that if Donald Trump had been "de-platformed" in the EU's digital space in July 2024, he would have had the right to appeal the decision and to ask an independent authority, and ultimately the national and/or EU courts, to rule on his social media presence. This rule now applies to all users, whether individual or commercial.
Finally, the EU’s own Code of Practice on Disinformation, originally adopted in October 2018 and amended several times since, has become the gold standard for approaching disinformation and other forms of problematic content online. This means, for instance, that the platforms are supposed to deprioritise and demonetise problematic content, in effect working against their original revenue model, which thrives on polarising and therefore often highly trending content.
In many of these respects, the new European approach to the so-called Very Large Online Platforms (VLOPs) and other online actors is revolutionary and deserves our attention. At the same time, it shows that the platforms themselves were neither motivated nor keen to do much to tackle disinformation online. Now this is changing, and we should make proper use of the new opportunities at the EU, national and individual levels.
Pending challenges
At the same time, a challenge remains in persuading the platforms to apply this approach globally, beyond the EU itself, since other countries in the EU's direct neighbourhood lie outside its regulatory boundaries. This includes not only the Western Balkan countries but also Eastern Europe, where many candidate countries willing to join the European bloc fall outside the common EU regulatory framework.
Instead, the European Commission, which oversees communication and negotiations with platform representatives, should persuade the VLOPs to apply these rules more broadly across the globe and thereby repeat the experience of the GDPR, the European regulation on the protection of personal data, which became a successful example of European normative power in the world.
Finally, the EU’s regulatory model remains open and offers European partners an opportunity to follow suit and introduce a similar type of regulation into their national legal systems, as seen in the example of the United Kingdom, which decided to make its own digital legislation largely compatible with that of the EU.