

By the Media, Entertainment & Sport group of Bird & Bird

4 minute read

Assessment of the Code of Practice on Disinformation

On 10 September 2020, the European Commission published its assessment of how the Code of Practice on Disinformation has been implemented. Maisie Briggs reports.

On 10 September 2020, the European Commission published its assessment of how the Code of Practice on Disinformation has been implemented. The Code is a self-regulatory initiative signed by Facebook, Google, Twitter, Mozilla and members of the advertising industry in October 2018, with Microsoft and TikTok subscribing more recently. You can read more about the Code here. It sets out a wide range of commitments the signatories agree to, with the aim of taking a collective approach to preventing the spread of online disinformation. Disinformation has come under considerable regulatory scrutiny in the past couple of years, and the (voluntary) Code was Big Tech’s attempt to stave off compulsory legislative measures. Despite this, fake news and conspiracy theories have flooded social media since the beginning of the COVID-19 pandemic, leading the WHO Director-General to declare: ‘we’re not just fighting a pandemic; we’re fighting an infodemic’.

As we continue to face an increase in unverified information spreading online, the assessment highlights that, whilst the Code is a valuable instrument for platforms, its self-regulatory nature falls short of the hard-line approach needed to give users greater protection. Twelve months on from the Code’s implementation, this article considers the question: what further steps are necessary to ensure platforms and advertisers tackle the problem of disinformation effectively?

Monitoring the Code of Practice

The Commission assessed the effectiveness of the Code by monitoring how well signatories had implemented each of the commitments they had agreed to. Broadly, these were to:

  • Reduce advertising opportunities and economic incentives for actors that disseminate disinformation online;
  • Enhance transparency of political advertising;
  • Take action against, and disclose information about, the use of manipulative techniques on platforms’ services designed to artificially boost the dissemination of information online and enable false narratives to become viral;
  • Set up features that give prominence to trustworthy information, so that users can critically assess content they access online; and
  • Engage in collaborative activities with fact-checkers and the research community.

Additionally, the Commission issued a Joint Communication on 10 June, which focussed on platforms’ immediate response to the coronavirus pandemic. As a result, although technically outside the timeframe for the assessment of the Code, the Commission also considered the steps these platforms had taken to tackle health-related disinformation.

Outcome of the assessment

The Code provides a valuable framework which was previously lacking, setting out clear objectives for the policies platforms put in place. Crucially, it has held platforms and the advertising sector to account by putting them under public scrutiny. Given the increasing spread of fake news about COVID-19, it is more important than ever that platforms consistently fact-check posts and remove content shown to be false, misleading and potentially harmful. This marks a big step forward in regulating an increasingly digital world.

However, the assessment highlighted a need for greater clarity. This is not entirely surprising – arguably the writing has been on the wall since shortly after the Code was introduced, when the Code’s Sounding Board, a multi-stakeholder forum, opined that “there is no common approach, no clear and meaningful commitments, and the KPIs and objectives are not measurable”. Indeed, amongst other issues, the assessment concluded that there is a lack of KPIs against which to assess how effective platforms’ policies are. There are also no commonly shared definitions that all signatories can adopt. This has resulted in an inconsistent and incomplete application of the Code. At a time when preventing the ‘infodemic’ is paramount, the Code’s shortcomings highlight the need for a Europe-wide approach to tackling disinformation – measures are only effective if all platforms comply. As the assessment notes, the voluntary nature of the Code has created an inherent ‘regulatory asymmetry’ between its signatories and non-signatories. This limits how effective the Code can be, as malicious actors can simply move to non-signatories’ platforms to propagate their disinformation.

The end of self-regulation?

The assessment has provided food for thought on how Europe needs to take a more assertive approach to tackling disinformation, particularly as people look online for answers amid the uncertainty of the COVID-19 pandemic. Facebook and Instagram have directed more than 2 billion people to resources from health authorities, underlining how crucial it is that social media sites take their responsibility on this issue seriously. In response to the publication of the assessment, Věra Jourová, Vice-President for Values and Transparency, said: ‘the time has come to go beyond self-regulatory measures’ – a sentiment many had called for even in a pre-pandemic world.

The shift away from self-regulation ties in with the development of the UK’s regulatory framework to tackle online harms. Companies falling within the scope of the Online Harms Bill, currently making its way (slowly) through Parliament, will have a legal duty to comply with it. Other measures, such as the ICO’s Age Appropriate Design Code, have already come into force in the UK – all of which aim to address wide-ranging concerns about the dangers posed by misuse of online platforms. The proposals in the Online Harms Bill are intended to provide a more uniform approach to protecting users online, although it remains to be seen whether this will in fact be achieved as and when the legislation comes into force. You can read more about the Online Harms Bill here.

To conclude, the Code has helped progress the conversation between platforms and authorities about the problem of disinformation. Following this assessment, the Commission has said it will deliver a more comprehensive approach by the end of the year, in the form of a European Democracy Action Plan and a Digital Services Act package, underlining that an EU-wide approach has been deemed the most effective way to tackle the issue.


social media, disinformation, eu commission, europe, online disinformation, online harms, online safety, eu