Disinformation Index Ltd (GDI)

The Global Disinformation Index is a non-profit based in London, UK, that uses both machine-learning technology and community-driven, human-powered analysis to provide transparent, independent and neutral assessments of online disinformation around the world.

Lobbying Activity

Meeting with Agnieszka Skonieczna (Cabinet of Commissioner Thierry Breton), Filomena Chirico (Cabinet of Commissioner Thierry Breton)

11 May 2022 · GDI news ranking system

Meeting with Daniel Braun (Cabinet of Vice-President Věra Jourová)

24 Mar 2022 · Disinformation

Meeting with Filomena Chirico (Cabinet of Commissioner Thierry Breton)

11 Mar 2022 · DSA and actions against disinformation, in particular in the context of the Ukraine situation

Meeting with Werner Stengg (Cabinet of Executive Vice-President Margrethe Vestager)

8 Mar 2022 · Presentation of their activity, disinformation, Digital Services Act

Meeting with Věra Jourová (Vice-President)

27 Oct 2021 · Disinformation

Meeting with Filomena Chirico (Cabinet of Commissioner Thierry Breton)

11 May 2021 · Disinformation

Response to Digital Services Act: deepening the Internal Market and clarifying responsibilities for digital services

31 Mar 2021

1. Risk Assessment (Article 26)

Article 26 outlines the requirement for very large online platforms (VLOPs) to conduct risk assessments of their business operations within the EU. In addition to the areas already outlined, these assessments should cover platforms' exposure to disinformation risks. Knowing and mitigating these risks can help to demonetise disinformation across the channels that fund it (e.g. online advertising, e-payment systems, e-commerce platforms, cryptocurrencies and crowd-funding platforms).

The Global Disinformation Index has outlined the importance of understanding disinformation risk as part of its methodology for site-level assessments. GDI's methodology has already been used to evaluate the media markets of several EU countries, including Estonia, France, Germany and Latvia, with Italy and Spain to follow in 2021.

Risk assessments could include a "know-your-customer" (KYC) approach that platforms, both small and large, must adopt to limit their exposure to spreading and funding disinformation. If such standards apply only to VLOPs, we have found, and can continue to expect, that peddlers of disinformation will simply seek other channels to monetise their content.

The DSA could also consider instructing platforms to use third-party assessments of the disinformation risk of their activities and customers (i.e. sites). This would limit the spread of disinformation and ensure that the determination of what is and is not disinformation does not rest solely with tech companies whose business model relies on engaging content. Such risk assessments could be used to downrank and limit the influence of disinformation, preserving the right to free speech online while curtailing the reach and monetisation of harmful content. Market-wide adoption of a gold-standard risk rating for news sites would bring accountability to the adtech system, address the problems associated with online advertising, and bring independent expertise into the assessment of online content.

2. Researchers and Trusted Flaggers (Articles 19 and 31)

In the current DSA proposal, Article 31 stipulates that one way researchers will be vetted is through affiliation with an academic institution, and Article 19 lists the conditions an entity must meet to become a trusted flagger. While it is vital that researchers be impartial and qualified, these qualities are not confined to academia. Highly experienced researchers specialising in disinformation and online harms work at non-profit organisations such as GDI and the Institute for Strategic Dialogue, and for-profit entities such as Graphika also study disinformation networks and campaigns. If the DSA limited vetted researchers to those working in academia, it would exclude these valuable and knowledgeable resources, ultimately undermining its own objectives.

3. Codes of Conduct for Online Advertising (Article 36)

Article 36 states that the Commission will facilitate and encourage the creation of codes of conduct for online advertising and will ensure that those codes meet the online ad transparency requirements specified in Articles 24 and 30. Many companies have introduced advertising and publishing policies that restrict which ads can run on their networks, but these policies are often inconsistent, unstandardised and unenforced. For example, GDI found that many adtech companies simply had no policy on COVID-19 disinformation. As GDI has also documented within the EU, many online ad policies are silent on a range of harmful content, including content targeting key human-rights concerns such as gender, sexual orientation, and racial or religious discrimination. Additionally, companies' internal advertising and publishing policies do not always align with one another.
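The mechanism proposed in section 1 of the response, feeding an independent site-level risk rating into ranking and monetisation decisions rather than removing content outright, can be made concrete. The following is a minimal illustrative sketch, not GDI's implementation or methodology: the SiteAssessment structure, the domains, the scores and the demonetisation threshold are all hypothetical.

```python
# Hypothetical sketch: how a platform or ad exchange might fold a
# third-party disinformation risk rating into ranking and ad-serving
# decisions. All names, scores and thresholds are invented for
# illustration; a real system would query an independent rating provider.

from dataclasses import dataclass

@dataclass
class SiteAssessment:
    domain: str
    risk_score: float  # 0.0 (low risk) .. 1.0 (high risk), from a third-party rater

# Illustrative ratings only.
RATINGS = {
    "example-news.eu": SiteAssessment("example-news.eu", 0.12),
    "clickbait-mill.example": SiteAssessment("clickbait-mill.example", 0.87),
}

DEMONETISE_THRESHOLD = 0.8  # KYC-style cut-off: refuse to serve ads above this

def adjusted_rank(base_relevance: float, domain: str) -> float:
    """Downrank rather than delete: scale relevance by (1 - risk)."""
    assessment = RATINGS.get(domain)
    if assessment is None:
        return base_relevance  # unrated sites are left untouched in this sketch
    return base_relevance * (1.0 - assessment.risk_score)

def may_serve_ads(domain: str) -> bool:
    """Demonetisation decision based on the independent risk rating."""
    assessment = RATINGS.get(domain)
    return assessment is None or assessment.risk_score < DEMONETISE_THRESHOLD

if __name__ == "__main__":
    for domain in RATINGS:
        print(domain, round(adjusted_rank(1.0, domain), 2), may_serve_ads(domain))
```

Scaling relevance by (1 - risk) leaves the content available while curtailing its reach, and the separate ad-serving cut-off reflects the response's framing of downranking and demonetisation as speech-preserving alternatives to removal.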

Meeting with Werner Stengg (Cabinet of Executive Vice-President Margrethe Vestager)

27 Nov 2020 · Digital Services Act, European Democracy Action Plan

Meeting with Daniel Braun (Cabinet of Vice-President Věra Jourová)

23 Mar 2020 · Disinformation