Centre for Democracy & Technology, Europe

CDT Europe

CDT Europe champions policies protecting against discriminatory and exploitative uses of technology, focusing on platform accountability, data protection, and limits on government surveillance.

Lobbying Activity

Response to Digital package – digital omnibus

14 Oct 2025

We welcome the Commission's efforts to support the effective implementation of the AI Act, for example through the creation of the AI Service Desk or the publication of guidelines. At the same time, we firmly oppose any form of simplification that would jeopardise the hard-won fundamental rights protections and obligations under the law. Simplification efforts should never lead to deregulation. Experience with the first omnibus proposal on sustainability has shown how easily amendments presented as simplification can touch upon the core substance of a law and seriously affect the fundamental rights protections anchored therein. Any form of simplification should ultimately serve the purpose of the AI Act, including ensuring a high level of protection of the fundamental rights guaranteed in the Charter. We highlight the need to ensure the effective and consistent application and enforcement of the AI Act, enabling individuals to seek redress in case of fundamental rights infringements, especially in the absence of the AI Liability Directive. This includes guidance on the right to an explanation as well as on cooperation between different supervisory authorities and their proper resourcing, while preserving the AI Act's interdisciplinary enforcement structure and especially the role of fundamental rights authorities. Given the AI Act's early stage of implementation, we furthermore call upon the Commission not to simplify obligations of providers and deployers in the absence of clear, verifiable and transparent evidence of the law's negative implications for innovation and the compliance burden for companies.
While we agree that further guidance is needed on the interplay between the AI Act and other legislation, a detailed analysis we carried out of the obligations under different digital legislation, as well as sectoral legislation under the New Legislative Framework (attached to this call for evidence), has shown that the AI Act extensively caters to pre-existing legislation. In particular, providers have flexibility when deciding how to comply with several obligations under different legal frameworks (Recital 64 AI Act). Its underlying architecture and approach are therefore simplified by default, with subsisting obligations fulfilling a distinct function that must be preserved. Finally, we reiterate our position that there should be no delay in the application of any obligations under the AI Act, however targeted such delays may be. The rationale that a stop-the-clock is necessary to cater for delays in the standardisation of high-risk obligations ignores the purpose of standards and the AI Act's own approach. Not only are standards strictly voluntary tools, and therefore not a necessary precondition for compliance by providers and deployers, but the AI Act anticipates the possibility of delays, advancing common specifications issued by the European Commission as the alternative (Article 41). As already highlighted in our open letter to the Commission, subordinating the application of high-risk obligations to the finalisation of standards would reward the industry actors responsible for delaying these standards in the first place, and provide them with an incentive to continue to do so. Moreover, delaying the application of parts of the AI Act does not provide legal certainty for affected individuals or for providers and deployers.
Read full response

Response to European Democracy Shield

26 May 2025

Please see attached document
Read full response

Meeting with Sandro Ruotolo (Member of the European Parliament)

14 May 2025 · EU action on spyware

Meeting with Marco Giorello (Head of Unit Communications Networks, Content and Technology) and Bureau Européen des Unions de Consommateurs

19 Mar 2025 · Code of Conduct on Online Advertising – Workshop 4

Meeting with Michael McGrath (Commissioner)

13 Mar 2025 · Exchange on range of issues related to Commissioner’s portfolio including simplification, digital, data privacy and technology-related developments

Meeting with Werner Stengg (Cabinet of Executive Vice-President Henna Virkkunen)

6 Mar 2025 · EU digital policy

Meeting with Birgit Sippel (Member of the European Parliament) and Bureau Européen des Unions de Consommateurs

19 Feb 2025 · Guidelines for banned AI use cases and the definition of AI (Staff-level)

Meeting with Birgit Sippel (Member of the European Parliament)

15 Oct 2024 · Digital rights in the 10th parliamentary term

Meeting with Markéta Gregorová (Member of the European Parliament)

10 Oct 2024 · AI, privacy and accountability

Meeting with Alexandra Geese (Member of the European Parliament)

2 Oct 2024 · Digital policies

Meeting with Alexandra Geese (Member of the European Parliament) and Bureau Européen des Unions de Consommateurs

1 Oct 2024 · Visionary roundtable: building an EU digital enforcement strategy

Meeting with Paul Tang (Member of the European Parliament, Shadow rapporteur)

10 Jun 2024 · Staff Level: Child Sexual Abuse Regulation

Meeting with Paul Tang (Member of the European Parliament) and European Digital Rights

17 Apr 2024 · Staff Level: Fifth Edition of the Civil Society Roundtable Series

Meeting with Birgit Sippel (Member of the European Parliament)

13 Mar 2024 · EU-US e-evidence & UN Convention on countering the use of information and communications technologies for criminal purposes (COM(2022) 132 final) (Staff-level)

CDT Europe Urges Clearer and More Detailed DSA Transparency Reporting

24 Jan 2024
Message — The organization requests clearer distinctions between illegal content and platform policy violations in the reports. They advocate for more detailed data on government orders and law enforcement activity. They also call for standardized definitions to enable better comparison between platforms.
Why — These changes would provide higher-quality data for researchers to perform meaningful independent oversight.
Impact — Users risk having lawful speech removed if platforms prioritize meeting speed-based performance metrics.

Meeting with Maria-Manuel Leitão-Marques (Member of the European Parliament, Shadow rapporteur) and Access Now Europe

15 Sept 2023 · Political Advertising

CDT Europe urges human-rights focus in platform audit rules

2 Jun 2023
Message — CDT Europe requests the Commission establish baseline standards and clear terminology to prevent platforms from drafting their own audit criteria. They urge the formal inclusion of civil society and human rights experts throughout the auditing process.
Why — This secures a formal role for NGOs to provide oversight and participate in the auditing market.
Impact — Online platforms would lose control over the metrics used to judge their systemic risk management.

CDT Europe Demands Independent Oversight For Platform Data Access

30 May 2023
Message — The organization calls for an independent intermediary body to oversee data sharing and vetting. They demand a tiered system and safeguards to prevent law enforcement from misusing data.
Why — These measures ensure researchers and civil society gain transparent, affordable access to platform data.
Impact — Law enforcement agencies would be barred from exploiting research data for illegitimate surveillance purposes.

Meeting with Alexandra Geese (Member of the European Parliament) and AWO Belgium BV (trading as AWO)

30 May 2023 · CSO workshop - Impactful advocacy during the Artificial Intelligence Act Trilogues

Meeting with Paul Tang (Member of the European Parliament, Shadow rapporteur)

11 Apr 2023 · Child Sexual Abuse Regulation

Meeting with Věra Jourová (Vice-President) and Transparency International Liaison Office to the European Union

17 Mar 2023 · Defence of democracy package

Meeting with Javier Zarzalejos (Member of the European Parliament, Rapporteur)

12 Jan 2023 · Meeting with the Centre for Democracy & Technology, to discuss the proposal on preventing and combating child sexual abuse online

Meeting with Paul Tang (Member of the European Parliament, Shadow rapporteur) and Google

12 Jan 2023 · Closed door stakeholders meeting on Child Sexual Abuse Regulation with MEP Alex Agius Saliba and MEP Helene Fritzon

Meeting with Alexandra Geese (Member of the European Parliament, Shadow rapporteur)

24 Oct 2022 · Political advertising

Meeting with Alexandra Geese (Member of the European Parliament, Shadow rapporteur)

18 Oct 2022 · Political advertising, Digital Services Act implementation

Meeting with Paul Tang (Member of the European Parliament, Shadow rapporteur for opinion) and European Partnership for Democracy

28 Jun 2022 · Event on Political Advertising Regulation (assistant participated)

Meeting with Werner Stengg (Cabinet of Executive Vice-President Margrethe Vestager)

18 Jan 2022 · DSA

Meeting with Mette Dyrskjot (Cabinet of Executive Vice-President Margrethe Vestager), Werner Stengg (Cabinet of Executive Vice-President Margrethe Vestager)

7 Oct 2021 · Digital Services Act, EU Action Plan on Democracy and Human Rights

Response to Requirements for Artificial Intelligence

3 Aug 2021

The Centre for Democracy & Technology, Europe (CDT) welcomes the EU AI Act and the high priority it aspires to give to protecting fundamental rights. All AI systems should be subject to a human rights impact assessment and to regulation proportionate to the risks identified in that assessment. A risk-based approach can be helpful in ensuring proportionate regulation, but in order to appropriately protect human rights, this needs to integrate a rights-based approach. The theory of risk should recognise that risk increases when the likelihood, or the seriousness, of infringement on rights increases. The proposal’s hierarchy of risk at times focuses on the technology and at times on the context. The category of ‘certain AI systems’ (low-risk, Art. 52) includes biometric categorisation and AI systems to prevent and investigate crimes. These AI applications are actually both high-risk, with historic and current examples of rights abuse. The prohibition on social scoring only applies to governments, but private entities pose the same risk of using such systems to infringe human rights, whether they are performing an outsourced public service or using such a system in their own services. The proposal rightly classifies biometric surveillance by law enforcement in publicly accessible places as an ‘unacceptable risk’, but the derogations include some of the highest risks to human rights and will swallow the rule. For instance, permitting its use to combat terrorism is fraught with risks because of human rights loopholes in European and national counter-terrorism legislation. Any law enforcement use of biometric surveillance is inherently high-risk and should be prohibited or subject to robust regulation. The proposal has ad hoc references to content moderation. CDT has documented how automated content analysis can be inaccurate and perpetuate discrimination. But there is no specific reference to this danger.
In terms of legal clarity, there is a risk of confusion between the due diligence provisions of the draft Digital Services Act and the AI Act. The draft proposal limits avenues for individual redress or access to remedy. Remedies are almost exclusively accorded to vendors of AI and professional users (including governments), and not to individual users or marginalised or at-risk groups. Because the proposed legal basis of the draft proposal is Article 114 TFEU, the governance and enforcement mechanisms are rooted in product safety and market surveillance logic. Given the goal of better protecting fundamental rights, Art. 2 TEU should be added as an additional basis. This could allow a mandate for equality bodies, national human rights institutions and ombudspersons to be integrated into the governance system. Their expertise on human rights impact assessments could better inform risk assessment. The draft should also provide end-users with concrete and actionable rights to object to being subject to AI. To ensure the compatibility and enforcement of EU equality legislation with the draft act, an individual or civil society organisation should have an avenue to make claims of discrimination, and the burden should shift so that the entity using the AI system is required to disprove that discrimination. The draft AI Act also gives disproportionate power to private actors. It is unclear how the proposed standards could be enforceable, in particular with regard to individual or group complaints. The self-assessment and standardisation approach risks absolving public authorities from policy-making (offloading it to privatised standards instead). The current European standard-setting process is inaccessible to most public interest actors, and the harmonisation process threatens to weaken whatever standards are set.
This is a further example of why it is important to formally and purposefully bring more human rights and public interest actors into the process of risk-assessment, enforcement and policy-making.
Read full response

Response to Digital Services Act: deepening the Internal Market and clarifying responsibilities for digital services

31 Mar 2021

The Centre for Democracy & Technology Europe (CDTE) welcomes the draft proposal on the Digital Services Act (the Draft) and the opportunity to provide feedback. The proposal includes a number of key elements of rights-protecting online content regulation: the 'Good Samaritan' principle, which provides intermediaries with liability protections for voluntarily taking action against abusive user-generated content, and the prohibition against general monitoring obligations, which has been a cornerstone of the e-Commerce Directive. Calls by CDTE and other human rights organisations for more transparency over algorithms and online advertising are also reflected in the Draft. CDTE is concerned, however, that a number of these rights-protecting elements are inadvertently undermined by other provisions. International and European human rights law is clear that decisions on the legality of speech are the sole purview of the Courts. Yet, the Draft assigns this task to a range of non-judicial actors including private companies and state authorities. The Draft delegates decisions on the legality of speech to platforms in several places, including through their internal complaints handling system (Art. 17 (3)) and the out-of-court dispute settlement process (Art. 18), which does not meet the standard of a tribunal under the EU Charter of Fundamental Rights. Equally, a broad range of non-judicial actors can provide notice that would be sufficient to expose hosts to liability for users’ speech. Notices from any individual user, law enforcement, or other non-judicial actor (including mandatory 'trusted flagger' notifications, Art. 19) will defeat the safe harbor provision for hosting services (Art. 14(3)), creating significant liability risk and leading intermediaries to remove content simply upon receipt of such a notice. 
This creates significant potential for abusive notices, and poses a particular threat to civil society organisations and human rights defenders in areas of Europe where the rule of law is already under pressure. CDTE generally supports robust human rights impact assessments but emphasizes that the vagueness of the Draft’s risk assessment obligations (Art. 26), including the obligation to prevent the ‘dissemination of illegal content’, creates a conflict with the earlier provision prohibiting general monitoring and also with Art. 26(1)(b) on protecting fundamental rights. In order to ensure the effective oversight of algorithms and recommender systems, the independence, expertise and competence of the body carrying out such audits require further thought. CDTE concurs with the opinion of the European Data Protection Supervisor that recommender systems should by default not be based on profiling within the meaning of Art. 4(4) of the GDPR. Given the risks that micro-targeting and profiling pose in a democracy, CDTE equally agrees that advertising based on pervasive tracking should be phased out. The lack of independence of the proposed governance structures is concerning. The European Commission will effectively be the oversight body (Chair of the Board for European Digital Services), but as the EU’s executive arm, it does not have the requisite independence. Furthermore, it is the drafter of the voluntary codes of conduct concerning online content; this creates a conflict of interest, as the codes cannot be truly voluntary when they are penned by the authority legally mandated to enforce the Regulation. In addition, Recital 68 explicitly states that refusing to apply codes of conduct can be taken into account when determining infringement of the Regulation. Art. 21 would create a new obligation for platforms to ‘notify of suspicious criminal offences’, but this is not an appropriate role for a private company.
It risks undermining individuals’ privacy in their digital communications, the right of presumption of innocence and EU criminal law’s own protections with regard to notice that an individual is a suspect.
Read full response

Meeting with Filomena Chirico (Cabinet of Commissioner Thierry Breton)

18 Jan 2021 · DSA

Meeting with Thierry Breton (Commissioner)

9 Dec 2020 · Roundtable with NGOs on DSA and DMA

Meeting with Věra Jourová (Vice-President)

12 Nov 2020 · EDAP, DSA

Meeting with Didier Reynders (Commissioner)

5 May 2020 · 1. COVID-19 situation - Role of digital technologies and data in combating the outbreak 2. E-evidence proposals 3. Terrorist Content Online. State of negotiations: possible areas of compromise. 4. AI policy

Meeting with Věra Jourová (Vice-President)

31 Jan 2020 · Political advertising and European Democracy Action Plan Digital policies and strategies

Meeting with Ylva Johansson (Commissioner)

29 Jan 2020 · Cooperation in fight against terrorism

Meeting with Daniel Braun (Cabinet of Vice-President Věra Jourová)

14 Jan 2020 · Disinformation, Platforms

Meeting with Ulrik Trolle Smed (Cabinet of Commissioner Julian King)

12 Sept 2019 · Cyber security

Meeting with Eric Peters (Cabinet of Commissioner Mariya Gabriel)

5 Dec 2018 · proposal for a Regulation on preventing the dissemination of terrorist content online

Meeting with Stig Joergen Gren (Cabinet of Vice-President Andrus Ansip)

7 Nov 2018 · Terrorist Content Regulation

Response to Measures to further improve the effectiveness of the fight against illegal content online

29 Mar 2018

The European Commission (EC) has undertaken many initiatives to combat potentially illegal content online. The measures are based on an apparent political consensus that more content must be removed, faster, and that internet intermediaries must filter, detect, prevent, and take down disputed content, both reactively and proactively. Many, including CDT, have argued that the EC’s approach challenges principles of limited liability embedded in the E-Commerce Directive (ECD). This jeopardises the internet as a space for free expression, access to information, entrepreneurship, and innovation. In framing its strategy, the EC should not focus exclusively on the perceived need to prevent and remove possibly illegal content. It should balance this objective with the need to protect and encourage free speech online. The EC mentions the possibility of launching new legislative measures. Any such initiative should be based on comprehensive data about the nature and volume of targeted content. No such data has been provided; the EC simply asserts that action is required. The Commission must conduct a thorough impact assessment that takes into account the following considerations and questions: The EC should acknowledge that the different categories of content it is concerned with have very different potential consequences, and must be met with proportionate responses, the aims of which must be legitimate. For example, incitement to specific and imminent acts of violence merits swift removal and law enforcement action. Statements that may be offensive but whose legality is harder to establish should not be met with the same response. Indiscriminate blocking or removal of content on the basis of mere allegations of illegality risks lawful speech being censored. Intermediaries will err on the side of caution and remove content that requires time-consuming analysis and careful legal judgments.
To reduce this risk, it is essential that removal of content based on mere allegations of illegality be limited to cases where the content at stake is manifestly illegal and the removal is accompanied by necessary safeguards against abuse. Multiple types of hosting fall under the scope of the ECD. Different types of intermediaries interact differently with the third-party content they host, and many do not host content at all. The standard for appropriate responses to allegations of illegality should depend on the relationship of the intermediary to the disputed content. Blanket obligations on an entire category of intermediary for an entire category of content are certain to lead to over-removal and censorship of lawful speech. Another area requiring thorough research is the effectiveness and proportionality of the growing number of measures intermediaries are already taking, whether to comply with existing rules, to enforce terms of service, or via government-led public-private partnerships, such as the EU Internet Forum and the Code of Conduct on Illegal Hate Speech. Very little data on these initiatives is available, and only in the form of aggregate numbers. The data is not sufficient to determine whether ‘progress is being made’ to ‘tackle illegal content’, because it is impossible to verify whether the content that is flagged and removed is illegal. The EC places great faith in the use of automated detection tools (and has proposed mandating such tools in the DSM Copyright Directive proposal), but it has not carried out studies on the effectiveness and precision of these tools. The metric for success cannot just be the quantity of content detected and removed; it must include an independent assessment of legality. Both the EC and companies participating in voluntary schemes are responsible for the impact of these measures on the right to free expression and access to information.
Unless and until the EC provides comprehensive data on these issues, it is premature to put forward legislative options at this stage.
Read full response

Meeting with Julie Ruff (Cabinet of Commissioner Julian King)

23 Mar 2018 · Cybersecurity

Meeting with Daniel Braun (Cabinet of Commissioner Věra Jourová)

21 Mar 2018 · Online content

Meeting with Juhan Lepassaar (Cabinet of Vice-President Andrus Ansip), Stig Joergen Gren (Cabinet of Vice-President Andrus Ansip)

6 Mar 2018 · Tackling illegal content online

Meeting with Eric Peters (Cabinet of Commissioner Mariya Gabriel)

2 Mar 2018 · Tackling Illegal Content Online

Meeting with Giorgios Rossides (Cabinet of Commissioner Dimitris Avramopoulos)

23 Feb 2018 · Tackling illegal content online

Meeting with Carl-Christian Buhr (Cabinet of Commissioner Mariya Gabriel)

3 Oct 2017 · Cyber / data

Meeting with Juhan Lepassaar (Cabinet of Vice-President Andrus Ansip), Laure Chapuis-Kombos (Cabinet of Vice-President Andrus Ansip)

7 Jun 2017 · NetzDG

Meeting with Daniel Braun (Cabinet of Commissioner Věra Jourová)

29 May 2017 · The draft German Social Media Law

Meeting with Isabelle Perignon (Cabinet of Commissioner Věra Jourová), Kevin O'Connell (Cabinet of Commissioner Věra Jourová)

21 Sept 2015 · Agreement with the United States on mutual legal assistance

Meeting with Robert Madelin (Director-General Communications Networks, Content and Technology) and Teneo Brussels

21 May 2015 · Digital Single Market

Meeting with Marlene Holzner (Digital Economy)

25 Feb 2015 · Freedom of expression

Meeting with Adrienn Kiraly (Cabinet of Commissioner Tibor Navracsics)

20 Jan 2015 · Copyright

Meeting with Kevin O'Connell (Cabinet of Commissioner Věra Jourová)

9 Jan 2015 · Data protection

Meeting with Hanna Hinrikus (Cabinet of Vice-President Andrus Ansip), Jasmin Battista (Cabinet of Vice-President Andrus Ansip), Kamila Kloc (Cabinet of Vice-President Andrus Ansip)

16 Dec 2014 · Internet governance, copyright, data protection, telecoms single market

Meeting with Hanna Hinrikus (Cabinet of Vice-President Andrus Ansip)

16 Dec 2014 · Internet Governance