Pour Demain Europe

Pour Demain is a non-profit think tank developing proposals on neglected issues.

Lobbying Activity

Meeting with Werner Stengg (Cabinet of Executive Vice-President Henna Virkkunen)

28 Jan 2026 · GPAI and AI Office

Response to Digital package – digital omnibus

13 Oct 2025

The Digital Omnibus must not affect recently adopted rules for general-purpose AI. Pour Demain, an independent think tank working towards the responsible development and deployment of general-purpose AI, understands the need for regulatory simplification for SMEs in Europe. However, simplification must not lead to deregulation, as happened with previous omnibus packages such as the CSRD (Corporate Sustainability Reporting Directive) and CS3D (Corporate Sustainability Due Diligence Directive). The EU's digital rulebook has been expertly crafted over more than a decade to fill important regulatory gaps. In some cases, such gaps remain, for example regarding the recently shelved AI Liability Directive, leaving downstream deployers (often European SMEs) significantly exposed to liability risk. Specifically regarding the AI Act, simplification before a law has been enforced carries numerous risks, most importantly reducing legal certainty for businesses and lowering protections for the safety, health and fundamental rights of EU citizens. It also sends the message globally that the EU was wrong in its approach, something that cannot be known before proper enforcement. Particularly regarding Chapter V and the rules for general-purpose AI (GPAI) models, the recently finalised Code of Practice is based on already adopted industry practices and has received wide industry endorsement, with similar legislation already being adopted extra-territorially (recently in California, with Senate Bill 53).

GPAI rules are essential for European competitiveness for four main reasons:
1. They push obligations up the value chain, protecting downstream deployers, who are likely to form the foundation of the EU's AI economy.
2. They provide legal certainty, allowing businesses to predictably adapt to safety standards.
3. They foster consumer and business trust, boosting AI adoption.
4. They support the EU's goal of developing its own technology ecosystem based on European values, essential for sovereignty.

As well as being essential for the above reasons, the EU's GPAI rules are simple to enforce for two main reasons:
1. Enforcement is centralised at the level of the AI Office, setting an important condition for a truly integrated digital single market.
2. The rules are inherently proportional: negligible obligations apply to most companies, and stricter ones kick in only when large computing thresholds are crossed.

The recently finalised Code of Practice, detailing Chapter V of the AI Act on GPAI models, has been signed by almost all frontier model providers and is actively shaping global AI governance standards. The EU must continue its positive momentum, steering transformative technology towards human-centric outcomes.

Response to Cloud and AI Development Act

3 Jul 2025

We propose concrete measures to strengthen the Union's cybersecurity, technological sovereignty, and technical AI governance. We endorse Policy Option 3, a regulatory approach, as the most effective path to balance political feasibility with the operational needs of a developing European cloud and AI ecosystem, akin to the European Chips Act.

1. Cybersecurity: To address mandates within the AI Act, we propose strategic investment in hardware-level security to counter threats like supply-chain attacks and information extraction. We recommend developing cluster-level Trusted Execution Environments (TEEs) that can scale to the thousands of GPUs used for frontier AI, protecting data beyond current single-node limitations. To prevent attacks on physical components, we propose advancing hardware auditing techniques such as functional testing and side-channel analysis. As a defence-in-depth measure against model theft, we recommend creating hardware solutions that limit data upload speeds, making the exfiltration of large AI models impractically slow. Finally, we propose dedicated research to mitigate side-channel attacks by countering information leakage from hardware operations through methods like Faraday cages and noise injection.

2. Technological Sovereignty: We propose initiatives that build genuine EU sovereignty while avoiding the unintended consequence of increased foreign dependency. While we support concepts like "special compute zones," we stress the need for careful implementation and endorse comprehensive projects like EuroStack and "CERN for AI." Furthermore, we recommend strong competition policies, including the potential regulation of cloud services as a public utility, to foster a competitive and fair market.

3. Technical AI Governance: We propose that the EU lead in creating novel technical governance solutions. This includes developing Hardware-Enabled Governance Mechanisms (HEMs) to enhance transparency and enforcement, which could become a valuable source of intellectual property. We also recommend capitalising on the emerging multi-billion-euro AI assurance and AI-driven cybersecurity markets by investing in the development of evaluation, auditing, and defensive AI tools.

By pursuing these strategic directions, we believe the EU can establish a secure and world-leading AI ecosystem, creating substantial economic opportunities and solidifying its position as an AI Continent. Please see the attached PDF for our full submission and references.
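The bandwidth-limiting argument above can be made concrete with a back-of-the-envelope calculation. The figures below (a roughly 1 TB model, datacentre-class 10 Gbps versus a hypothetical 10 Mbps hardware cap) are illustrative assumptions for the sketch, not values from the submission:

```python
def exfiltration_days(model_size_gb: float, upload_mbps: float) -> float:
    """Days needed to upload `model_size_gb` gigabytes of model weights
    at a sustained rate of `upload_mbps` megabits per second."""
    size_megabits = model_size_gb * 8_000  # 1 GB (decimal) = 8,000 megabits
    seconds = size_megabits / upload_mbps
    return seconds / 86_400  # seconds per day

# Assumed 1 TB model: uncapped vs. hardware-capped outbound bandwidth.
fast = exfiltration_days(1000, 10_000)  # 10 Gbps: minutes, not days
slow = exfiltration_days(1000, 10)      # 10 Mbps cap: over a week
```

At datacentre-class bandwidth the copy completes in minutes; under a hard hardware cap it stretches to more than nine days, long enough for anomalous-transfer monitoring to detect and interrupt the theft.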

Response to Towards a Circular, Regenerative and Competitive Bioeconomy

23 Jun 2025

We welcome the Commission's initiative to develop a new bioeconomy strategy by 2025. This submission demonstrates how responsible innovation frameworks can accelerate EU bioeconomy objectives whilst addressing barriers to commercialisation and competitiveness. As the COVID-19 pandemic demonstrated, sovereign biotechnology capacity is crucial for crisis recovery and economic resilience. NATO notes that synthetic biology will drive the next revolutionary technology cycle, bringing substantial benefits alongside enormous risks of harmful uses. Our key recommendations:
(1) Harmonised Biosecurity Standards: Establish EU-wide frameworks, including mandatory nucleic acid synthesis screening, to create a level playing field and position the EU as a global leader in responsible biotechnology.
(2) Streamlined Regulation: Implement risk-based assessments and bioeconomy regulatory sandboxes to accelerate innovation whilst maintaining public trust and environmental protection.
(3) Strategic Investment: Link responsible innovation compliance with commercialisation funding to accelerate laboratory-to-market transitions and address strategic dependencies in biotech supply chains.
By integrating responsible innovation principles into Europe's bioeconomy development, the strategy can establish the secure foundation necessary for a thriving, competitive, and sustainable European bioeconomy. We stand ready to contribute further expertise and thank you for this opportunity to provide feedback.

Response to Biotech Act

11 Jun 2025

We commend the Commission for the initiative for an EU Biotech Act. This submission offers recommendations for ensuring the Act effectively integrates security and innovation to fortify Europe's strategic autonomy and global leadership in this critical field. Our response focuses on establishing a resilient and responsible biotechnology ecosystem. As demonstrated during the COVID-19 pandemic, sovereign production capacity is a direct determinant of crisis recovery and economic resilience. The stakes are high, as NATO notes the enormous risks of harmful uses of biotechnology alongside its substantial benefits. Attached is Pour Demain's full contribution, co-signed by 14 experts from academia, the synthesis industry, and non-profit organizations. In brief:
(1) The Threat Landscape Is Evolving: The convergence of AI with biotechnology introduces novel dual-use risks. Current EU regulatory frameworks are too fragmented to adequately address this new reality.
(2) Harmonised Governance Is Essential: The Biotech Act should establish an EU-wide framework for biorisk governance. A core component should be codifying nucleic acid synthesis screening as a legal requirement for all providers, aligning with calls for harmonization from the industry and creating a level playing field that enhances European biotech competitiveness.
(3) Security Measures Are Practical and Feasible: Nucleic acid synthesis screening is a low-friction, high-impact solution that integrates seamlessly with scientific workflows. Cost-effective tools are readily available.
(4) Strategic Capacity Must Be Secured: The Biotech Act would benefit from dedicated EU funding instruments that bolster domestic production capacity and accelerate the transition from laboratory to market, directly reinforcing Europe's resilience and strategic autonomy.
By embedding robust governance and targeted investment in sovereign capacity, the Biotech Act can create the secure foundation necessary for a thriving and competitive European bioeconomy. We stand ready to contribute additional expertise or evidence-based findings wherever helpful, and we thank you for the opportunity to submit feedback.

Response to A European Strategy for AI in science – paving the way for a European AI research council

5 Jun 2025

Please see the attachment for numbered references. We commend the Commission for this initiative. We recommend:

1. Integrating AI-Biology Dual-Use Risk Assessment
The convergence of AI and biotechnology presents unprecedented dual-use risks, requiring attention from European AI research. Recent studies demonstrate AI models approaching expert-level performance on complex biological tasks, including protein design [1] and virology troubleshooting, with some models outperforming 94% of expert virologists on practical laboratory problems [2]. NATO's 2025 Science & Technology Trends report identifies synthetic biology as driving the next technology cycle, with advances raising significant research security and health regulation issues [3].
Recommendation: The European AI Research Council (EAIRC) should establish mandatory CBRN risk assessments for AI research projects at the AI-biology interface, building on existing frameworks in the AI Act's Code of Practice.

2. Harmonizing EU-Wide Biosecurity Standards for AI-Enabled Research
Current EU regulatory frameworks remain fragmented and inadequately address novel risks from AI-accelerated biological research. The WHO has identified critical gaps in the governance of emerging biorisks [4], while European security experts have ranked bioterrorism among the top ten security priorities [5]. Recent assessments by the OECD highlight synthetic biology as requiring urgent biosecurity and biosafety policy development [6], with the EU TERROR Joint Action identifying significant dual-use risks from AI-enhanced synthetic biology advances [7]. AI tools can now design novel biological materials and entire genomes [8], necessitating updated oversight mechanisms.
Recommendation: Integrate nucleic acid synthesis screening requirements into AI research funding conditions under Horizon Europe and FP10. Research projects utilising AI for biological applications should procure DNA/RNA synthesis only from providers who conduct screening according to established industry standards and undergo regular third-party conformity assessments [9]. Laboratories with benchtop synthesizers must implement equivalent screening procedures to address the growing in-house synthesis capabilities that bypass commercial oversight.

3. Updating EU Dual-Use Research Guidelines for AI-Biology Convergence
EAIRC should inform updates to EU dual-use research guidelines to address AI-enabled biological research. Current EU dual-use research guidance from 2019 [10] predates rapid advances in AI-biology convergence and lacks specific provisions for the risks of AI-enabled biological research. The scientific community has begun self-organizing around responsible AI development in protein design, with over 175 leading researchers pledging to obtain DNA synthesis only from providers that screen orders [11].
Recommendation: EAIRC should develop evidence-based recommendations to update EU dual-use research of concern guidelines, specifically addressing AI-biology research scenarios, and establish clear criteria for identifying and managing dual-use risks in AI-enabled biological research projects.

4. Assessing Negative Externalities and Market Failures
Not all innovation can be driven by IP-based market mechanisms. Furthermore, negative externalities, such as premature labour automation [12], risk workforce displacement [13] without guaranteed economic growth [14]. Well-targeted pull financing, proven in vaccine development and renewable energy [15], could address both issues, for example by supporting underfunded generic drug repurposing, which lacks patent incentives despite its public health benefits [16].
Recommendation: When evaluating initiatives, EAIRC should also assess socioeconomic externalities and prioritize interventions that correct market failures while ensuring socially beneficial innovation.

Response to Apply AI Strategy

4 Jun 2025

References in attachment. Pour Demain recommends the adoption of productivity-increasing European AI that protects the health, safety and fundamental rights of citizens, whilst promoting tech sovereignty.

Firstly, AI policy should support work augmentation over automation. Hasty automation can go wrong: a year after Swedish fintech company Klarna replaced 700 staff with AI, it began rehiring due to a drop in the quality of service [1]. Whilst such strategic mishaps often occur behind closed doors, US self-driving operator Cruise caught global headlines when automation led to a surge in car accidents and ultimately a suspension of the company's operating licence [2]. Overall, the evidence of benefits from AI adoption is mixed: most companies that disclose the financial benefits of adopting AI report cost savings of less than 10% and revenue gains of less than 5% [3]. Furthermore, outsourced workers, and their respective economies, lose income as their jobs get automated [4]. An undesirable scenario would be for EU companies to replace near-sourced labour with non-EU automated solutions: this would not only exacerbate intra-EU income inequality, but could also lead to negative growth effects for the EU as a whole. Moreover, automation can reduce overall output even in a closed economy when key assumptions (perfectly competitive markets, or workers accepting arbitrarily low wages) fail to hold [5], [6]. Consequently, to avoid the risk of low-productivity automation, work augmentation should be preferred over automation [7]. For especially promising applications, like AI-generated coding, direct labour market disruption can be avoided by encouraging adoption in startups. For example, for a quarter of startups incubated by the US accelerator Y Combinator, 95% of computer code is written by AI [8]. Additionally, support for AI adoption can complement policies that drive demand for innovation. For example, AI-assisted repurposing of generic drugs can cost-effectively reap major social benefits. However, because these discoveries cannot be patented, government funding may be necessary to create financial incentives [9].

Secondly, given the black-box nature of general-purpose AI [8], [10], trustworthiness remains a major obstacle for adoption. For example, sleeper agent models, easily misused by malicious actors to pursue harmful objectives, could create major vulnerabilities if embedded in EU critical infrastructure. Such risks could be introduced through hasty adoption of open-source solutions developed without compliance with EU law, or even by reliance on proprietary models of companies based outside the EU. Identifying or correcting such sleeper agent models is an unresolved technical problem [11]. However, these and other safety issues could be mitigated through effective enforcement of EU legislation, such as the transparency reporting, external assessment and incident reporting requirements in the EU AI Act and the related GPAI Code of Practice [12].

Thirdly, in addition to productivity and safety concerns, foreign private sector calls to subsidise AI-related investments [13] may not fully recognize the EU's interests in tech sovereignty. Many elements of comprehensive EU digital infrastructure [14] are already covered in the AI Continent Action Plan [15]. Notably, gigafactories would allow pooling resources towards moonshot AI projects in diverse areas. Proposals like the Digital Sovereignty Fund and the Buy European Tech Clause [16] can further address the needs for capital and demand for EU-based AI initiatives.

To conclude, current proposals already outline strategies for fostering the computing power, capital, and demand necessary for AI adoption. When targeted towards productivity-increasing European AI and supported by the legal certainty of an adequate regulatory framework, the Apply AI Strategy can successfully promote the adoption of competitive and resilient AI solutions [17].

Response to Communication on the EU Stockpiling Strategy

1 May 2025

We commend the Commission for developing a new EU stockpiling strategy for crisis preparedness. With Blueprint Biosecurity, we have identified considerations and recommendations for this strategy (described in the attached document). Our response focuses on the importance of, and opportunities for, pathogen-agnostic countermeasures against respiratory infections. In brief:
(1) Respiratory infectious diseases are a significant threat to European health and security.
(2) Stockpiling key pathogen-agnostic countermeasures (PPE and decontamination tools) would enhance EU preparedness.
(3) A coordinated EU approach to stockpiling would provide greater resilience and preparedness.
We stand ready to contribute additional expertise or evidence-based findings wherever helpful, and we thank you for the opportunity to submit feedback.

Response to EU Strategy on medical countermeasures

1 May 2025

We commend the Commission for developing a comprehensive MCM strategy for public health threats. With Blueprint Biosecurity and RAND Europe, we have identified considerations and recommendations for this strategy (described in the attached document). Our response focuses on the importance of, and opportunities for, pathogen-agnostic countermeasures against respiratory infections. In brief:
- Respiratory infectious diseases are a significant threat to European health and security.
- Key pathogen-agnostic countermeasures include PPE (like elastomeric half-mask respirators) and decontamination tools (like far-UVC, glycol vapors, and portable air cleaners).
- Europe can act to stockpile or adopt these countermeasures in preparedness times, or improve its ability to produce and deploy them during crisis times.
- Europe can reduce reliance on non-EU sources by diversifying supply chains, strengthening EU-based manufacturing, and supporting R&D and public-private partnerships for medical countermeasures.
We stand ready to contribute additional expertise or evidence-based findings wherever helpful, and we thank you for the opportunity to submit feedback.

Response to EU rules on medical devices and in vitro diagnostics - targeted evaluation

19 Mar 2025

We welcome the Commission's evaluation of Regulations (EU) 2017/745 and 2017/746 and recommend integrating biosecurity safeguards and security-by-design principles for nucleic acid synthesis (NAS) technologies, particularly DNA synthesizers and benchtop devices.

Addressing Biosecurity Risks: Advances in biotechnology have made nucleic acid synthesis tools widely accessible. Benchtop DNA synthesizers now allow gene-length sequences to be printed with minimal oversight. Some sequences can be misused to create pathogens or toxins, posing serious dual-use and security risks. Current MDR/IVDR rules focus on product safety but do not prevent misuse of synthesis devices. We recommend applying security-by-design standards to all devices capable of DNA/RNA synthesis, ensuring that safeguards are built into both hardware and software.

Recommended Measures:
1. Mandatory sequence screening: Devices must integrate software that blocks synthesis of harmful DNA (sequences of concern) unless authorized, a security-by-design feature.
2. Customer and order verification (Know-Your-Customer / Know-Your-Order): Access to synthesis functions for sequences of concern must be limited to verified users. Suspicious orders should trigger reporting obligations.
3. CE marking to include biosecurity certification: Devices capable of gene-length synthesis should undergo risk-based classification and third-party verification of biosecurity compliance as part of MDR conformity assessments.
4. No printability of harmful DNA: Devices must not permit printing of dangerous sequences without prior approval from competent authorities, a key biosafety-by-design safeguard.

We thank the Commission for the opportunity to provide input and remain available to support evidence-based regulation in this critical area.

Response to European Internal Security Strategy

13 Mar 2025

We commend the Commission for this initiative to develop a European Internal Security Strategy. We recommend:

1. A New CBRN Action Plan: Addressing Biosecurity Vulnerabilities
We propose a new CBRN action plan to address increasingly dual-use synthetic biology advances. Technological progress in areas like synthetic biology has enabled major medical breakthroughs, including new therapies and diagnostics. At the same time, these innovations present security challenges due to the dual-use nature of biological research. For example, the WHO noted that a skilled lab technician or undergraduates working with viruses in a simple lab could reassemble variola virus (smallpox) if they had access to the correct DNA sequences. AI can perform sophisticated experiments and be used to design novel biological materials. Recent studies show that AI can outperform expert virologists on complex virology assignments. In light of these challenges, geopolitical developments amplify the necessity of a European CBRN action plan. Crucially, while AI model developers have accelerated the capabilities of advanced AI models, the EU remains the only jurisdiction to have adopted a comprehensive AI regulation. To protect European citizens from harms spilling over from other, unregulated jurisdictions, it is pertinent that the Commission addresses CBRN risks when developing its Internal Security Strategy. We recommend:
- Establishing a permanent expert group within the EU Commission to monitor and address emerging biosecurity challenges
- Including synthesis screening in the CBRN action plan by developing EU-wide compliance standards for Know-Your-Customer (KYC) and Know-Your-Order (KYO) checks regarding potentially dangerous genetic sequences
- Ensuring mandatory external assessments of CBRN capabilities for general-purpose AI and specialised-purpose models with systemic risk

2. An EU-wide "Know-Your-Order" Framework for Biological Materials
We recommend adopting an EU-wide KYO framework for biological materials. Despite broad awareness of bioterrorism risks, there is currently no consistent EU-wide requirement for verifying orders of genetic materials or sensitive lab equipment. Experts from the Community for European Research and Innovation for Security have identified the possibility of bioterrorism attacks among the top ten security priorities. Unlike high-risk chemicals, which require strict verification, potentially dangerous biological materials and lab equipment can be ordered online without proper controls. While complete DNA sequences of select agents (such as anthrax) are regulated, fragments of these same sequences escape oversight. AI can create synthetic versions of dangerous biological agents that evade current screening, emphasising the urgency of measures to mitigate CBRN risks effectively. We recommend:
- Developing mandatory KYO requirements for all providers of biological synthesis services operating in the EU
- Developing standards for KYC verification
- Establishing reporting procedures for suspicious orders with Europol involvement

Finally, there is mutual willingness to tackle CBRN security risks. The synthetic biology industry has demonstrated its commitment to responsible innovation. Over 40 companies are part of the International Gene Synthesis Consortium, applying uniform screening protocols for both customers and DNA sequence orders. Recent EU initiatives, such as the Niinistö report's emphasis on robust dual-use research and risk assessments, underscore the importance of biosecurity. In synergy, the Internal Security Strategy can benefit from the existing systemic risk identification and mitigation measures listed in the Code of Practice on General-Purpose AI, which already includes CBRN risks in AI model development. A new CBRN Action Plan should leverage these frameworks to enhance security while fostering responsible innovation. We stand ready to contribute input on evidence-based policies in this critical area of European security.

Response to Establishment of the scientific panel of independent experts under the AI Act – implementing regulation

15 Nov 2024

We at Pour Demain appreciate the opportunity to provide feedback on the draft implementing regulation and commend the Commission's efforts in facilitating stakeholder input. To enhance the effectiveness of the Scientific Panel (SP), we suggest removing geographical restrictions on expert representation to prioritize specialized expertise over national quotas. Additionally, we recommend that Qualified Alerts be issued through a simple majority to ensure timely communication of risks to the AI Office. Expanding the scope of information sharing within the SP, particularly allowing the rapporteur to share insights with the panel, would further strengthen its ability to issue meaningful alerts. Lastly, we propose a clearer role for the SP in monitoring compliance with the General-Purpose AI Codes of Practice, leveraging its expertise to support the AI Office in identifying non-compliance and updating practices to address emerging risks effectively. We expand on these points in our attached submission.