Observatorio de Riesgos Catastróficos Globales (ORCG), a project of Players Philanthropy Fund


The Observatorio de Riesgos Catastróficos Globales (ORCG) is a science diplomacy organization working to improve the management of global risks in Spanish-speaking countries.

Lobbying Activity

Response to A European Strategy for AI in science – paving the way for a European AI research council

5 Jun 2025

The uptake of AI in science promises accelerated discovery, greater efficiency, and better tools for addressing societal challenges. However, this potential can only be fully realised if AI development and deployment in science are pursued with safety, security, robust governance, and resilience at the core. All of these are needed to enable responsible innovation and scientific competitiveness while mitigating emerging threats to society (such as model misuse and dual-use applications) and to science itself (e.g., IP theft, manipulation of scientific results, and compromised research integrity). The European AI Act and the Code of Practice provide an important governance framework, but the Strategy for AI in science and its investments must also address the research and technical capacity (data, computing power, know-how, talent) needed for safe AI in science.

Key Recommendations

1) Risk Management Ecosystem

AI risk management must become a foundational layer of EU scientific research. Public investment should support:
- Risk modelling and scenario planning to identify plausible pathways to harm and effective mitigations (a minimal sketch follows these recommendations).
- Evaluation and monitoring infrastructure to assess AI capabilities and track systemic risk trends.
- Safety and security mitigations to reduce systemic AI risks and protect models against unauthorized access.
- Applied and foundational research through grants, tenders, and in-house capacity to operationalize the safety provisions of the AI Act.

2) Societal Resilience through Defensive Technologies Research

The EU should proactively research and invest in defensive technologies to build resilience against AI threats. This includes:
- Cyberdefence tools for threat detection, simulation, and incident response.
- Biodefence capabilities, including biosurveillance, diagnostics, and bioforensics.
- Epistemic security solutions for misinformation detection, attribution, pattern detection, and fact-checking (see the provenance sketch below).
- Specialised agent infrastructures to enable the safe integration of AI agents into the scientific process and to detect and respond to the associated risks.

3) Technical AI Governance Capacity

Effective AI governance, in and outside science, requires technical tools and international coordination, which must be built up through foundational research. Investment is needed particularly in:
- Verification and monitoring tools for compute usage and location, unreported training runs or large-scale inference, and the authenticity of AI model evaluations, to enable accountability and compliance (see the compute-threshold sketch below).

4) Safe-by-Design AI Models

Europe should lead in developing AI systems that are safe by design, aligning scientific benefits with safety and governance goals. Priority areas include:
- Scientist AI: non-agentic models with no built-in goals or autonomy, focused on assisting scientific research.
- Tool AI: narrow, controllable systems that are capable but neither general-purpose nor autonomous.
- Guaranteed Safe AI: systems designed with formal safety specifications and mathematical proofs demonstrating compliance (see the specification sketch below).
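To make the risk-modelling and scenario-planning item in Recommendation 1 concrete, below is a minimal Monte Carlo sketch in Python. The scenario names, probabilities, and impact ranges are hypothetical placeholders, not figures from the response; the point is that even a simple simulation yields expected-loss and exceedance estimates that can be used to prioritise mitigations.

```python
import random

# Illustrative only: scenario names, probabilities, and impact ranges are
# hypothetical placeholders, not figures from the ORCG response.
SCENARIOS = {
    # name: (annual probability, (low, high) impact in arbitrary loss units)
    "model_misuse": (0.05, (10, 200)),
    "ip_theft": (0.10, (5, 80)),
    "result_manipulation": (0.02, (50, 500)),
}

def simulate_annual_loss(trials: int = 100_000, seed: int = 0) -> list[float]:
    """Monte Carlo: each scenario occurs with its probability and, if it
    occurs, draws an impact uniformly from its range."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        total = 0.0
        for prob, (low, high) in SCENARIOS.values():
            if rng.random() < prob:
                total += rng.uniform(low, high)
        losses.append(total)
    return losses

losses = simulate_annual_loss()
expected = sum(losses) / len(losses)
# Probability that annual loss exceeds a mitigation-planning threshold of 100.
exceedance = sum(l > 100 for l in losses) / len(losses)
print(f"expected annual loss: {expected:.1f}, P(loss > 100): {exceedance:.3f}")
```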
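The epistemic-security item in Recommendation 2 mentions attribution. As an illustration only, the following toy sketch signs a research artifact so that tampering with either the content or the claimed author is detectable. A real deployment would use asymmetric signatures (so verifiers hold no secret) and provenance standards such as C2PA; the key and author names here are hypothetical.

```python
import hashlib
import hmac

# Toy provenance scheme using a shared secret, for illustration only.
SECRET_KEY = b"lab-registry-demo-key"  # hypothetical key

def sign_artifact(content: bytes, author: str) -> dict:
    """Attach an attribution record: who published what, with a MAC over both."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SECRET_KEY, f"{author}:{digest}".encode(), "sha256").hexdigest()
    return {"author": author, "sha256": digest, "tag": tag}

def verify_artifact(content: bytes, record: dict) -> bool:
    """Recompute the MAC; tampering with content or attribution fails."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SECRET_KEY, f"{record['author']}:{digest}".encode(),
                        "sha256").hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(record["tag"], expected)

record = sign_artifact(b"dataset v1", "lab-42")
assert verify_artifact(b"dataset v1", record)
assert not verify_artifact(b"dataset v1 (altered)", record)
```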
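One concrete verification primitive behind Recommendation 3 is checking estimated training compute against the EU AI Act's threshold: Article 51 presumes systemic risk for general-purpose models trained with more than 10^25 floating-point operations. The sketch below applies the common approximation of about 6 FLOPs per parameter per token for dense transformers; the model size and token count are hypothetical examples, not values from the response.

```python
# EU AI Act Article 51 presumption threshold for systemic-risk GPAI models.
AI_ACT_SYSTEMIC_RISK_FLOP = 1e25

def estimated_training_flop(params: float, tokens: float) -> float:
    """Common dense-transformer rule of thumb: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def requires_notification(params: float, tokens: float) -> bool:
    """Would this run meet the AI Act's systemic-risk presumption?"""
    return estimated_training_flop(params, tokens) >= AI_ACT_SYSTEMIC_RISK_FLOP

# A hypothetical 70B-parameter model trained on 15T tokens:
flop = estimated_training_flop(70e9, 15e12)
print(f"estimated compute: {flop:.2e} FLOP, "
      f"over threshold: {requires_notification(70e9, 15e12)}")
# -> estimated compute: 6.30e+24 FLOP, over threshold: False
```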
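"Guaranteed Safe AI" in Recommendation 4 refers to systems shipped with formal safety specifications and machine-checked proofs of compliance. A proof is beyond a short example, so the sketch below shows only the weaker runtime pattern such specifications build on: an executable specification whose clauses gate every proposed action. The action kinds and rules are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str    # e.g. "read", "synthesize", "publish" (hypothetical kinds)
    target: str

# The "specification": predicates every action must satisfy. Hypothetical rules.
SPEC: list[Callable[[Action], bool]] = [
    lambda a: a.kind != "synthesize" or not a.target.startswith("pathogen:"),
    lambda a: a.kind != "publish" or a.target.startswith("reviewed:"),
]

def guarded_execute(action: Action) -> str:
    """Run an action only if every clause of the specification holds."""
    if all(rule(action) for rule in SPEC):
        return f"executed {action.kind} on {action.target}"
    return f"refused {action.kind} on {action.target}: violates safety spec"

print(guarded_execute(Action("read", "dataset:genomes")))           # executed
print(guarded_execute(Action("synthesize", "pathogen:influenza")))  # refused
```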
Final Remarks

The EU has a unique opportunity to shape the global future of AI in science, not only by accelerating research but by embedding safety, governance, and societal protection into the foundations of its scientific strategy. The EU AI Office should take a central role in driving this agenda and supporting Member States, in cooperation with the European AI Research Council. The latter should pool resources for interdisciplinary foundational research on safe and reliable AI systems, including scientific methods for evaluations, alignment, and systemic risk mitigation. The Council should support both "Science for AI" and "AI in Science" by incentivising safe-by-design model development. It should also coordinate efforts to build open, secure, and high-quality research infrastructures, linking its initiatives to EuroHPC's AI Factories and Gigafactories to ensure secure compute access for safety-critical research.