Trinity College Dublin, the University of Dublin


Lobbying Activity

Response to Digital Fairness Act

24 Oct 2025

The AI Accountability Lab (AIAL) welcomes the Commission's initiative for the Digital Fairness Act. We envision this new regulation providing a high level of consumer protection, as enshrined in the Charter of Fundamental Rights. The comments submitted by the AIAL address key concerns related to the nature of contracts (transparency, fairness), the necessity of addressing consumer rights in the modern digital-first world, the new forms of consumer risk and unfairness created through AI services, and mechanisms for supporting consumers in managing their digital privacy and terms through practical solutions and standards.
Read full response

Response to Digital package – digital omnibus

14 Oct 2025

Regarding the scope and method of the omnibus proposal, AIAL has concerns about the justification for removing 'red tape' without a corresponding assessment of impacts on fundamental rights and freedoms. While the need for competitiveness is well understood, this lapse in typical rule-making procedure poses a severe risk to citizens, as rights, especially those regarding digital services, healthcare, workers, and the environment, increasingly come under threat owing to rapid technological proliferation. The Commission's recent measures to increase the adoption of AI also risk creating new forms of surveillance, control, and violations at scale. This is the very reason the Commission initially proposed, and was able to successfully adopt, the AI Act, which places a specific emphasis on AI systems considered high-risk and establishes the world's first legal framework to address their harms to fundamental rights and freedoms. To now ignore these risks and their rapidly materialising harms in the name of simplification would be a mistake and a step back for the democratic processes that safeguard citizens' interests.

Further, the Commission has also proposed removing key documentation requirements from the GDPR as a means of reducing regulatory burden. We show with empirical evidence how this creates a severe risk to organisations' ability to address the rights of the individual and greatly lessens the degree of accountability present in the EU. It is highly likely to become an opportunity to further reduce legal protections and to justify further dilution in the name of economic competitiveness. We therefore urge the Commission to avoid any reopening or dilution of the GDPR, and instead to focus on improved enforcement and on simplifying compliance measures for organisations without changes to the GDPR.
In continuation of this, we also urge the Commission not to revoke the fundamental privacy protections afforded by the ePrivacy Directive. We welcome the resolve to address online consent pop-ups and issues regarding cookies, but are concerned that the Commission is not addressing the underlying surveillance infrastructure that routinely violates the privacy of citizens at massive scale. We therefore ask that the Commission address this core issue at its source instead of tackling the symptoms (cookie banners). For this, we provide specific recommendations on how the Commission can utilise its proposed adoption of Privacy Enhancing Technologies (PETs) in a way that uplifts the rights and freedoms of the individual without creating new risks and mechanisms for privacy violations.

In conclusion, we advocate a cautious and pragmatic approach to any simplification, as such measures directly affect fundamental rights and freedoms. While competitiveness is necessary, the Commission must support organisations in a way that advances rather than ignores fundamental rights from the outset. For this, we request that the Commission investigate other measures that reduce the effort of enforcement by supporting organisations with their compliance, instead of focusing on the reduction or dilution of regulations. Indeed, we believe the EU can and should use its highly valued rights and freedoms as the very benchmarks of competitiveness and innovation, rather than focusing only on artificially amalgamated economic metrics. We reiterate that a healthy and competitive EU is one whose competitiveness derives from a high degree of assurance and accountability, not one where such measures can be easily sidelined for economic reasons.
Read full response

Response to A European Strategy for AI in science – paving the way for a European AI research council

5 Jun 2025

Advances in AI, particularly the introduction of generative large language models, are a major disruptor of academic research, posing novel challenges for responsible research assessment. Researchers must now carefully consider the risks that using AI in the conduct, sharing, and assessment of research poses to the values of reliability, honesty, respect, and accountability. In Europe, such AI risk assessments may need to address new regulatory requirements to undertake a Fundamental Rights Impact Assessment (FRIA) under the AI Act. This also requires rapid learning and exchange of best practice in conducting and attesting to AI risk management in research, and its integration into the responsible-conduct, access, and assessment policies developed by institutes and their funders. Researchers and research institutions need to engage with policy development on responsible research practice for situations where AI is used in the conduct, assessment, communication, and sharing of research.

There is an urgent need to accelerate the development of policy and best practice addressing the responsible use of AI in academic writing and editing, data analysis, and literature reviews. Such policy development should support the efficient and transparent risk assessment of potential research harms arising from the use of AI, including the amplification of inaccuracies and embedded biases and the reduction of reproducibility and transparency of results. It should develop researcher training and awareness, mitigate the misuse and harmful application of results, and uphold copyright and data protection. Support for research on advancing researcher practice in the responsible use of AI is needed to provide critical evidence and information sharing between Research Performing Organisations (RPOs), Research Funding Organisations (RFOs), and regulatory oversight bodies for the AI Act, e.g. national competent authorities and the AI Office.
Revised practice must ensure that research risk assessment for AI and other digital research outputs is mapped into both an ethics analysis and a fundamental rights impact analysis. The information produced from such AI risk management should be made available as a FAIR resource accompanying existing open research practices. We recommend analysing the risks of research teams adopting AI using stakeholder theory, developing an open semantic model that enables teams to transparently attest to AI use in research outputs, and prototyping and trialling tools that support such transparent communication between stakeholders. With these recommendations we aim to encourage investment in, and coordination of, efforts to ensure the responsible integration of AI alongside human research expertise. Specifically, the recommendations aim to ensure that AI enhances rather than replaces human capabilities for the benefit of the academic community, and to support our understanding of how the networks of collaborating stakeholders that produce, peer-review, publish, and cite publicly funded research can evolve to maintain quality and integrity while beneficially employing AI systems. They further aim to promote an open, legally compliant, and FAIR-compatible framework for transparently evaluating and attesting to the responsible use of AI in research. This will require support for interdisciplinary research and development, combining expertise in fundamental rights protections under digital regulation with the open FAIR Data Principles associated with AI use in knowledge production.
Read full response

Response to Apply AI Strategy

4 Jun 2025

The EU AI Act represents the world's first transnational AI regulation with concrete enforcement measures. It builds on existing EU mechanisms for regulating the health and safety of products but extends them to protect fundamental rights and to address AI as a horizontal technology spanning multiple application sectors. We argue that this will lead to multiple uncertainties in the enforcement of the AI Act which, coupled with the fast-changing nature of AI technology, will require a strong emphasis on comprehensive and rapid regulatory learning. We therefore recommend the adoption of a clearly parametrised regulatory learning space based on the provisions of the Act, and propose such a structure in which oversight authorities, value chain participants, and affected stakeholders may interact to apply and learn from technical, organisational, and legal implementation measures. Given the wide scope of regulatory learning defined in the Act, we map out the likely interactions and information flows needed between the different classes of actors involved in the different types of learning activities it anticipates. Based on this analysis, we make recommendations on how existing EU practices for open, FAIR data sharing and data spaces can be adapted to support rapid, efficient, and legitimate regulatory learning.
Read full response

Meeting with Magda Kopczynska (Director-General Mobility and Transport) and SMBC Aviation Capital Ltd

10 Mar 2025 · Exchange of views on the policy, industry, and financing developments expected to scale up the production and uptake of sustainable aviation fuels (SAF)