Ada Lovelace Institute


The Ada Lovelace Institute was established by the Nuffield Foundation in early 2018 and is legally part of the Nuffield Foundation.

Lobbying Activity

Response to Digital Fairness Act

24 Oct 2025

Questions around digital fairness are becoming increasingly relevant. While well-researched forms of unfair practice, such as dark patterns, remain prevalent, new ways of exercising influence over consumers are emerging with the rise of increasingly personable and personalised AI systems, which we refer to as Advanced AI Assistants. Over the past year, the Ada Lovelace Institute has been conducting research on the development of, and risks associated with, these Advanced AI Assistants ('Assistants'), which we have found to pose significant consumer protection risks. Because of their combination of personality and personalisation, Assistants can be very easy for users to anthropomorphise, form emotional attachments with, and trust. At the same time, Assistants can exercise influence over consumers at every stage of the purchasing process, including recommending products, finding products and curating lists of options, deciding how to present product options to consumers, and completing purchases on a consumer's behalf. This combination makes the risk of undue influence particularly acute: Assistants enhance the scale, speed, precision and potency with which consumers can be targeted.

We recommend that the Digital Fairness Act clarify pre-existing legal concepts in consumer protection law, such as undue influence, to ensure their applicability to concerns arising from Assistants. We further recommend transparency measures that would give consumers access to an understandable explanation of how products have been curated and presented to them, as well as clear signposting whenever a message from an Assistant includes paid advertising. To prevent Assistants from exploiting consumers' vulnerable moments (e.g. fatigue, illness) with product placements, consumers should be able to easily access a no-ads version of the Assistant service in their settings.
Finally, given the challenges that new forms of AI introduce for consumer protection, manipulative practices and extractive data gathering, it is essential that the Digital Fairness Act is accompanied by strong legal standards that protect consumers' privacy and data online. As Advanced AI Assistants, and particularly AI Agents, usually require deep access to users' (sensitive) data and their devices (software and hardware) in order to function, it is important that provisions limiting how such data is processed, and governing how system-level access is used, are strengthened rather than weakened. We therefore recommend stronger enforcement against dark patterns that influence consumers' choices to share their data with platform providers and third parties, and that any simplification measures within the EU's digital acquis be considered in the context of ensuring sufficient protection of consumers and their data in this AI age. Please read our full response, with more detailed concerns and recommendations, in the attached document.

Meeting with Lucilla Sioli (Director Communications Networks, Content and Technology)

25 Apr 2025 · Discussion on the idea of an "AI Taskforce", in the context of EU institutional readiness for advanced AI developments.

Meeting with Werner Stengg (Cabinet of Executive Vice-President Henna Virkkunen)

14 Mar 2025 · AI policy

Meeting with Werner Stengg (Cabinet of Executive Vice-President Margrethe Vestager)

6 Mar 2024 · AI Act