AI Standards Lab
The AI Standards Lab is an independent non-profit with the charitable purpose of "developing artificial intelligence standards and risk management frameworks".
ID: 060933793069-24
Lobbying Activity
Response to Digital package – digital omnibus
14 Oct 2025
The AI Standards Lab (https://www.aistandardslab.org/) is a non-profit organisation bringing together several independent experts who aim to support government initiatives for AI technology regulation. We welcome the opportunity to give our input on the Commission's Digital Omnibus (Digital Package on Simplification). Our input to this consultation concerns the EU AI Act, including its interplay with other regulations.

Most of our experts live in the EU. We have been active contributors to several initiatives to clarify the AI Act and ease its application: the drafting of the EU's Code of Practice for General-Purpose AI (GPAI CoP), the CEN-CENELEC JTC21 effort to write standards in support of the high-risk AI parts of the AI Act, and various earlier AI Act-related consultations.

Our general viewpoint is that the AI Act imposes proportionate constraints and burdens on AI developers and deployers that are necessary to protect health, safety, and fundamental rights. The main problem creating unnecessary and avoidable burdens on these parties right now is that, when it comes to high-risk AI, there is still a lack of the guidance, clear standards, codes or common specifications, and consultancy services that would usefully inform such parties.

This short-term problem should not be solved by simply deleting parts of the AI Act or by exempting many organisations from it. The solution lies in ensuring that the guidance, standards, codes or common specifications, and consultancy services do appear. Major changes to the AI Act at this stage would likely create more confusion than clarity, increasing the burden on stakeholders. We also believe that a full formal delay of the entry into force of the high-risk provisions in the Act, applying to all actors, would send entirely the wrong message.

For GPAI models, and other non-high-risk parts of the AI Act, we do not see a similar problem of guidance that is lacking or will arrive too late. Please see the attached document for our detailed analysis and recommendations; Section 2 of that document considers specific parts of the AI Act in turn.