Finnish Center for Artificial Intelligence (FCAI)
Finnish Center for Artificial Intelligence (FCAI) is a nationwide community of experts in artificial intelligence in Finland, initiated by Aalto University, the University of Helsinki, and VTT Technical Research Centre of Finland.
ID: 039049138497-12
Lobbying Activity
Meeting with Miapetra Kumpula-Natri (Member of the European Parliament, Shadow rapporteur for opinion)
10 Feb 2022 · Meeting on AI Act
Response to Requirements for Artificial Intelligence
4 Aug 2021
The highly commendable goal of the proposed act is to provide a legal framework in Europe that encourages innovation and investment in artificial intelligence, while ensuring that the results are lawful, safe and trustworthy, and respect human rights. To reach this goal, the legal framework needs to be understandable, transparent and proportionate. In its proposed form, the act does not properly fulfill these requirements.
The main cause of unclarity lies in the scope of the regulation, which is not technology neutral but is based on a definition of AI. By giving a definition of AI, the act implicitly defines a class of "non-AI systems", which, together with the categorization of use cases, partitions the landscape into four classes:
A. High-risk (or prohibited) use case and an AI system.
B. High-risk (or prohibited) use case and a non-AI system.
C. Low-risk use case and an AI system.
D. Low-risk use case and a non-AI system.
It is evident that C and D are not the target of regulation, while A clearly is. But what about B? What is the message here: are, for example, social scoring systems allowed as long as the underlying technology does not match some definition of AI? Since the answer from the moral perspective must clearly be "no", the proposal tries to circumvent this dilemma by making the definition of AI extremely broad and by listing numerous subfields of AI. This does not remove the basic problem: as long as there is a technical definition of AI, it creates a serious loophole, a class of (non-AI) software systems that by definition are not regulated by the proposed act. And if the idea is to make the definition so broad that it covers all digital systems, then the formulation is simply confusing.
A simple solution: define an AI system to mean any software system that is applied in a high-risk or prohibited use case defined in the act. This definition is easy to understand, unambiguous, technology neutral and completely future-proof, covering any new technologies that may emerge.
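To make the loophole concrete, the following minimal Python sketch contrasts the two scope rules. It is purely illustrative: the System class, the uses_ml flag and the risk labels are our own hypothetical stand-ins, not terms defined in the act; uses_ml merely stands in for "matches some technical definition of AI".

```python
from dataclasses import dataclass

@dataclass
class System:
    """Hypothetical stand-in for a deployed software system."""
    name: str
    uses_ml: bool          # stands in for "matches some technical definition of AI"
    use_case_risk: str     # "low", "high", or "prohibited"

def in_scope_by_technology(s: System) -> bool:
    """Scope rule in the spirit of the proposal: regulated only if the
    system counts as 'AI' AND the use case is high-risk or prohibited."""
    return s.uses_ml and s.use_case_risk in ("high", "prohibited")

def in_scope_by_use_case(s: System) -> bool:
    """Scope rule suggested above: any software system applied in a
    high-risk or prohibited use case is regulated, whatever the technology."""
    return s.use_case_risk in ("high", "prohibited")

# Class B from the list above: a social scoring system built from
# hand-written rules rather than machine learning.
rule_based_scoring = System("rule-based social scoring",
                            uses_ml=False, use_case_risk="prohibited")

print(in_scope_by_technology(rule_based_scoring))  # False -> the loophole
print(in_scope_by_use_case(rule_based_scoring))    # True  -> loophole closed
```

Under the technology-based rule, class B slips through; under the use-case-based rule, the question of whether the technology is "AI" never arises.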
The above formulation also removes a serious cause of unclarity: the attempt to define AI by listing a number of its subfields does not constitute a useful definition, since the subfields themselves are not defined, and so it cannot be used to determine unequivocally whether a certain technology falls within the scope of the act. The possibility of adding more subfields to the list later does not improve the situation. Any attempt to "define" AI in this way is doomed to fail; it merely creates loopholes and is a serious cause of confusion. Luckily, as explained above, no such definition is required at all.
Other questions and causes of unclarity:
- What is the rationale for selecting the high-risk use cases listed in the proposal; why these sectors in particular? The underlying logic would be important to understand, as the EC reserves the right to amend the list later.
- What are the "essential public and private services" listed as a high-risk case? Are, for example, internet search engines essential public services? What about social media, for example for people whose income depends on their social media exposure?
- The spirit of the proposed act seems to be to regulate products or services that will be brought to the market, not scientific research, but this should be stated more clearly: what exactly is the scope, and what about joint research (for example company-university research projects)? At which point does this type of activity enter the scope?
- What are the proposed "regulatory sandboxes" like: who defines them, who builds them, who maintains them, who can use them?
FCAI supports the risk-based approach adopted in the proposal, but would make this view even more central and make the act completely technology neutral by removing the futile and unnecessary attempts to define the technologies that may be used. This would create a more adaptive, understandable and future-proof basis for regulation.
Response to Europe’s digital decade: 2030 digital targets
9 Mar 2021
The digital roadmap emphasizes two targets: building capacities and fostering the digital transformation. We agree that both of these components are vital, but we would like to emphasize that neither of the two is sufficient alone: capacities are necessary, but not sufficient without the means to foster the digital transformation. For example, it is important to build capacities like data platforms, data marketplaces and a data economy, but data alone is useless unless we have disruptive technologies, such as machine learning and other AI technologies, for utilizing it. We bring this obvious point up because many initiatives in Europe seem to be very one-sided, looking at the picture from only one perspective. This type of siloing is often implicitly supported by the funding instruments of the EU and affects the way we operate. Meanwhile, the central question of how to bring the capacities and the transformative technologies together is sometimes neglected. What we need are means for preventing siloing and the wasting of resources on overlapping efforts, and support for interactive public-private innovation ecosystems that build bridges between efforts focusing on only one side of the picture.
In developing expertise as a capacity, we would like to emphasize that educating more experts is again definitely one piece of the puzzle, but we also need to keep the experts in Europe so that they do not emigrate elsewhere. What is more, we also need to be able to recruit the best talent from outside Europe, hopefully even reversing the brain drain. To support the recruitment of top talent, we need to build and promote high-profile European digital hubs (lighthouses) that offer lucrative career prospects, the possibility to work with top experts, and collaboration with interesting partners. We can call these hubs centres of excellence, but it should be noted that excellence as a label is meaningless unless it is backed by true excellence, and the people we wish to bring to Europe will surely know the difference.
We would also like to point out that while regulation is definitely needed in the digital world (too), the emphasis in the discussion has so far been on "preventive regulation", e.g. to protect the privacy of individuals, which is of course important. But there are also many forms of supportive and enabling regulation, such as standards, certifications, guidelines and processes, related e.g. to the use of AI or to data sharing and data economy policies (MyData etc.), that deserve more discussion and should be enforced too. What is crucial for Europe is the ability to build platforms and regulation (both supportive and restrictive) that allow European players to keep European data in Europe, so that this invaluable raw material is not given away for free to international tech giants, making Europe dependent on the digital services they provide.
Finally, we would like to point out that regulation of AI is definitely needed, as is regulation of any technology, but it should be noted that AI, as a collection of various digital technologies, is in most cases already covered by existing legal and ethical frameworks. Attempts to develop specific context-free, horizontal regulation of AI as a technology are misguided and doomed to fail: we cannot even define a clear borderline between AI and other digital technologies, and that borderline would be a moving target anyway. AI can and must be regulated like any digital technology, based on the intended use of the technology: e.g. face recognition used for mass surveillance of people in public spaces violates privacy and human rights, while the same technology used for unlocking a smartphone does not raise the same concerns.