Susanna Lindroos-Hovinheimo: What did the AI Regulation turn out like? (Constitution Blog)
The drafting of the EU AI Regulation was a long and thorough process. Now that the text is final, we can reflect on how it turned out. The first thing to say is that it turned out long. The Regulation contains 113 articles, and a single article is typically longer than one A4 sheet. To set out its scope in Article 2 alone, the Regulation needs 12 paragraphs, several of which contain sub-paragraphs. The preamble runs to 180 recitals. There are thirteen annexes.
The first phase of implementation arrives six months after the Regulation enters into force – in the autumn – so Europe, through its technology lawyers, should start reading as soon as possible. In the final stages of the legislative process the content did not change very much, but enough to make some of the articles unfamiliar even to experts.
The Regulation aims at full harmonisation. By choosing Article 114 TFEU as the legal basis, it transfers competence for the regulation of AI to the Union. Member States will no longer be able to legislate on matters covered by the Regulation, unless the Regulation explicitly leaves national leeway. It leaves only a little of that, sprinkled across the various articles.
During the legislative process, a fundamental difficulty was the definition of AI, which can now be found in Article 3. It defines an AI system as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The definition governs the application of the Regulation as a whole: if something is an AI system, the Regulation applies; otherwise it does not. Clarifying what the definition means will therefore be of paramount importance in its interpretation. The provision leaves unclear, among other things, whether rule-based systems will ever be covered. Admittedly, the article was not an easy one to draft. The only term that is easy to interpret may be “machine”, and we should not be too sure even about that. In future case law we will at least be considering what counts as operating with varying levels of autonomy, what degree of adaptiveness the definition allows, and what the explicit and implicit objectives are that each system pursues. Not to mention physical or virtual environments… In short, the definition became broad.
The scope became broad in any case, but exceptions were also included. Researchers will be delighted that a science exception made it into the Regulation: it does not apply to AI systems or AI models specifically developed and put into service for the sole purpose of scientific research and development, nor to their outputs. The Regulation also does not apply in the area of national security. This kind of delimitation of scope is commonplace in EU law, but here it is particularly robust: Article 2 states several times, in varying wording, that the Regulation does not apply to AI systems used for military, defence or national security purposes.
The Commission’s original idea of a risk-based approach, under which the Regulation divides systems into three categories (prohibited, high-risk and other), survived into the final text. A fourth category, general-purpose AI models, was created during the legislative process. This refers in particular to ChatGPT and similar systems.
Prohibited systems may not be used in the Union, nor placed on the Union market. Their definition is not easy to read, as the systems are prohibited only under certain conditions and in certain circumstances. For example, Article 5(1)(a) of the Regulation prohibits:
“the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.”
The provision is difficult to apply, in particular because it turns on terms such as materially distorting, appreciably impairing, reasonably likely to cause and significant harm. Unfortunately, the Regulation is a cornucopia of such expressions, and reading them one can only feel sympathy for the Court of Justice. Reading the definitions of high-risk systems strengthens that feeling of compassion. For many systems, it will be difficult to assess whether or not they are high-risk under the Regulation.
At times the Regulation even offers humorous passages. The issue is serious, and the aim of the regulation is undoubtedly noble; the matters at stake are socially important and far-reaching. Nevertheless, the list of high-risk systems has been extended to include:
“AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in education and vocational training institutions at all levels.”
As tempting as it would be to use AI to mark students’ exam answers, the EU takes the view that doing so, while not prohibited, brings responsibilities and obligations. One can only wonder why this is a matter of such magnitude that it must be legislated at Union level.
The list of high-risk systems also includes this one:
“AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within education and vocational training institutions.”
In practice, this could mean that human eyes will still be needed to watch for cheating in exams, unless an educational institution wants to take on the documentation and certification work that the Regulation requires for the use of high-risk systems.
A slight relief for those working with such systems is that the final Regulation added a filter provision to Article 6 alongside the definitions of high-risk systems. It states that a system is not high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making, subject to certain conditions. This article, too, is quite opaque.
The regulation of high-risk systems rests on subjecting them to a number of requirements, in particular as regards quality control. A significant share of the Regulation’s articles concern these obligations. They include requirements for risk management systems, data quality, technical documentation, record-keeping, transparency and human oversight. There are other obligations too, plenty of them.
One result of the Regulation is the creation of new supervisory authorities, to be set up at both EU and national level. The Regulation also gives people, such as consumers, access to redress: under Article 85, natural and legal persons who consider that the Regulation has been infringed have the right to lodge a complaint with a market surveillance authority. The supervisory authorities are strengthened by the power to impose administrative fines. An interesting detail is Article 100, which allows administrative fines to be imposed even on Union institutions, bodies, offices and agencies. These fines are relatively small (€750,000, or €1.5 million if a Union body uses a prohibited system) compared, for example, to the fines that can be imposed on providers of general-purpose AI models (3% of their total worldwide turnover in the preceding financial year or €15 million, whichever is higher).
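The “whichever is higher” rule for providers of general-purpose AI models can be illustrated with a back-of-the-envelope calculation (a sketch only, not legal advice; the function name and figures simply restate the amounts summarised above):

```python
def gpai_fine_cap(worldwide_turnover_eur: float) -> float:
    """Upper limit of the administrative fine for a provider of a
    general-purpose AI model: 3% of total worldwide annual turnover
    or EUR 15 million, whichever is higher."""
    return max(0.03 * worldwide_turnover_eur, 15_000_000)

# A provider with EUR 1 billion in turnover: 3% is EUR 30 million,
# which exceeds the EUR 15 million floor, so the higher figure applies.
print(gpai_fine_cap(1_000_000_000))  # 30000000.0

# A smaller provider with EUR 100 million in turnover: 3% is only
# EUR 3 million, so the EUR 15 million floor applies instead.
print(gpai_fine_cap(100_000_000))  # 15000000.0
```

The turnover-based cap thus only bites above EUR 500 million in annual turnover; below that, the fixed floor governs.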
The regulation contains many interesting details, only a fraction of which are presented in this brief overview. However, it is clear that this is important but complex legislation.
Susanna Lindroos-Hovinheimo, Professor, University of Helsinki
The article is part of the GenerationAI project, funded by the Strategic Research Council, which is studying the regulation of artificial intelligence, particularly from the perspective of children’s rights.
NOTE! This article was originally published on 26.3.2024 on the Constitution Blog (Perustuslakiblogi). The original post is available at: https://perustuslakiblogi.wordpress.com/2024/03/26/susanna-lindroos-hovinheimo-minkalainen-tekoalyasetuksesta-tuli/
