POLICY PAPER
Schuman Paper n°757
16th July 2024

What to take away from the European law on artificial intelligence

Thaima SAMMAN
Benjamin DE VANSSAY
The regulation on Artificial Intelligence (“Artificial Intelligence Act”, hereafter AI Act) is a landmark
piece of EU legislation in the field of AI. One of its primary aims is to regulate the use of this
technology in a number of areas based on a risk-based approach. In that respect, the AI Act sets
gradual obligations for the different parties involved in the AI value chain depending on the level
of risk that the use of AI raises in concrete use cases. The AI Act should therefore be viewed as a
targeted intervention, and not a cross-cutting legislation like the General Data Protection Regulation
(GDPR).
The AI Act was adopted by the EU co-legislators in May 2024 and will enter into force 20 days after its
publication in the Official Journal of the European Union on 12 July 2024, i.e. on 1 August 2024. It will,
for the most part, apply from 2 August 2026.
Meanwhile, the Commission has launched the AI Pact, a voluntary initiative inviting AI providers to
comply with the key obligations of the AI Act in advance of its entry into force.
1. The AI Act: one piece in a complex regulatory puzzle

The AI Act is also part of a broader regulatory framework, which can be sketched out as follows:

- Data: on the one hand, there is the General Data Protection Regulation (GDPR), which restricts access to personal data to protect the privacy of individuals and includes specific safeguards on profiling methods; on the other, the EU Data Strategy, which aims to increase the sharing and availability of any kind of data to foster innovation. The latter encompasses several initiatives and pieces of legislation, including the Data Act, the Data Governance Act, regulations on the European Data Spaces and the Directive on Public Sector Information.

- Infrastructure: various initiatives have begun to step up the infrastructure capabilities of the EU, with a view to boosting innovation in AI, such as the European Open Science Cloud, the Quantum Flagship and the European High Performance Computing Joint Undertaking (EuroHPC). The EU Data Strategy (specifically, the Free Flow of Data Regulation and the Data Act) also aims to drive competition in the field of cloud computing and ensure secure data storage.

- Algorithms: the AI Act seeks to address certain risks stemming from the use of AI systems which are mostly related to how their algorithms work. In addition to the AI Act, the Commission plans to take targeted, issue-driven initiatives in specific areas, such as the use of AI and algorithms in the workplace. It has, in fact, been mulling using the provisions on algorithmic management contained in the proposed Directive on platform workers as a blueprint.

On top of all these initiatives and pieces of legislation, the European Commission has proposed two directives to establish a horizontal liability framework for AI systems: a revision of the Product Liability Directive, which seeks to harmonize national rules on liability for defective products, and an AI Liability Directive, which shares the same goal for non-contractual, tort-based liability rules.
2. Summary of the AI Act

2.1 Scope and definitions

The AI Act has a very broad scope and a strong extraterritorial reach, as it applies to any AI system having an impact in the EU, regardless of the provider’s place of establishment. Specifically, the AI Act applies when the AI system is placed on the market or put into service in the EU, when a user is located in the EU or when the output is used in the EU.

AI itself is defined in very broad terms in the AI Act. It covers any “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The AI Act distinguishes between AI systems and General Purpose AI models (GPAI), which are AI models trained with a large amount of data, using self-supervision at scale, and which can competently perform a wide range of distinct tasks.

It is worth noting that the AI Act provides for several exceptions:

- AI systems and models that are developed and used exclusively for military, defense and national security purposes;
- AI systems and models specifically developed and put into service for the sole purpose of scientific research and development;
- Any research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service;
- AI systems released under free and open-source licenses, except where they fall under the prohibitions and except for the transparency requirements for generative AI systems.

2.2 The regulation of AI systems: prohibited and high-risk AI systems

The AI Act distinguishes between four categories of use cases depending on their level of risk for health, safety and fundamental rights. Specific requirements applying to providers and users of these systems are attached to each category (a simplified sketch of this tiering follows the list of prohibited practices below).

The AI Act also regulates GPAI models, though with a different approach, effectively establishing horizontal rules applicable to all providers of GPAI models falling within the scope of the regulation.

a) Prohibited AI practices

The AI Act prohibits the placing on the market, the putting into service or the use of the following AI systems (with exceptions for certain use cases):

- AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
- AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation;
- Biometric categorization systems that categorize individually natural persons based on their biometric data to deduce or infer some sensitive attributes;
- AI systems for social scoring purposes;
- Use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, with some important exceptions;
- AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence;
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- AI systems that infer the emotions of a natural person in situations related to the workplace and education, with some exceptions.
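To recap the four-tier architecture introduced at the start of this section, here is a deliberately simplified Python sketch. The tier names mirror the Act’s structure; the example use cases and the `triage` helper are our own illustrative assumptions, not categories lifted verbatim from the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk system (Annex III or product safety rules)"
    LIMITED = "transparency obligations only"
    MINIMAL = "no specific obligations"

def triage(use_case: str) -> RiskTier:
    """Toy triage helper; real qualification under the Act is far more detailed."""
    prohibited = {"social scoring", "untargeted facial scraping"}
    high_risk = {"recruitment screening", "credit scoring", "border control"}
    limited = {"customer chatbot", "image generator"}
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return RiskTier.HIGH
    if use_case in limited:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("recruitment screening"))  # RiskTier.HIGH
print(triage("spam filtering"))         # RiskTier.MINIMAL
```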
b) High-risk AI systems

The regulation of high-risk AI systems makes up the bulk of the AI Act. It sets out rules for the qualification of high-risk AI systems, as well as a number of obligations and requirements for such systems and the various parties in the value chain, from providers to deployers.

I. Qualification of high-risk AI systems

The AI Act qualifies as high-risk some AI systems that have a significant harmful impact on health, safety, fundamental rights, the environment, democracy and the rule of law. More specifically, the AI Act establishes two categories of high-risk AI systems:

- AI systems caught by the net of EU product safety rules (toys, cars, health, etc.), if they are used as a safety component of a product or are themselves a product (e.g. AI applications in robot-assisted surgery);

- AI systems listed in an annex to the regulation (Annex III). This annex provides a list of use cases and areas where the use of AI is considered to be high-risk. It may be amended or supplemented by delegated acts adopted by the European Commission on the basis of certain criteria. In short, the following areas and AI systems are concerned:

- Biometrics: remote biometric identification systems, some biometric categorization systems, emotion recognition systems;
- Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical infrastructure;
- Education and workplace: some AI systems used in education and vocational training; AI systems intended to be used for recruitment or selection of job candidates or to make decisions in the work relationship;
- Access to essential services: AI systems for the access to and enjoyment of essential private services and essential public services and benefits;
- Law enforcement, justice, immigration and democratic processes: migration, asylum and border control management; administration of justice and democratic processes.

The AI Act also provides the possibility for providers of high-risk AI systems to demonstrate that their systems are not high-risk (dubbed “the filter”) and do not materially influence the outcome of the decision-making process. To this end, providers must demonstrate that they meet at least one of the following conditions (a simplified check is sketched after the list):

(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review;
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
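The filter works as a self-assessment: meeting any one of the four conditions takes the system out of the high-risk category, except that a system performing profiling of natural persons always remains high-risk. A minimal sketch of that logic, with condition identifiers of our own invention:

```python
# Hypothetical helper for the high-risk "filter" self-assessment.
# The four condition labels are our own shorthand for conditions (a)-(d);
# a system that profiles natural persons always remains high-risk.

FILTER_CONDITIONS = {
    "narrow_procedural_task",                      # (a)
    "improves_completed_human_activity",           # (b)
    "detects_patterns_without_replacing_review",   # (c)
    "preparatory_task_for_annex_iii_assessment",   # (d)
}

def passes_filter(conditions_met: set, performs_profiling: bool) -> bool:
    """True if at least one condition holds and the system does no profiling."""
    if performs_profiling:
        return False
    return bool(conditions_met & FILTER_CONDITIONS)

print(passes_filter({"narrow_procedural_task"}, performs_profiling=False))  # True
print(passes_filter(set(), performs_profiling=False))                       # False
```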
II. Main requirements for high-risk AI systems and obligations for parties in the AI value chain

First, the AI Act lays down a series of requirements for high-risk AI systems:

- Risk management: establishing a risk management system throughout the entire life cycle of the high-risk AI system;
- Data governance: training the system with data and datasets that meet certain quality criteria;
- Technical documentation: drawing up technical documentation that demonstrates compliance with the AI Act before the placing on the market;
- Record-keeping: enabling the automatic recording of events (‘logs’) over the duration of the lifetime of the system;
- Instructions for use: ensuring that deployers can interpret the system’s output and use it appropriately, including through detailed instructions;
- Human oversight: developing systems in such a way that they can be effectively overseen by natural persons;
- Accuracy, robustness and cybersecurity: developing systems in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity.

Second, the AI Act lays down a series of obligations for the different parties involved in the value chain, namely the providers, importers, distributors and deployers, along with rules to determine the distribution of responsibility when, for instance, one of these parties makes a substantial modification to an AI system.

Most of the obligations are placed on providers, including the following:

- Compliance and registration: the obligation to register their systems in a dedicated EU database and draw up the EU declaration of conformity;
- Quality management system: the obligation to put in place a quality management system that ensures compliance with the AI Act;
- Documentation keeping: the obligation to keep at the disposal of national competent authorities a set of documentation (technical documentation, history with notified bodies, etc.);
- Logs: the obligation to keep the automatically generated logs for a period appropriate to the intended purpose of the high-risk AI system, and at least six months;
- Corrective actions and duty of information: the obligation to take immediate measures in case of non-compliance with the AI Act and to inform the market surveillance authority.

The other parties in the value chain are mostly responsible for ensuring that the AI systems that they distribute or incorporate in their own services are compliant.

Additionally, when the deployer is a public body or a private operator providing essential services, it is required to carry out a fundamental rights impact assessment.

c) Limited risk AI systems

This third category applies to providers and deployers of generative AI systems and to deployers of emotion recognition or biometric categorization systems, which must inter alia comply with the following transparency requirements (a minimal marking sketch follows the list):

- Chatbots: informing the natural persons concerned that they are interacting with an AI system;
- Generative AI: ensuring that the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated (watermarking);
- Deepfakes: labelling the content as artificially generated or manipulated, or informing people when the content forms part of an evidently artistic, creative, satirical or fictional work or program;
- Generated news information: disclosing that the content has been artificially generated or manipulated, unless it has undergone human review or editorial control.
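For the generative-AI marking duty, the sketch below shows one very simple way to attach a machine-readable “AI-generated” flag to an output. It is illustrative only: the field names are our own, and real deployments rely on provenance standards or statistical watermarking of the content itself rather than a plain JSON wrapper.

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, generator: str) -> str:
    """Wrap generated content with a minimal machine-readable provenance record."""
    payload = {
        "content": content,
        "provenance": {
            "ai_generated": True,     # the disclosure itself
            "generator": generator,   # hypothetical field name
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(payload)

marked = mark_as_ai_generated("Example output text.", "example-model")
print(json.loads(marked)["provenance"]["ai_generated"])  # True
```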
2.3 Regulation of general purpose AI (GPAI)

The AI Act provides for a two-tier regulation of GPAI models. The first layer of obligations applies to all GPAI models, while the second layer applies only to GPAI models with systemic risks.

In both cases, the Commission retains significant powers to determine how compliance with the requirements will be achieved. It will be able to work with industry and other stakeholders to develop codes of practice and harmonized standards for compliance.

a) Horizontal requirements for GPAI models

Under the AI Act, the following obligations are placed on the providers of GPAI models, regardless of whether their models are used in a high-risk area:

- Drawing up and keeping technical documentation (inter alia training, testing process and evaluation results);
- Providing documentation to users integrating the GPAI model in their own AI systems (including information about the limitations and capabilities of the model);
- Putting in place a policy to respect EU copyright law;
- Publishing a detailed summary of the content used for training the model.

However, providers of non-systemic open-source models are exempt from the first two obligations. The definition of open source is narrow, as it only concerns “models released under a free and open license that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available”.

b) Requirements for GPAI models with systemic risks

The AI Act defines GPAI models with systemic risks as ones with “high-impact capabilities” or, in other words, the most capable and powerful models.

Pursuant to the AI Act, any model whose cumulative amount of computation used for its training, measured in FLOPs, is greater than 10^25 should be presumed to have “high-impact capabilities”. However, the regulation leaves significant leeway for the Commission to rely on other criteria and indicators to designate a model as having systemic risk.
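The 10^25 figure becomes more tangible with a back-of-the-envelope check. The sketch below estimates cumulative training compute with the common “≈6 FLOPs per parameter per training token” rule of thumb from the scaling-laws literature; that approximation and the example model sizes are our own assumptions, since the AI Act itself only fixes the threshold.

```python
# Systemic-risk presumption threshold set by the AI Act (Article 51).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

def presumed_high_impact(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets the Act's presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -- just under the threshold.
print(presumed_high_impact(7e10, 1.5e13))  # False
# A hypothetical 1T-parameter model trained on 15T tokens: 9e25 FLOPs.
print(presumed_high_impact(1e12, 1.5e13))  # True
```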
On top of the first layer of obligations, providers of GPAI models with systemic risk are required to:

- Perform model evaluations with standardized protocols and tools;
- Assess and mitigate possible systemic risks at EU level;
- Report serious incidents and corrective measures to the European Commission and national authorities;
- Ensure an adequate level of cybersecurity protection.

2.4 Measures in support of innovation

The main measure foreseen in the Commission’s proposal is the mandatory establishment of at least one AI regulatory sandbox in each member state. A sandbox is a framework set up by a regulator that allows businesses, in particular start-ups, to conduct live experiments with their products or services in a controlled environment under the regulator’s supervision.

The AI Act lays down detailed rules concerning the establishment and the functioning of AI regulatory sandboxes, including rules on the further processing of personal data for developing certain AI systems with public utility.

2.5 Governance and sanctions

The AI Act establishes a very complex and hybrid governance framework, with implementation and enforcement powers split between the EU and national levels.

The European Commission will have a central role in the governance and implementation of the AI Act. In a nutshell, it will be responsible for enforcing the provisions relating to GPAI models, harmonizing the application of the AI Act across the EU, defining compliance with the AI Act and updating some critical aspects of the regulation. At national level, regulators will be responsible for enforcing all provisions relating to prohibited and high-risk AI practices.

a) EU level

The European Commission has established the AI Office to deal with the implementation and enforcement of the AI Act. The AI Office is a new body established within the Directorate-General for Communications Networks, Content and Technology.

The Commission will be inter alia responsible for:

- Enforcing all provisions relating to GPAI models: the Commission is given new powers for this purpose: to request documents and information; to engage in a structured dialogue; to carry out assessments; to require corrective measures; to order the withdrawal or recall of the model; to access GPAI models with systemic risks through APIs; etc.
- Adopting delegated acts on critical aspects of the AI Act, such as to amend the list of high-risk AI systems in Annex III or to amend the conditions for the self-assessment of high-risk;
- Issuing guidelines and elaborating codes of conduct on the practical implementation of the AI Act.

The Commission will be supported by three advisory bodies:

- The European AI Board (the “Board”): composed of one representative per member state, with the Commission and the European Data Protection Supervisor joining as observers. The Board will act as a cooperation platform for national authorities in cross-border cases and will also be tasked with issuing opinions on soft law tools, such as guidelines and codes of conduct;
- The Advisory Forum (the “Forum”): composed of a balanced selection of experts from the industry, civil society and academia, along with representatives from the EU Cybersecurity Agency (ENISA) and the main EU standardization bodies;
- The Scientific Panel of Independent Experts (the “Panel”): composed of independent experts selected by the Commission. These experts will advise and support the Commission in the implementation of the AI Act, in particular with regard to GPAI. They will also support the work of national authorities at their request.

b) National level

Member states will have to designate an independent regulatory authority acting as a market surveillance authority responsible for the AI Act’s application at national level. This market surveillance authority must be designated pursuant to Regulation (EU) 2019/1020, which frames market surveillance and product compliance in the EU for a wide range of products. This regulation gives significant enforcement powers to the national authorities, such as the power to conduct checks on products, request and obtain access to any information related to the product, request corrective actions or impose sanctions. Pursuant to the AI Act, national market surveillance authorities will receive extra powers, such as the power to request access to the source code or to evaluate the high-risk self-assessment.

The AI Act also provides for the involvement of authorities in charge of fundamental rights and of sectoral regulators in areas falling within their own fields of competence, such as financial regulators and data protection authorities.

Finally, member states will have to designate at least one notifying authority, responsible for setting up and carrying out the necessary procedures for the designation and notification of conformity assessment bodies.

c) Financial penalties

On top of being able to request corrective actions, national authorities and the Commission will be able to impose fines, the amount of which will depend on the nature of the infringement:

- Non-compliance with prohibited AI practices: administrative fines of up to €35,000,000 or up to 7% of the total worldwide annual turnover, whichever is higher;
- Non-compliance with other provisions: administrative fines of up to €15,000,000 or up to 3% of the total worldwide annual turnover, whichever is higher.

2.6 Implementation timeline

The AI Act was published in the EU Official Journal on 12 July 2024. It will enter into force 20 days after the publication and will be applied gradually (the sketch after this list converts the offsets into calendar dates):

- Rules on prohibited AI practices will apply 6 months after its entry into force (early 2025);
- Codes of practice for GPAI models should be issued by the Commission, at the latest, 9 months after its entry into force (Q1 2025);
- Rules concerning GPAI models will apply 12 months after its entry into force (mid-2025), which is also the deadline for the designation of national market surveillance authorities and the issuance of some guidelines on high-risk AI systems by the Commission;
- The Commission will have to issue guidelines on the classification of high-risk AI systems at the latest 18 months after its entry into force (early 2026);
- Rules concerning high-risk AI systems listed in Annex III will apply 24 months after its entry into force (mid-2026);
- Rules concerning other high-risk AI systems will apply 36 months after its entry into force (mid-2027).
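As a quick aid, the sketch below converts the staggered deadlines above into calendar dates, assuming entry into force on 1 August 2024 (20 days after the 12 July 2024 publication); the `add_months` helper and the milestone labels are our own, not terms from the Act.

```python
from datetime import date

# Entry into force: 20 days after publication in the Official Journal
# (12 July 2024), i.e. 1 August 2024.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date by a whole number of months (day of month preserved)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, d.day)

# Month offsets as described in the timeline above.
milestones = {
    "Prohibited AI practices": 6,
    "GPAI model rules": 12,
    "High-risk systems listed in Annex III": 24,
    "Other high-risk systems": 36,
}

for label, offset in milestones.items():
    print(f"{label}: applicable from {add_months(ENTRY_INTO_FORCE, offset)}")
# Prohibited AI practices: applicable from 2025-02-01
# GPAI model rules: applicable from 2025-08-01
# High-risk systems listed in Annex III: applicable from 2026-08-01
# Other high-risk systems: applicable from 2027-08-01
# (The Act itself pins these dates to 2 February 2025 and 2 August of 2025, 2026 and 2027.)
```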
Thaima Samman
Member of the Paris and Brussels bars, founder of
SAMMAN Law Firm.
Benjamin de Vanssay
Member of the Brussels Bar
You can read all of our publications on our site:
www.robert-schuman.eu/en
Publishing Director: Pascale JOANNIN
ISSN 2402-614X
The opinions expressed in this text are the sole responsibility of the authors.
© All rights reserved, Fondation Robert Schuman, 2024
THE FONDATION ROBERT SCHUMAN, created in 1991 and acknowledged by State decree in 1992, is the main
French research centre on Europe. It develops research on the European Union and its policies and promotes the content
of these in France, Europe and abroad. It encourages, enriches and stimulates European debate thanks to its research,
publications and the organisation of conferences. The Foundation is presided over by Mr. Jean‑Dominique Giuliani.