ETSI TR 104 065 V1.1.1 (2025-05)
6 Article by article mapping of AI Act to ETSI Standardization programme
6.1 Mapping the AI Act to ETSI
Table 2: Article by article mapping to ETSI standards work
Chapter I: General provisions

Article 1: Subject matter
Summary: The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy Artificial Intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
ETSI mapping: This is addressed by ETSI's Directives and by the establishment of ETSI as an ESO under Regulation 1025/2012 [i.38] and subsequent revisions.
Article 2: Scope
Summary: Identifies who is addressed by the regulation.
ETSI mapping: Not of direct relevance to ETSI, as ETSI produces standards in order to allow those addressed by the scope to meet their obligations set by the regulation.
Article 3: Definitions
Summary: Defines terms used in the document.
ETSI mapping: Mapped across ETSI's deliverables into a format that meets ETSI's drafting rules. Definitions in ETSI deliverables are not normative. All of the published terms used in ETSI's documents are listed on ETSI's TEDDI tool (https://webapp.etsi.org/Teddi/).
Article 4: AI literacy
Summary: Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
ETSI mapping: This is addressed in part in Annex A of the present document. The text in clause 4.3 of the present document also applies, wherein it is identified that many of the general purpose reports prepared by ETSI can be seen as making provision for building that literacy.
Chapter II: Prohibited AI practices
Article 5: Prohibited AI practices
Summary: Lists practices that are prohibited.
ETSI mapping: As noted in clause 4.3, the mandate is framed as what cannot be done, whereas for standardization it has to be framed somewhat differently: what measures can be provided that, when followed, prevent the prohibited practices being placed on the market. Some of the mandates allow techniques to be applied only in very particular contexts. This may not be a standards issue unless the AI facility is sufficiently autonomous to be able to select its functionality based on context.
Chapter III: High Risk AI Systems
SECTION 1: Classification of AI systems as high-risk
Article 6: Classification rules for high-risk AI systems
Summary: Rules for classifying and therefore determining if a system is high-risk.
ETSI mapping: As noted in clause 5.4 of the present document, the scope statement of a standard, and the statement of purpose of a product (as defined in ETSI TS 104 224 [i.23]), will allow some indication of the likelihood of the resultant system being defined as high-risk.
Article 7: Amendments to Annex III
ETSI mapping: As above.
SECTION 2: Requirements for high-risk AI systems
Article 8: Compliance with the requirements
Summary: States that all other parts of the section are mandatory and proof of compliance is required.
ETSI mapping: Not required.
Article 9: Risk management system
Summary: A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating.
ETSI mapping: ETSI produces a number of documents aimed at containing risk. The application of the Cyber Security Controls series of ETSI TR 103 305 [i.36] (a multipart standard) to systems addresses all the lifecycle aspects requested, and that approach has been applied to AI in ETSI TR 104 030 [i.26]. At the more detailed technical assessment of risk, the ETSI TS 102 165-1 [i.37] approach applies. The specific assessment of risk as applied to market placement of a product, identified in ETSI TR 103 935 [i.34], may also apply.
Article 10: Data and data governance
Summary: High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.
ETSI mapping: In part this is addressed by the transparency and explicability obligations in ETSI TS 104 224 [i.23], in the actions identified in the supply chain report (ETSI TR 104 048 [i.25]), and by the traceability actions given in ETSI TR 104 032 [i.24].
Article 11: Technical documentation
Summary: The elements required are defined in Annex IV and have to include a DoC as defined by Article 47.
ETSI mapping: This requires best practice in auditing of design and of the supply chain. Several standards exist that address this and are classified in ETSI TR 104 029 [i.2].
Article 12: Record-keeping
Summary: Allow for the automatic recording of events over the lifetime of the system.
ETSI mapping: As stated in clause 4.3 of the present document, the mandate is to automate the recording of events over the duration of the lifetime of the system. The technical specifications for the security framework of the AI computing platform prepared by ETSI TC SAI can support the record-keeping mechanism by protecting the integrity of the collected logs, thereby guaranteeing the procedure for transparency and provision of information to deployers described in Article 13.
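The mapping above notes that record-keeping rests on protecting the integrity of collected logs. As a purely illustrative sketch, not taken from any ETSI specification, a hash-chained event log makes tampering with previously recorded events detectable on verification:

```python
import hashlib
import json

def append_event(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; any modified entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_event(log, {"ts": "2025-05-01T10:00:00Z", "type": "inference"})
append_event(log, {"ts": "2025-05-01T10:05:00Z", "type": "override"})
assert verify_chain(log)
log[0]["event"]["type"] = "edited"  # tampering with an earlier entry
assert not verify_chain(log)        # ...is detected on verification
```

A production scheme would additionally sign or externally anchor the chain head; the sketch only shows why integrity protection of logs supports the transparency obligations referred to in the mapping.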
Article 13: Transparency and provision of information to deployers
Summary: High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately.
ETSI mapping: The requirements stated in ETSI TS 104 224 [i.23] apply to both static and runtime transparency and explicability.
Article 14: Human oversight
Summary: High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
ETSI mapping: It is noted that Article 14 may inhibit the placement on the market of some autonomous systems, and particularly of Agentic-AI systems. The discipline of Functional Safety (FS) applied to such systems may allow their deployment without a direct "human in the loop", while still allowing for reasonable oversight by natural persons. The adoption of FS approaches in AI is an item of open study in ETSI.
Article 15: Accuracy, robustness and cybersecurity
Summary: High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
ETSI mapping: Metrics for accuracy, robustness and cybersecurity are a topic of growing interest in ETSI and other SDOs. An initial requirement for this is provided as part of the transparency and explicability document, ETSI TS 104 224 [i.23].
SECTION 3: Obligations of providers and deployers of high-risk AI systems and other parties
Article 16: Obligations of providers of high-risk AI systems
Summary: As per the title.
ETSI mapping: Addressed across all of ETSI's AI output.
Article 17: Quality management system
Summary: Providers of high-risk AI systems shall put a quality management system in place that shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions.
ETSI mapping: This is consistent with the guidance given for application of the Critical Security Controls of ETSI TR 103 305 [i.36] and ETSI TR 104 030 [i.26].
Article 18: Documentation keeping
Summary: Requires records to be maintained for 10 years after placement on the market.
ETSI mapping: Not covered specifically in ETSI's AI portfolio, but the general controls of ETSI TR 103 305 [i.36] apply.
Article 19: Automatically generated logs
Summary: In each case the article title is deemed self-describing.
ETSI mapping: See clause 5.4 of the present document.
Article 20: Corrective actions and duty of information
ETSI mapping: New work items establishing the AI Common Incident Expression (AICIE), alongside the development of the Universal Cybersecurity Information Exchange Framework (UCYBEX), apply.
Article 21: Cooperation with competent authorities
ETSI mapping: Not applicable.
Article 22: Authorised representatives of providers of high-risk AI systems
ETSI mapping: In ETSI TS 104 224 [i.23] (see also clause 5.4 of the present document) it is mandated that the liable party is identifiable across the supply chain.
Article 23: Obligations of importers
ETSI mapping: Not an obvious domain for technical standards.
Article 24: Obligations of distributors
Article 25: Responsibilities along the AI value chain
ETSI mapping: This is covered to an extent by each of ETSI TR 104 032 [i.24] and ETSI TR 104 048 [i.25]. It is also addressed in part in the transparency and explicability document ETSI TS 104 224 [i.23], in ensuring understanding of the value chain.
Article 26: Obligations of deployers of high-risk AI systems
ETSI mapping: Not directly applicable.
Article 27: Fundamental rights impact assessment for high-risk AI systems
ETSI mapping: The risk analysis methods developed in ETSI (e.g. ETSI TS 102 165-1 [i.37]) address harm in abstract terms and encourage assessment of the rights of the stakeholders.
SECTION 4: Notifying authorities and notified bodies
Articles 28 to 39: Various
Summary: Identifies specific actions of notifying authorities and notified bodies.
ETSI mapping: Not particularly of concern to SDOs, but of particular interest to the process of making products and services available to the market. The SDO activity in this is addressed in Section 5 of the Act.
SECTION 5: Standards, conformity assessment, certificates, registration
Article 40: Harmonised standards and standardisation deliverables
Summary: Reinforces the role of hENs giving presumption of conformity.
ETSI mapping: As stated in clause 4.2 above, "an SDO cannot simply choose to publish an hEN, rather an hEN has to be written against specific actions of the EU"; therefore ETSI, at the time of preparation of the present document, is not active in preparing hENs but will give assistance to relevant ESOs as stated in clause 5.2 above.
Article 41: Common specifications
Summary: The Commission may adopt implementing acts establishing common specifications for the requirements set out in Section 2 of this Chapter (Requirements for high-risk AI systems).
ETSI mapping: No current action (no implementing acts).
Article 42: Presumption of conformity with certain requirements
Summary: Describes the normal process associated with presumption of conformity.
ETSI mapping: No specific actions or mapping at this time.
Article 43: Conformity assessment
Article 44: Certificates
Article 45: Information obligations of notified bodies
Article 46: Derogation from conformity assessment procedure
Article 47: EU declaration of conformity
Article 48: CE marking
Article 49: Registration
Chapter IV: Transparency obligations for providers and deployers of certain AI systems
Article 50: Transparency obligations for providers and deployers of certain AI systems
Summary: Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.
ETSI mapping: This is addressed by ETSI TS 104 224 [i.23] (see also clause 5.4 of the present document) for both static and run-time conditions of the AI system.
Chapter V: General-purpose AI models
Article 51: Classification of general-purpose AI models as general-purpose AI models with systemic risk
Summary: Various (applies much of the prior chapters to models that are not specifically identified as high risk).
ETSI mapping: As for mappings in prior chapters.
Article 52: Procedure
Article 53: Obligations for providers of general-purpose AI models
Article 54: Authorised representatives of providers of general-purpose AI models
Article 55: Obligations of providers of general-purpose AI models with systemic risk
Article 56: Codes of practice
Summary: The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches.
ETSI mapping: ETSI has recently completed ETSI TS 104 223 [i.35], which has been developed from the UK Code of Practice after a public consultation involving many EU and global stakeholders.
Chapter VI: Measures in support of innovation
Articles 57 to 63: Various
Summary: The measures support a regulatory sandbox to allow stakeholders to develop and innovate in a controlled environment where regulators can facilitate testing whilst optimising regulatory oversight.
ETSI mapping: Not directly applicable, but may be used in collaboration with the "proofs of concept" sandbox outlined in ETSI TR 104 067 [i.19].
Chapter VII: Governance
Articles 64 to 69: Various
Summary: Establishes the AI Office and associated bodies.
ETSI mapping: No direct standards action expected; however, ETSI may wish to ensure that information from the AI Office and associated bodies is communicated to ETSI members.
Chapter VIII: EU database for high-risk AI systems
Article 71: EU database for high-risk AI systems listed in Annex III
Summary: The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information relating to the registration of high-risk AI systems.
ETSI mapping: No specific ETSI activity is foreseen.
Chapter IX: Post-market monitoring, information sharing and market surveillance
Article 72: Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
Article 73: Reporting of serious incidents
Articles 74 to 94: Various
Summary: Addresses the organisation of market surveillance and how it interacts with other stakeholders.
ETSI mapping (Articles 72 to 94): This is being addressed by the creation of new work items in both TC SAI and TC CYBER to report and share vulnerability information, misbehaviour, and other risk factors. This work will specify the AI Common Incident Expression (AICIE) and work alongside the development of the Universal Cybersecurity Information Exchange Framework (UCYBEX).
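The AICIE format referred to above is still to be specified by the new work items, so any concrete syntax is necessarily hypothetical. The sketch below, with field names invented purely for illustration and not drawn from AICIE or UCYBEX, shows the kind of structured, machine-readable incident record that such an exchange format typically standardises:

```python
from dataclasses import dataclass, asdict, field
import json

@dataclass
class AIIncidentRecord:
    """Hypothetical AI incident record; all field names are illustrative only."""
    incident_id: str
    reported_at: str                 # ISO 8601 timestamp of the report
    system_id: str                   # identifier of the affected AI system
    severity: str                    # e.g. "serious" per the Article 73 duty
    description: str
    affected_articles: list = field(default_factory=list)

record = AIIncidentRecord(
    incident_id="INC-2025-0001",
    reported_at="2025-05-01T12:00:00Z",
    system_id="example-high-risk-system",
    severity="serious",
    description="Unexpected misbehaviour observed in deployed model output.",
    affected_articles=["Article 72", "Article 73"],
)

# A common structured format lets providers, TC SAI/TC CYBER tooling and
# authorities exchange the same record without bespoke parsing.
serialized = json.dumps(asdict(record), indent=2)
```

The design point is only that a shared schema, whatever AICIE ultimately defines, is what makes reporting and sharing of vulnerability and misbehaviour information interoperable across stakeholders.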
Chapter X: Codes of conduct and guidelines
Article 95: Codes of conduct for voluntary application of specific requirements
Summary: The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems.
ETSI mapping: ETSI has recently completed ETSI TS 104 223 [i.35], developed from the UK Code of Practice after a public consultation involving many EU and global stakeholders, which may be instrumental in achieving the objectives of this article.
Article 96: Guidelines from the Commission on the implementation of this Regulation
Summary: The Commission shall develop guidelines on the practical implementation of this Regulation [i.1].
ETSI mapping: As above.
Chapter XI: Delegation of power and committee procedure
Articles 97 and 98: Exercise of the delegation; Committee procedure
Summary: Addressed to the EU.
ETSI mapping: Not applicable to ETSI.
Chapter XII: Penalties
Articles 99 to 101: Various
Summary: Identifies the penalties if a provider fails to comply with the requirements set out by the legislation.
ETSI mapping: Not strictly relevant to SDOs. Whilst SDO members may be subject to the identified penalties, the role of the SDO here is to offer technical means that limit the risk of being subject to penalties.
Chapter XIII: Final provisions
Articles 102 to 110: Amendments to existing regulations
Summary: Identifies where existing regulation is directly impacted by the AI Act [i.1].
ETSI mapping: This will be further evaluated by TC SAI and OCG AI, and actions given to relevant ETSI TBs where required.
Article 111: AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
Summary: Large-scale IT systems that have been placed on the market or put into service before 2 August 2027 shall be brought into compliance with this Regulation by 31 December 2030.
ETSI mapping: No specific action from ETSI.
Article 112: Evaluation and review
Summary: The Commission shall assess the need for amendment of the list set out in Annex III and of the list of prohibited AI practices laid down in Article 5, once a year following the entry into force of this Regulation.
ETSI mapping: No specific action from ETSI.
Article 113: Entry into force and application
Summary: Applies from 2 August 2026, with some exceptions: Articles 1 through 5 apply from 2 February 2025 (i.e. already in force); Chapter III, Section 4 and others apply from 2 August 2025; Article 6(1) applies from 2 August 2027.
ETSI mapping: No specific action from ETSI. The harmonised standards required (see clause 5.2 of the present document) have to take account of these dates, as hENs should be available to give presumption of conformity prior to the relevant in-force dates.
6.2 Mapping ETSI TC SAI and ISG AI output to the AI Act
Table 2 can be presented in a different way that looks at some specific output of ETSI, in this instance from TC SAI, mapping from that output back to specific articles of the AI Act [i.1]. Prior to this, a summary of the security principles for AI defined in ETSI TS 104 223 [i.35] is given. The intent of ETSI TS 104 223 [i.35], as indicated by its title, is to define "Baseline Cyber Security Requirements for AI Models and Systems"; it is thus intended to ensure, in addition to security of the AI model and system, that users of ETSI TS 104 223 [i.35] are able to prepare their products and services to be placed on the market and to conform to any applicable regulation, including the AI Act [i.1].