Understanding the EU AI Act
The EU AI Act is a landmark regulation aimed at governing the use of
artificial intelligence within the European Union. It establishes a risk-based
framework for AI systems, categorizing them into different risk levels and
imposing corresponding obligations on providers and deployers. This
comprehensive legislation seeks to balance the benefits of AI with the
potential harms, ensuring responsible development, deployment, and use of
AI technologies while safeguarding fundamental rights and fostering trust in
AI systems.
Key Obligations of the EU AI Act
1 Prohibited Practices
Certain AI practices deemed inherently harmful are strictly
prohibited, including real-time biometric identification in public
spaces for law enforcement, social scoring, and manipulative or
exploitative AI systems.
2 High-Risk AI Systems
Stringent requirements apply to high-risk AI systems, including
conformity assessments, risk management systems, and human
oversight, to mitigate potential harm to health, safety, or fundamental
rights.
3 Transparency Obligations
Transparency is paramount: users must be clearly notified when they are
interacting with AI, especially with systems that generate synthetic content
or employ emotion recognition and biometric categorization.
4 General-Purpose AI Models
Providers of general-purpose AI models face specific obligations
related to technical documentation, transparency, and risk mitigation,
particularly for models with systemic risk potential.
Definitions and Scope
AI System Definition
The Act defines an AI system based on its characteristics, encompassing machine learning, logic- and knowledge-based approaches, and statistical methods. This broad definition captures a wide range of AI applications, ensuring comprehensive coverage of the regulatory framework.
Scope of the Act
The AI Act applies to providers, deployers, importers, and distributors of AI systems within the EU, as well as to those outside the EU whose AI output is used within the EU. This extraterritorial reach highlights the significance and global impact of the legislation.
Key Roles
The Act clearly defines roles such as providers, deployers, importers, and distributors, outlining specific responsibilities for each to ensure accountability and compliance throughout the AI lifecycle.
Exceptions to the AI Act
Military and Defence
AI systems exclusively used for military, defence, or national security
purposes are exempt, recognizing the specific nature of these
applications. However, any use beyond these purposes brings the
systems under the Act's purview.
Research and Development
AI systems solely for scientific research and development are also
excluded, promoting innovation and experimentation. However, real-
world testing of these systems must adhere to specific guidelines.
Prohibited AI Systems
Exploitative AI
Systems that exploit vulnerabilities due to age or disability.
Social Scoring
AI systems used for social scoring, leading to discriminatory treatment.
Real-time Biometric Identification
Use in public spaces for law enforcement purposes, subject to specific exceptions.
High-Risk AI Systems: Definition
High-risk AI systems are those that pose significant risks to health, safety, or
fundamental rights. These include AI systems used in critical infrastructure,
healthcare, law enforcement, and employment. The Act establishes a
specific risk assessment process for these systems.
1 Identification
Identify potential hazards related to the AI system's intended use.
2 Analysis
Analyse the severity and probability of each hazard occurring.
3 Evaluation
Evaluate the overall risk level based on the analysis and determine
necessary mitigation measures (a scoring sketch follows below).
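To make the evaluation step concrete, here is a minimal Python sketch that combines severity and probability into a risk level. The five-point scales, thresholds, and hazard names are illustrative assumptions; the Act does not prescribe a particular scoring scheme.

```python
# Illustrative risk evaluation: severity x probability -> risk level.
# The 1-5 scales and the thresholds are assumptions for this sketch;
# the AI Act does not prescribe a specific scoring scheme.

def risk_level(severity: int, probability: int) -> str:
    """Combine severity and probability (each 1-5) into a risk level."""
    score = severity * probability
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical hazards for a CV-screening system (illustrative only).
hazards = [
    {"hazard": "biased ranking of applicants", "severity": 4, "probability": 3},
    {"hazard": "mislabelled qualifications", "severity": 2, "probability": 4},
]

for h in hazards:
    print(h["hazard"], "->", risk_level(h["severity"], h["probability"]))
```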
Risk Management
1 Identify Hazards
Identify potential risks associated with the AI system,
considering all stages of its lifecycle and potential impact on
fundamental rights.
2 Assess Risks
Evaluate the likelihood and severity of each identified
hazard, considering the specific context of the AI system's
deployment.
3 Implement Mitigation Measures
Implement appropriate measures to mitigate the identified
risks, such as technical safeguards, human oversight, and
data quality controls.
4 Monitor and Review
Continuously monitor the effectiveness of risk mitigation
measures and review the risk management process regularly
to adapt to evolving risks and circumstances.
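In practice, these four steps can be supported by a lightweight risk register that records each hazard, its assessment, its mitigation, and a review date. The sketch below uses assumed field names and an assumed 90-day review cadence; it is one possible shape, not a mandated format.

```python
# Sketch of a risk register covering identify -> assess -> mitigate -> review.
# Field names and the 90-day review cadence are assumptions, not AI Act rules.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class RiskEntry:
    hazard: str
    likelihood: int          # 1 (rare) to 5 (frequent) -- assumed scale
    severity: int            # 1 (minor) to 5 (critical) -- assumed scale
    mitigation: str = ""
    next_review: date = field(
        default_factory=lambda: date.today() + timedelta(days=90))

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    RiskEntry("training data under-represents older applicants", 3, 4,
              mitigation="rebalance dataset; add fairness tests"),
]

# Periodic review: flag entries that are due or high-scoring.
for entry in register:
    if entry.score >= 12 or entry.next_review <= date.today():
        print(f"REVIEW: {entry.hazard} (score={entry.score})")
```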
Quality Management System
The quality management system (QMS) ensures the high-risk AI system's compliance throughout its life cycle. It
includes documented policies, procedures, and instructions covering design, development, testing, data management,
risk management, post-market monitoring, and incident reporting.
Conformity Assessment
Processes and procedures to assess and demonstrate compliance.
Risk Management
Integrated risk management processes throughout the AI system lifecycle.
Data Governance
Robust data management processes ensuring data quality, accuracy, and security.
Post-Market Monitoring
Ongoing monitoring, change control and incident reporting mechanisms.
Data and Data Governance
High-quality data is essential for trustworthy AI. The Act mandates data
governance measures to ensure data accuracy, completeness, and
representativeness, minimizing bias. Data quality plans assess accuracy,
completeness, conformity, and timeliness of data used for training,
validation, and testing AI systems.
Data Quality
Ensuring data accuracy, completeness, and reliability.
Data Security
Protecting data from unauthorized access and misuse.
Bias Mitigation
Addressing and minimizing biases in data and AI systems.
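As an illustration of how a data quality plan's checks might be automated, the sketch below tests completeness and basic representativeness on a training table. The column names and the 5% thresholds are assumptions.

```python
# Illustrative data quality checks for a training dataset.
# The column name ("age_band") and the 5% thresholds are assumptions.
import pandas as pd

def completeness(df: pd.DataFrame, max_missing: float = 0.05) -> dict:
    """Flag columns whose share of missing values exceeds the threshold."""
    missing = df.isna().mean()
    return {col: round(share, 3)
            for col, share in missing.items() if share > max_missing}

def representativeness(df: pd.DataFrame, col: str,
                       min_share: float = 0.05) -> dict:
    """Flag categories that are under-represented in the given column."""
    shares = df[col].value_counts(normalize=True)
    return {cat: round(share, 3)
            for cat, share in shares.items() if share < min_share}

df = pd.DataFrame({
    "age_band": ["18-30"] * 90 + ["60+"] * 3 + [None] * 7,
    "label": [0, 1] * 50,
})
print("incomplete columns:", completeness(df))
print("under-represented groups:", representativeness(df, "age_band"))
```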
Accuracy, Robustness, and Cybersecurity
High-risk AI systems must be accurate, robust, and secure. This includes technical and organizational measures for
resilience, backup and redundancy solutions, and cybersecurity protections against threats like data poisoning and
adversarial attacks. Systems that continue to learn post-deployment must mitigate risks of biased outputs.
Accuracy
Ensuring the AI system produces reliable and correct outputs.
Robustness
Building resilience against failures and unexpected inputs.
Cybersecurity
Protecting against cyber threats and data breaches.
Technical Documentation and Recordkeeping
Comprehensive technical documentation is required for high-risk AI systems, covering system design, development,
data used, risk management, and post-market monitoring. Detailed logs must be maintained throughout the AI system's
lifecycle, ensuring transparency and traceability.
System Description
Detailed explanation of the AI system's purpose, functionality, and architecture.
Data Information
Comprehensive information about the data used to train, validate, and test the AI system, including data sources, processing methods, and quality assurance procedures.
Risk Management
Documentation of the risk management system, including identified hazards, risk assessment methodologies, and mitigation strategies.
Post-Market Monitoring Plan
Description of the plan for ongoing monitoring and data collection after the AI system is deployed.
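The lifecycle logging duty implies structured, timestamped, machine-readable records. Below is a minimal sketch of an append-only traceability log; the field names and the JSON-lines format are assumptions, not a prescribed schema.

```python
# Sketch of structured, append-only event logging for traceability.
# Field names and the JSON-lines format are assumptions for illustration.
import json
from datetime import datetime, timezone

def log_event(path: str, system_id: str, event: str, **details) -> None:
    """Append one timestamped, machine-readable record per system event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        **details,
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_event("ai_system.log", "cv-screener-v2", "inference",
          model_version="2.3.1", outcome="shortlisted")
```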
Transparency Requirements
Transparency is crucial for building trust in AI systems. Users must be
informed when interacting with AI. Systems generating synthetic content
must be clearly labelled. Deployers of limited-risk systems using emotion
recognition or biometric categorization must notify affected individuals.
User Notification
Informing users when they are interacting with an AI system.
Content Labelling
Marking synthetic content as artificially generated.
Transparency Notices
Providing clear information about the use of emotion recognition or
biometric categorization.
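One simple way to meet the content-labelling duty is to attach a machine-readable disclosure to every piece of generated output. The sketch below assumes an HTML-comment carrier and an ad-hoc metadata schema; the Act requires that synthetic content be marked but does not mandate this particular format.

```python
# Sketch: wrap generated content with a machine-readable AI disclosure.
# The metadata schema and HTML-comment carrier are assumptions.
import json
from datetime import datetime, timezone

def label_synthetic(content: str, generator: str) -> str:
    """Append a machine-readable disclosure marking content as AI-generated."""
    disclosure = {
        "ai_generated": True,
        "generator": generator,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return content + "\n<!-- ai-disclosure: " + json.dumps(disclosure) + " -->"

print(label_synthetic("Quarterly summary ...", generator="acme-llm-v1"))
```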
General-Purpose AI Models
General-purpose AI models, trained on vast datasets and capable of diverse
tasks, have specific obligations. Providers must maintain technical
documentation, including training and testing processes, and make a
summary of training data publicly available.
1 Technical Documentation
Comprehensive documentation of the AI model's development,
training, and testing processes.
2 Data Summary
Publicly available summary of the data used to train the AI model,
including its source, scope, and characteristics.
3 Risk Mitigation
For models with systemic risk, providers must conduct adversarial
testing and implement robust cybersecurity measures.
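The publicly available data summary mentioned above could be published as a simple structured file. The sketch below is loosely modelled on the kind of information the Act asks providers to summarize; every field name and value is an illustrative assumption.

```python
# Sketch of a public training-data summary for a general-purpose model.
# All field names and values are illustrative assumptions.
import json

training_data_summary = {
    "model": "example-gpai-1",               # hypothetical model name
    "data_sources": [
        {"name": "licensed news corpus", "type": "licensed", "share": 0.4},
        {"name": "public web crawl", "type": "publicly available", "share": 0.6},
    ],
    "collection_period": "2019-2024",
    "languages": ["en", "de", "fr"],
    "filtering": "deduplication; removal of known illegal content",
}
print(json.dumps(training_data_summary, indent=2))
```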
General-Purpose AI Models with Systemic Risk
General-purpose AI models capable of widespread societal impact are
classified as posing "systemic risk" and carry additional obligations.
Providers must evaluate and mitigate systemic risks, report serious incidents
to the EU AI Office, and ensure robust cybersecurity protection.
1 Risk Assessment
Thorough evaluation and mitigation of systemic risks
associated with the AI model.
2 Incident Reporting
Timely reporting of serious incidents and remediation efforts
to the EU AI Office.
3 Cybersecurity
Implementation of strong cybersecurity measures to protect
against unauthorized access, data breaches, and adversarial
attacks.
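The incident-reporting obligation implies keeping structured internal records of what must eventually be communicated to the EU AI Office. The sketch below uses assumed field names; it does not reflect any official reporting schema.

```python
# Sketch of an internal serious-incident record for a GPAI provider.
# Field names are assumptions; this is not an official EU schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class SeriousIncident:
    incident_id: str
    model: str
    description: str
    detected_at: str
    remediation: str
    reported_to_ai_office: bool = False

incident = SeriousIncident(
    incident_id="INC-2025-004",
    model="example-gpai-1",
    description="model produced instructions enabling a cyber attack",
    detected_at="2025-03-02T14:05:00Z",
    remediation="filter updated; affected endpoint disabled pending review",
)
print(json.dumps(asdict(incident), indent=2))
```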
Innovation Consideration and Regulatory Sandboxes
To balance regulation with innovation, Member States are required to establish AI regulatory sandboxes. These
sandboxes allow developers to test and validate innovative AI systems in a controlled environment before market
deployment, fostering responsible innovation.
Testing and Validation
Sandboxes offer a safe environment for testing AI systems and validating their compliance with the AI Act.
Collaboration
They foster collaboration between developers, regulators, and other stakeholders, facilitating knowledge sharing and best practices.
Innovation
Sandboxes promote responsible AI innovation by allowing developers to experiment with cutting-edge technologies while ensuring compliance and mitigating potential risks.
Penalties for Non-Compliance
Non-compliance with the AI Act carries significant penalties. Fines for prohibited practices can reach up to €35 million
or 7% of global annual turnover, whichever is higher. Other violations can incur fines of up to €15 million or 3% of
turnover, and supplying inaccurate information to authorities can draw fines of up to €7.5 million or 1% of turnover.
Prohibited Practices
Up to €35 million or 7% of global annual turnover.
Other Violations
Up to €15 million or 3% of global annual turnover.
Inaccurate Information
Up to €7.5 million or 1% of global annual turnover.
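Because each cap is the higher of a fixed amount and a share of turnover, maximum exposure reduces to a simple max(). A worked sketch:

```python
# Worked sketch: maximum fine exposure is the higher of the fixed cap
# and the percentage of global annual turnover, per violation tier.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "inaccurate_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A company with €2 billion global turnover: 7% = €140m exceeds €35m.
print(f"€{max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```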
Next Steps for Practitioners: Timeline
The AI Act entered into force on 1 August 2024, with most provisions
applying from 2 August 2026. This timeline allows organizations time to
prepare for compliance, including assessing their AI systems, implementing
necessary safeguards, and updating their policies and procedures.
1 2024
AI Act enters into force.
2 2025
Preparation for compliance, including AI system
assessments and policy updates.
3 2026
Majority of AI Act provisions become applicable.
Data Protection Input and Collaboration
Data protection specialists play a key role in AI Act compliance, given the overlap with data protection regulations.
Collaboration with data protection, technology, compliance and legal teams is crucial for meeting data management, risk
management, compliance management and transparency obligations, and regular engagement with stakeholders ensures
a clear, shared understanding of how AI may be used.
Data Protection Officer
Provides expert guidance on data protection requirements, conducts audits and ensures staff are trained.
Technology Officer
Oversees technical documentation, AI system engineering, data management, cybersecurity and conformity.
Compliance Officer
Establishes a compliance management system and monitors adherence to the AI Act.
Risk Manager
Establishes a risk management system and oversees the management of risk throughout the AI lifecycle.
AI Inventory and Third-Party Management
Creating an inventory of all AI systems used within the organization is crucial for compliance. This includes identifying
both internally developed and third-party AI systems. Due diligence is essential when selecting AI providers, ensuring
their compliance with the Act. Risk assessments should align with the Act's classification scheme.
Internal AI Systems
Documenting and assessing all AI systems developed and deployed within the organization.
Third-Party AI Systems
Identifying and assessing all AI systems provided by third-party vendors.
Risk Assessment
Conducting thorough risk assessments of all AI systems based on the Act's risk categorization framework.
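An AI inventory can start as a simple structured list of systems recording origin, vendor, risk category, and owner. The entries and field names below are illustrative assumptions:

```python
# Sketch of a minimal AI inventory. The entries and field names are
# assumptions; the Act's risk tiers drive the "risk_category" field.
inventory = [
    {"name": "cv-screener", "origin": "internal", "vendor": None,
     "risk_category": "high", "owner": "HR"},
    {"name": "support-chatbot", "origin": "third-party",
     "vendor": "ExampleVendor Ltd", "risk_category": "limited",
     "owner": "Customer Service"},
]

# Compliance triage: surface high-risk systems first, and flag
# third-party systems for vendor due diligence.
for system in sorted(inventory, key=lambda s: s["risk_category"] != "high"):
    flag = "DUE DILIGENCE" if system["origin"] == "third-party" else "INTERNAL REVIEW"
    print(f'{system["name"]:<16} {system["risk_category"]:<8} {flag}')
```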
Document AI System Acquisition and Use
Processes
1 Acquisition
Define clear procedures for selecting and acquiring AI systems, including due diligence and risk assessment.
2 Use
Establish policies and guidelines for the appropriate use of AI systems within the organization.
3 Monitoring
Implement ongoing monitoring and incident reporting mechanisms to ensure continued compliance and address any issues promptly.
Establish clear processes for acquiring and using AI systems, integrating risk assessment into procurement decisions.
Service-level agreements should address AI use, incident notification, and compliance. Organization-wide policies
should govern permissible AI use cases and escalation procedures. An AI governance committee can provide holistic
oversight.
Key Considerations for AI Programmes: Trust and Verification
Always verify the outputs of AI systems for accuracy and potential bias,
recognizing that AI is not infallible. "Trust but verify" should be a guiding
principle in AI deployment.
1 Accuracy Checks
Implement mechanisms to validate the accuracy of AI-generated
outputs.
2 Bias Detection
Utilize tools and techniques to identify and mitigate potential biases
in AI systems.
3 Human Review
Incorporate human review and oversight to ensure the reliability and
trustworthiness of AI-generated results.
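As one concrete bias-detection check, the sketch below computes the demographic parity difference, i.e. the gap in favourable-outcome rates between groups, over a set of model decisions. The group labels and the 0.1 alert threshold are assumptions:

```python
# Sketch: demographic parity difference as a simple bias check.
# Group labels and the 0.1 alert threshold are assumptions.
from collections import defaultdict

def parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """Max difference in favourable-outcome rates across groups.
    outcomes: (group, decision) pairs, decision 1 = favourable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

decisions = ([("group_a", 1)] * 60 + [("group_a", 0)] * 40
             + [("group_b", 1)] * 35 + [("group_b", 0)] * 65)
gap = parity_gap(decisions)
print(f"parity gap = {gap:.2f}", "-> investigate" if gap > 0.1 else "-> ok")
```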
Existing Compliance Requirements
Consider existing compliance requirements, particularly data protection regulations like GDPR, when deploying AI
systems. AI systems often process personal data, necessitating adherence to privacy principles and lawful data
processing practices.
Data Minimization
Collect and process only the minimum amount of personal data necessary for the specific AI purpose.
Purpose Limitation
Use personal data only for the specified purpose for which it was collected.
Data Security
Implement appropriate security measures to protect personal data from unauthorized access, use, or disclosure.
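Data minimization can be enforced mechanically by projecting each record onto the fields a given AI purpose actually needs before the data reaches the system. The purpose and field names below are assumptions:

```python
# Sketch: enforce data minimization by projecting records onto a
# per-purpose field whitelist. Purpose and field names are assumptions.
ALLOWED_FIELDS = {
    "cv_screening": {"skills", "years_experience", "qualifications"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields whitelisted for the stated purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

applicant = {"name": "J. Doe", "age": 42, "skills": ["python"],
             "years_experience": 12, "qualifications": ["BSc"]}
print(minimize(applicant, "cv_screening"))
# name and age are dropped before the AI system ever sees them
```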
Revising Existing Policies
Update existing acceptable use policies to address AI-specific
considerations, or create new policies if necessary. Ensure alignment with
the AI Act and the organization's risk appetite regarding AI deployment.
Acceptable Use
Define permissible and prohibited use cases for AI within the
organization.
Data Handling
Specify guidelines for collecting, storing, and processing data used by
AI systems.
Risk Management
Outline procedures for assessing and mitigating risks associated with AI
deployment.
Transparency
Establish transparency requirements for AI systems and communication
with users.
Adapting Cybersecurity and Privacy Policies
Integrate security by design and privacy by design principles into AI system
development. Address security and privacy considerations upfront when
partnering with third-party AI providers to mitigate potential risks and ensure
compliance.
Security by Design
Incorporate security considerations throughout the entire AI lifecycle.
Privacy by Design
Embed privacy principles into the design and development of AI systems.
Third-Party Risk Management
Assess and manage security and privacy risks associated with third-party AI
providers.
Promoting AI Literacy
Train employees on AI technologies, risks, and ethical considerations. Tailor
training to specific job roles and departmental needs to ensure effective
understanding and implementation of the AI Act requirements.
1 AI Fundamentals
Provide basic education on AI concepts, technologies, and
applications.
2 Risk Awareness
Train employees on the potential risks associated with AI, including
bias, discrimination, and privacy violations.
3 Ethical Considerations
Educate employees on the ethical implications of AI and promote
responsible AI practices.
Designating an AI Lead
Appoint a dedicated AI lead responsible for overseeing AI initiatives, tracking AI tools in use, and ensuring compliance
with the AI Act. This individual should collaborate closely with cybersecurity, privacy, legal, procurement, risk, and audit
teams.
Oversight
Oversee all AI-related activities within the organization.
Compliance
Ensure compliance with the AI Act and other relevant regulations.
Collaboration
Facilitate collaboration between different teams involved in AI development and deployment.
Cost Analysis and Audits
Cost Analysis
Evaluate the financial implications of implementing and maintaining AI systems.
Audits
Conduct regular audits to ensure transparency and compliance with the AI Act.
Traceability
Maintain clear documentation and records of AI system development, data sources, and decision-making processes.
Developing AI Ethical Guidelines
Establish and document organization-wide ethical guidelines for AI use,
addressing issues like data privacy, bias, and societal impact. Review and
update these guidelines regularly to adapt to evolving AI capabilities and
societal expectations.
1 Data Privacy
Establish clear guidelines for protecting personal data used by AI
systems.
2 Bias Mitigation
Develop strategies to address and minimize biases in AI systems and
their outputs.
3 Societal Impact
Consider the broader societal implications of AI use and promote
responsible AI practices.
Considering Societal Impacts
Evaluate the potential societal impacts of AI deployment, such as job displacement and the spread of misinformation.
Engage in responsible AI practices and contribute to public discourse on the ethical and societal implications of AI
technologies.
Job Displacement
Analyze the potential impact of AI on employment and develop strategies to mitigate negative consequences.
Misinformation
Implement measures to combat the spread of misinformation and deepfakes generated by AI systems.
Ethical Debate
Participate in public discussions and contribute to the development of ethical frameworks for AI.
Conclusion: Ongoing Adaptation
The EU AI Act provides a comprehensive framework for responsible AI development and deployment. Compliance
requires ongoing adaptation to evolving AI capabilities and societal expectations. Staying informed about AI
advancements, regulatory updates, and ethical considerations is crucial for maximizing the benefits of AI while
mitigating its risks.
1 Information
Stay informed about the latest AI advancements and regulatory updates.
2 Adaptation
Continuously adapt AI policies and practices to align with evolving requirements and ethical considerations.
3 Collaboration
Foster collaboration and knowledge sharing within the organization and with external stakeholders.