
International Cybersecurity Law Review (2024) 5:1–20

https://doi.org/10.1365/s43439-023-00107-9

ORIGINAL PAPER

Blockchain for Artificial Intelligence (AI): enhancing
compliance with the EU AI Act through distributed
ledger technology. A cybersecurity perspective

Simona Ramos · Joshua Ellul

Received: 9 August 2023 / Accepted: 1 December 2023 / Published online: 25 January 2024
© The Author(s) 2024

Abstract The article aims to investigate the potential of blockchain technology
in mitigating certain cybersecurity risks associated with artificial intelligence (AI)
systems. Aligned with ongoing regulatory deliberations within the European Union
(EU) and the escalating demand for more resilient cybersecurity measures within the
realm of AI, our analysis focuses on specific requirements outlined in the proposed
AI Act. We argue that by leveraging blockchain technology, AI systems can align
with some of the requirements in the AI Act, specifically relating to data governance,
record-keeping, transparency and access control. The study shows how blockchain
can successfully address certain attack vectors related to AI systems, such as data
poisoning in trained AI models and data sets. Likewise, the article explores how
specific parameters can be incorporated to restrict access to critical AI systems,
with private keys enforcing these conditions through tamper-proof infrastructure.
Additionally, the article analyses how blockchain can facilitate independent audits
and verification of AI system behaviour. Overall, this article sheds light on the
potential of blockchain technology in fortifying high-risk AI systems against cyber
risks, contributing to the advancement of secure and trustworthy AI deployments.
By providing an interdisciplinary perspective of cybersecurity in the AI domain, we
aim to bridge the gap that exists between legal and technical research, supporting
policy makers in their regulatory decisions concerning AI cyber risk management.

Keywords Policy · Regulation · AI Act · DLT

 Simona Ramos
Department of Information and Communication Technologies Engineering (ETIC), University
Pompeu Fabra, Barcelona, Spain
E-Mail: [email protected]
Joshua Ellul
Centre for Distributed Ledger Technologies, University of Malta, Msida, Malta
E-Mail: [email protected]


1 Introduction

As we write this article, it is evident that there are ever-increasing advancements in
artificial intelligence (AI) technologies, a rapid adoption of AI-based products and
services and national efforts to provide safeguards against negative consequences of
AI. The European Commission’s Communication Report defines AI as: “Artificial
intelligence (AI) refers to systems that display intelligent behaviour by analysing
their environment and taking actions—with some degree of autonomy—to achieve
specific goals” [17]. The International Data Corporation, a market intelligence firm,
estimates that the worldwide AI market will reach a compound annual growth rate
(CAGR) of 18.6% in the 2022–2026 period, peaking at 900 billion dollars in 2026
[22].
Beyond AI’s potential, it is also a prominent example of a technology where
cyber risks are becoming an alarming threat [51]. As adversarial actors are actively
acquiring knowledge and skills to enhance the efficacy of their attacks, AI technol-
ogy is becoming a focal point of attack due to its ever-increasing economic and
social significance. Whilst AI systems are susceptible to attacks that are commonly
encountered by traditional software, they are also vulnerable to specific attacks that
aim to exploit their unique architectures based on knowledge of how such AI mod-
els operate. Furthermore, in AI systems, data can be weaponized in novel ways,
necessitating changes in data collection, storage, and usage practices [8].
In response to such cyber threats, the European Union Agency for Cybersecurity
(ENISA) has recently released a report that delineates the prevailing cybersecurity
and privacy threats, as well as vulnerabilities inherent in AI use cases [14]. The
analysis primarily concentrates on the identification of threats and vulnerabilities
associated with machine learning techniques, while also considering broader as-
pects of AI systems. The field of AI presents several unresolved challenges that
necessitate further research, including attaining verifiability, reliability, explainabil-
ity, auditability, robustness, and unbiasedness in AI systems.
Additionally, the quality of datasets emerges as a critical concern, as: (a) the
maxim “garbage in/garbage out” highlights the requirement for high-quality inputs
to yield satisfactory outputs; and (b) unwanted biases could emerge due to un-
balanced datasets1. These issues are listed as open research questions by ENISA,
alongside the need for designing more attack resilient AI systems. The regulatory
concern over AI cyber risks was also noted in 2020, with the release of the docu-
ment on the EU’s Cybersecurity Strategy for the Digital Decade [15], maintaining:
“Cybersecurity must be integrated into all these digital investments, particularly key
technologies like Artificial Intelligence (AI), encryption and quantum computing,
using incentives, obligations and benchmarks”. The need for improved cybersecu-
rity measures in AI systems extends beyond the European Union. The Center for
Security and Emerging Technologies in the United States has also underscored the
urgency for policymakers to swiftly and efficiently address potential avenues for
reducing cyber vulnerabilities in the realm of AI [29].

1 See section on Open Issues and Challenges under the Artificial Intelligence and Cybersecurity Research
Report, ENISA (2023).


The proposed AI Act by the European Union seeks to establish a comprehensive
regulatory framework for AI systems, with a primary focus on addressing ethical
and legal considerations, yet it also recognizes and emphasises the significance of
cybersecurity within AI systems2. During the same period in which the AI Act was
being discussed and developed, blockchain was posing similar techno-regulatory
concerns around the world, particularly due to its use in cryptocurrencies, for which
technology-focused regulation was proposed and the EU's Markets in Crypto-Assets
(MiCA) Regulation was eventually passed [12]. While aspects of blockchain and other
distributed ledger technologies (DLT), particularly their decentralised nature and
immutable offerings, have posed challenges to regulators, we see their potential to
fill the compliance and risk gaps the AI Act leaves. We herein suggest how blockchain
affordances can be used to mitigate certain AI-related cyber issues, increasing the
overall security of AI-based systems. In this article we examine how blockchain
and DLT can enhance compliance with the EU AI Act and further reinforce cyber-
security measures.
In a highly complex ecosystem such as AI and cybersecurity, academic literature
has generally focused on either the technical or the purely legal aspects, creating
an interdisciplinary gap that requires further attention. On the technical side, con-
siderable attention has been devoted to exploring the diverse range of cybersecurity
challenges associated with AI models [1, 25–27]. A plethora of studies have been
conducted to delve into the technical aspects and vulnerabilities that arise in AI sys-
tems [33, 50]. Studies have investigated various dimensions of AI security, aiming
to identify potential attack vectors and develop effective defence mechanisms. It is
worth noting that this field, like the development of the technology itself, is highly
dynamic and continuously evolving. As attack techniques are becoming increas-
ingly complex and sophisticated, there is a need for ongoing research to uncover
new vulnerabilities and develop robust countermeasures.
On the regulatory side, several studies explore the connection between AI and
cybersecurity. For example, a study by Andraško et al. (2021) analyses the regula-
tory intersections between AI, data protection and cyber security within the EU legal
framework [4]. Biasin and Kamenjašević (2022) [6] examine cybersecurity issues of
medical devices from a regulatory standpoint, underlining novel challenges arising
from the AI Act and NIS 2 Directive proposals. Comiter (2019) [8] has highlighted
the disconnect between cyber policy and AI systems. The author asserts that effec-
tively addressing cyber issues associated with AI necessitates novel approaches and
solutions that should be explicitly incorporated into relevant regulations. In a simi-
lar direction, a study by Mueck et al. (2023) [34] examines the upcoming European
Regulations on Artificial Intelligence and Cybersecurity and provides an overview
of the status of policy actions related to cyber regulation in AI. Ellul (2022)
[12] argues that regulation should not be AI-specific but focused on software used
in specific sectors and activities, and later, Ellul et al. (2023) [13] propose the need
for techno-regulatory solutions to support software (and AI) related regulation.

2 See Article 15 of the Proposal for a Regulation of the European Parliament and of the Council Laying
Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final
(April 2021).


When it comes to the intersection between blockchain and AI, Shinde et al.
(2021) [43] have performed a bibliometric and literature analysis of how blockchain
provides a security blanket to AI-based systems. Likewise, Mamoshina et al. (2017)
[31] review emerging blockchain applications specifically targeting the AI area. They
also identify and discuss open research challenges of utilising blockchain technolo-
gies for AI. Furthermore, the authors converge blockchain and next-generation AI
technologies as a way to decentralise and accelerate biomedical research and health-
care. Short et al. (2020) [44] examine how blockchain based technologies can be
used to improve security in federated learning systems.
To the best of our knowledge, there is a lack of research examining whether
blockchain could serve as a tool for achieving compliance with legal AI cybersecurity
requirements. In line with Ellul et al. (2023) [13], who maintain that the problem of
technology regulation can be also addressed through the use of technology itself, in
the following paragraphs we aim to examine how blockchain can be used to mitigate
certain cybersecurity risks and attacks related to high-risk AI systems, and to what
extent these measures meet some of the cyber requirements set out in the AI
Act.
More specifically, we propose that blockchain can (a) mitigate certain cyber
attacks, such as data poisoning in trained AI models and datasets. Likewise, by em-
ploying decentralised infrastructure and blockchain technology, (b) AI systems can
benefit from cryptographically-secured guardrails, reducing the likelihood of misuse
or exploitation for adversarial purposes. Furthermore, we explore (c) how developers
can restrict AI’s access to critical infrastructure through tamper-proof decentralised
infrastructure such as blockchains and smart contracts. Additionally, we examine
(d) how blockchain can enable secure and transparent data sharing mechanisms
through decentralised storage, augmenting data integrity and immutability in AI
systems. Furthermore, we analyse (e) how blockchain facilitates independent audits
and verification of AI systems, ensuring their intended functionality and mitigating
concerns related to bias and malicious behaviour.
By leveraging blockchain technology, AI systems can align with some of the
requirements mandated in the AI Act, specifically in terms of data, data governance,
record-keeping, transparency and access control. Blockchain’s decentralised and
tamper-proof nature helps address some of these requirements, providing a potential
foundation for accountable and trustworthy AI systems. Through this research, this
article sheds light on the potential of blockchain technology in fortifying high-
risk AI systems against cyber risks, contributing to the advancement of secure and
trustworthy AI deployments (both in the EU and beyond) and to guide policy makers
in their decisions concerning AI cyber risk management.
The rest of the paper is organised as follows: in Sect. 2, we provide a general
overview of the cybersecurity risks in AI systems emphasising attack vectors relevant
for our analysis. In Sect. 3, we touch upon the AI Act and cybersecurity. In Sect. 4,
we delve into analysing the application of blockchain as a cybersecurity tool in
mitigating certain cyber risks of AI, in parallel with some of the requirements of the
AI Act. In Sect. 5 we present some closing thoughts before concluding the article.


2 AI: security vulnerabilities and attack vectors

Under the hood, AI systems typically make use of machine learning, logic-based
reasoning, knowledge-driven approaches, target-driven optimisation (given some
fitness function), or some other form of statistical technique. Indeed, the definition
of AI has been debated for decades, and it is not the intention of this paper to add
to this debate, nor to support a particular definition of AI or of what should be
classified as AI. Rather, we discuss solutions that blockchain can offer for many
types of AI systems (and potentially all systems, depending upon one's definition of
AI).
Many such AI systems have the capability to operate within the realm of human-
defined objectives, generating a spectrum of outputs that exert profound influence
over the environments they interact with; consider, for example, AI algorithms used
to moderate, filter, and promote content, which can sway the public narrative.
Through their intrinsic computational prowess, AI systems can manifest as
tools for generating high-quality content, making accurate predictions, offering per-
sonalised recommendations, and rendering impactful decisions. If done right, these
outputs possess the potential to reshape industries, optimise processes across a broad
spectrum of domains and affect the fabric of society [45].
Upon collecting information, AI system engineers need to build into such
systems a profound process of interpretation, potentially leveraging vast knowledge
repositories to extract meaning, identify patterns, and draw insights from past
data and/or the data at hand. Armed with this synthesised understanding, such
systems are used to perform intricate reasoning, contemplating a multitude of factors,
associations, and dependencies to arrive at informed decisions. By integrating logical
frameworks, probabilistic reasoning, and pattern recognition techniques, AI systems
possess the aptitude to unravel complex problems, devise innovative strategies, and
chart a course of action tailored to achieving their prescribed goals [10, 30].
However, AI systems are not impervious to vulnerabilities or weak points, as
they can be targeted by various means, including attacks that exploit their inherent
architecture, limitations, or weaknesses [26]. These attacks can encompass a wide
range of techniques, targeting underlying algorithms, data inputs, which may even
involve exploiting physical components connected to AI systems. The susceptibil-
ity of AI systems particularly arises from their complex and interconnected nature,
which creates many opportunities for adversaries to exploit potential weaknesses in
their design, implementation, or deployment. In certain situations, AI systems may
need specific cybersecurity defence and protection mechanisms to combat adver-
saries [26]. While one cannot ensure a fully secure AI system [51], in the following
sections we take a close look at some prevalent cybersecurity risks concerning AI
systems and how they can be mitigated with the help of blockchain technology.

2.1 AI attack vectors: data and humans

This article does not aim to provide a comprehensive overview of all AI cyber
attacks, as it is a complex and extensive topic that warrants volumes of literature.
Yet, we will focus on specific vulnerabilities and threats, for which blockchain can
be a useful tool. In particular, we discuss data and human factors as potential attack
vectors that can be exploited to target AI systems. The explanations provided are
not exhaustive but serve as illustrative examples to enhance readers’ understanding
in the second part of the article.

2.1.1 Data-focused attacks

Input attacks involve manipulating the inputs fed into an AI system with the
aim of altering the system's output to the attacker's desired outcome [8].
Since AI systems function like 'machines' that take input, perform computations,
and generate output, manipulating the input can enable attackers to influence the
system's output. The importance that data plays throughout the lifecycle of such
systems cannot be overstated: from the building and validation of such systems
to their live operation, data is at the core of the learning process of machine learning
models. One of the most prevalent input attack vectors involves poisoning (i.e.
manipulating) data utilised to train such models [2, 46]. Data poisoning attacks
are a major concern in AI cybersecurity as they can cause substantial damage that
can lead to undesirable socio-economic consequences. Consider a scenario where
a public sector AI system is used to calculate the level of social assistance that
should be given to low-income families. An attacker could then poison the data so
that the system concludes that particular types of families are not entitled to
support.
Likewise, consider an attack scenario where the attacker has gained access to
the training data and is able to manipulate it, for example by introducing incorrect
labels or biased information. This attack exploits the sensitivity of machine learning models to
the quality and integrity of training data. If the attacker can inject poisoned data
that influences the model’s learning process, they can alter its decision boundaries
and compromise its performance [52]. Data poisoning attacks can occur at different
stages, including during data collection, processing, or labelling. Adversaries may
use various techniques, such as injecting biased samples, modifying existing data
points, or even tampering with data within the training pipeline itself [37]. Arguably, data
is the “water, food and air of AI”—and therefore, by poisoning the data, one can attack
the whole (or most) of an AI system [8].
Another similar form of attack targets deep neural networks3. Here, the attacker
introduces subtle modifications in an attempt to manipulate the AI system’s predic-
tions. For example, attacks such as projected gradient descent (PGD) and square
attack exploit the model’s sensitivity to small and carefully crafted perturbations in
the input data, causing the deep neural networks to produce false predictions [49].
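To make the mechanics concrete, below is a schematic numpy sketch of a single step of an L-infinity-bounded PGD attack. It is a simplified illustration only: the loss gradient `grad` would in practice come from backpropagation through the attacked model (omitted here), and the budget values are purely illustrative.

```python
import numpy as np

def pgd_step(x_adv, grad, x_orig, epsilon=0.03, alpha=0.007):
    """One projected-gradient-descent step: nudge the input in the direction
    that increases the model's loss, then project back into an epsilon-ball
    around the original input so the perturbation stays imperceptible."""
    x_adv = x_adv + alpha * np.sign(grad)                       # ascend the loss
    x_adv = np.clip(x_adv, x_orig - epsilon, x_orig + epsilon)  # L-infinity projection
    return np.clip(x_adv, 0.0, 1.0)                             # keep valid pixel range
```

Iterating such a step yields exactly the kind of small, carefully crafted perturbation described above.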
As noted, data alterations can be carefully designed to deceive the system, causing
it to produce incorrect or biased results. These attacks can be challenging to detect,
especially if the modifications are carefully designed to evade detection mechanisms
or to maintain normal system functioning in non-attack scenarios. A non-exhaustive
list of data-focused attacks is presented in Table 1 below.

3 Deep neural networks (DNNs) are a crucial component of the artificial intelligence (AI) landscape due to
their ability to perform complex tasks such as object detection, image classification, language translation,
etc.

Table 1 Description of types of data-focused attacks

Evasion attacks [48]: Manipulating input data to bypass detection or classification
systems, enabling malicious content to go undetected.
Data injection attacks [47]: Inserting malicious or specially crafted data into an AI
system to exploit vulnerabilities or trigger unintended behaviours.
Sensor attacks [8]: Similar to evasion attacks; here an attacker manipulates input
signals from sensors (e.g., cameras, microphones) to deceive AI systems relying on
sensory input, such as in autonomous vehicles or security systems.
Concept drift attacks [3]: Introducing gradual changes in input data distribution to
cause the AI system to make erroneous predictions or fail to adapt to new scenarios.

2.1.2 Human-focused attacks

Attackers may attempt to manipulate or deceive individuals with access to the sys-
tem, such as administrators or users, into revealing sensitive information, sharing
credentials, or performing actions that compromise the system’s security. Likewise,
developers play a key role in building, maintaining, and securing AI systems. Devel-
opers typically have privileged access to underlying code, infrastructure, datasets,
and configuration settings of AI systems. They possess the technical knowledge and
expertise required to modify, update, and maintain such systems. However, their ac-
cess also presents a potential vulnerability that can be exploited by malicious actors
through various means including social engineering.
Consider a code alteration type of attack, where a malicious party gains access
and a modification is made to the code of an AI system (which may include model
parameters) in order to manipulate its behaviour or achieve malicious objectives4.
While this could also be said of other types of systems, one of the main differences
between traditional systems and AI-based systems is that such changes may result
in system behaviour that still seems to be correct. Also, code alterations in high-
risk AI systems can have detrimental consequences for users and society in general.
For example, consider an autonomous driving system that relies on computer vision algorithms to
detect traffic signs. In a code alteration attack, an attacker could modify the source
code responsible for sign recognition to deliberately misclassify stop signs as yield
signs. This alteration could lead to potentially dangerous situations on the road, as
the autonomous vehicle may not respond correctly to the altered signs.
This brings to light the importance of access control protection for developers
and other important stakeholders as an essential security measure. Isaac et al. [24]
maintain that if developers’ access is not properly protected, attackers may gain
unauthorised access to their accounts or exploit their privileges to modify the code,
inject malicious components, or introduce vulnerabilities in the AI system. More-
over, developers often have access to sensitive data used in AI systems. Inadequate
access controls can expose this data to unauthorised access or increase the risk of
data theft, leading to breaches of confidentiality and potential harm to individuals
or organisations.

4 These attacks typically target the underlying algorithms, configurations, or functionality of the AI
system.

3 The AI ACT and cybersecurity

Following the European Commission’s release of its long-awaited proposal for an
AI regulatory framework in April 2021 [16], there has been notable progress among
EU institutions and lawmakers in establishing the EU Artificial Intelligence Act
(hereafter: AI Act). The AI Act aims to fulfil the commitment of EU institutions
to present a unified European regulatory framework addressing the ethical and so-
cietal dimensions of AI. Once enacted, the AI Act will have binding effects on all
27 EU Member States, marking a significant milestone in the regulation of AI at the
European level5.
While the AI Act primarily focuses on ethical and legal aspects of AI, it also
addresses the importance of cybersecurity in AI systems. In relation, the AI Act
emphasises the need for AI systems to be designed and developed with cyberse-
curity in mind6. It requires that AI systems incorporate appropriate technical and
organisational measures to ensure their security and resilience against cyber threats.
For example, the AI Act mandates that AI developers and deployers conduct thor-
ough risk assessments to identify potential cybersecurity risks associated with their
systems [40]. Based on the risk assessment findings, organisations are required to
implement appropriate mitigation measures to reduce the identified risks and en-
hance the cybersecurity posture of the AI system.
Furthermore, the AI Act recognizes the importance of data security in AI systems.
It requires that personal and sensitive data used by AI systems be adequately pro-
tected against unauthorised access, disclosure, alteration, and destruction. The Act
also promotes the use of privacy-enhancing technologies to safeguard data privacy
and confidentiality. Furthermore, it emphasises the importance of transparency and
explainability in AI systems, which includes cybersecurity aspects. It requires that
AI systems be designed in a way that allows auditors and regulators to assess the sys-
tem’s security measures, including cybersecurity controls, to ensure compliance with
regulatory requirements. In the event of a cybersecurity incident or breach involving
an AI system, the AI Act requires incident reporting to the relevant authorities7. It
also encourages cooperation and information sharing among stakeholders to address
and mitigate cybersecurity risks collectively8.

5 Nonetheless, it remains unclear when the AI Act will come into force, given anticipated debate over
a number of contentious issues, including biometrics and foundation models.
6 See Article 15 of the Proposal for a Regulation of the European Parliament and of the Council laying
down Harmonised Rules on Artificial Intelligence and amending certain Union Legislative Acts (Artificial
Intelligence Act), COM(2021) 206 (April 21, 2021).
7 Procedures related to the reporting of serious incidents and of malfunctioning in accordance with
Article 62 of the proposed AI Act.

The AI Act introduces a voluntary AI
conformity assessment framework, which may include cybersecurity criteria. The
framework allows AI systems to obtain certification to demonstrate their compli-
ance with the Act’s requirements, including cybersecurity measures9. The AI Act
designates supervisory authorities responsible for overseeing compliance with the
Act’s provisions, including cybersecurity requirements. The authorities will have the
power to audit, assess, and enforce compliance with the measures outlined in the
Act—aspects of the approach have similarities to what was proposed by the Malta
Digital Innovation Authority [13].
The AI Act categorises AI systems into four risk levels: unacceptable risk10, high
risk, limited risk, and minimal risk. Each category is subject to specific regulatory
requirements, determined by the potential harm they may cause to individuals and
society. The proposal clarifies the scope of high-risk systems by adding a set of
requirements. AI systems listed in Annex III of the AI Act shall be considered high-
risk if they pose a “significant risk” to an individual’s health, safety, or fundamental
rights. For example, high risk AI systems listed in Annex III include those used
for biometrics; management of critical infrastructure; educational and vocational
training; employment, workers management and access to self-employment tools;
access to essential public and private services (such as life and health insurance);
law enforcement; migration, asylum and border control management tools; and the
administration of justice and democratic processes [18]. With the goal of diminishing
risks and cutting down expenses related to risk reduction strategies, our focus in this
article centres primarily on the high-risk category. This specific category not only
holds significance but also offers an avenue for leveraging supplementary measures,
like blockchain-based tools.
It is worth noting that the AI Act and the NIS 2 (Network and Information
Systems) Directive share significant commonalities in terms of cyber security re-
quirements. Both the AI Act and the NIS 2 Directive adopt a risk management ap-
proach to cybersecurity. They emphasise the importance of identifying and assessing
risks associated with AI systems and critical information infrastructure, respectively.
Furthermore, both frameworks impose obligations on relevant stakeholders to ensure
the security of their systems. The AI Act requires AI developers and deployers to
incorporate appropriate technical and organisational measures to ensure the security
and resilience of their AI systems. Similarly, the NIS 2 Directive mandates operators
of essential services and digital service providers to implement robust cyber security
measures to protect critical infrastructure. Likewise, both frameworks designate su-
pervisory authorities responsible for overseeing compliance with their cybersecurity
provisions. These authorities have the power to audit, assess, and enforce compli-
ance with the requirements outlined in the AI Act and the NIS 2 Directive. Among
other things, their role is to ensure that relevant stakeholders adhere to robust cyber-
security practices and measures. In addition, ENISA recently released a report
providing an overview of standards (existing, being drafted, under consideration and
planned) related to the cybersecurity of AI, assessing their coverage and identifying
gaps in standardisation [9]. The report examines the role of cybersecurity within
a set of requirements outlined by the AI Act, such as data, data governance, record
keeping, risk management, etc.

8 See Title 8, Chap. 1 of the AI Act.
9 See Article 42 on Presumption of conformity with certain requirements of the AI Act.
10 Under the AI Act, AI systems that carry “unacceptable risk” are per se prohibited.
Overall, the AI Act recognizes the significance of cybersecurity in AI systems
and establishes measures to ensure their resilience against cyber threats. By incor-
porating cybersecurity requirements, risk assessment and mitigation, data security,
transparency, incident reporting, and compliance mechanisms, the AI Act aims to
promote the safe and secure deployment of AI technologies in the European Union.

4 Blockchain for AI: a tool to achieve compliance with cyber and data
security requirements under the EU AI Act

4.1 Data integrity and immutability

Data integrity and immutability are critical aspects of ensuring the reliability, se-
curity and trustworthiness of AI systems. The AI Act highlights the significance
of employing high-quality training data, unbiased datasets, and ensuring that AI
systems are not trained on discriminatory or illegal data. The Act states that data
quality should be reinforced by the use of tools that verify the source of data and the
integrity of data (i.e. to prove that data has not been manipulated). It also underlines
that access to data should be limited to those specifically positioned to access it.
Article 15 of the AI Act calls for the implementation of “technical solutions to
address AI specific vulnerabilities including, where appropriate, measures to prevent
and control for attacks trying to manipulate the training dataset (‘data poisoning’),
inputs designed to cause the model to make a mistake (‘adversarial examples’), or
model flaws”.
Blockchain technology offers a robust solution to address these concerns by
providing a decentralised and tamper-resistant ledger for securely transferring, stor-
ing and verifying data [53]. Indeed, it must be noted that storing data in a public
blockchain implies that the data would be available for anyone to see, yet various
techniques may be adopted to both: (i) ensure data is kept private (and not
directly stored on a public blockchain); and (ii) ensure data integrity can be upheld
(through storing cryptographic hashes of data on a blockchain).
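As a minimal illustration of the hash-anchoring pattern in point (ii), the Python sketch below computes a dataset fingerprint that could be anchored on-chain. Note that `submit_to_ledger` is a hypothetical placeholder for whatever blockchain client a deployment would actually use, and only the digest and metadata, never the raw data, would go on the ledger.

```python
import hashlib
import time

def dataset_fingerprint(path: str) -> str:
    """Stream a training-data file and compute its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def anchor_record(path: str) -> dict:
    """Build the record destined for the ledger: only the hash and minimal
    metadata go on-chain; the raw dataset itself stays off-chain."""
    return {
        "dataset": path,
        "sha256": dataset_fingerprint(path),
        "timestamp": int(time.time()),
    }

# submit_to_ledger(anchor_record("train.csv"))  # hypothetical client call
# Re-hashing the off-chain file later and comparing it against the anchored
# digest reveals any tampering with the training data.
```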
Blockchain’s immutability feature mitigates these risks by creating a permanent
record of data transactions that cannot be altered or tampered with. When data is
recorded on the blockchain, it is stored across all nodes in the network, forming
a decentralised and synchronised ledger. New data, such as the addition or modifi-
cation of training data, is cryptographically linked to previous transactions, creating
a chain of blocks that is resistant to modification [38]. This ensures that once data
is added to a blockchain, it becomes practically infeasible to alter or manipulate it
without the consensus of the network participants. Any attempts to tamper with the
data would require significant computational power and/or consensus among the
majority of network participants, making it economically and practically infeasible.
Furthermore, applications digitally sign data transmitted to a blockchain, and there-
fore it would be possible for an application to verify whether any data the application
itself has submitted has since been manipulated.
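A minimal sketch of this sign-then-verify pattern is shown below, assuming the widely used Python `cryptography` package; the ledger entry is modelled as a plain dictionary rather than an actual on-chain record, and the payload is illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The application's signing key; in practice this would live in a key store.
key = Ed25519PrivateKey.generate()
payload = b"training-batch-42: <digest of examples>"

# What gets written to the ledger: the payload plus the application's signature.
entry = {"payload": payload, "signature": key.sign(payload)}

def is_untampered(entry: dict, public_key) -> bool:
    """Anyone holding the public key can check the entry was not altered."""
    try:
        public_key.verify(entry["signature"], entry["payload"])
        return True
    except InvalidSignature:
        return False

assert is_untampered(entry, key.public_key())
entry["payload"] = b"training-batch-42: <poisoned digest>"  # simulated manipulation
assert not is_untampered(entry, key.public_key())
```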
These features of immutability and verifiability can further help applications to
comply with the AI Act’s proposition regarding incorporating ‘logs’ in AI-based
systems. The EU emphasises the need for high-risk AI systems to be designed and
developed with capabilities enabling the automatic recording of events (logs) during
the operation of such systems11.
By leveraging blockchain for data integrity, AI systems can maintain a reliable and
verifiable record of the training data used—indeed, as discussed, consideration would
need to be given to the type of blockchain used (public/permissioned/hybrid) and
the extent to which data is stored on the blockchain (e.g., raw data on-chain, or
cryptographic hashes on-chain with off-chain raw data, or some other suitable
configuration). To further emphasise the point, blockchains can record data
provenance, the date and time of recording, and other characteristics. This can also
enable transparency and trust in data sources and provide a means to verify that
AI models are trained on accurate and untampered data. As discussed, indeed, the
characteristics of blockchain technology align with several requirements outlined
in the AI Act, specifically in relation to data and data governance, record-keeping,
transparency, and the provision of information to users.
It is important to note that while blockchains ensure data integrity and immutabil-
ity, they do not guarantee the quality or accuracy of the data itself. Blockchain tech-
nology can provide assurances that the data has not been tampered with, but it does
not address the issue of data bias, incompleteness, or representativeness. Ensuring
the quality and reliability of the data used for training AI systems remains a separate
challenge that requires additional research.

4.2 Data sharing

According to the AI Act: “European common data spaces established by the Com-
mission and the facilitation of data sharing between businesses and with govern-
ment in the public interest will be instrumental to provide trustful, accountable and
nondiscriminatory access to high quality data for the training, validation and testing
of AI systems”.12 Moreover, the AI Act maintains that in order to facilitate the
development of high-risk AI systems, specific actors, including digital innovation
hubs, testing and experimentation facilities, researchers, experts, etc., should have access
to and utilise high-quality datasets within their relevant fields of activities, as per
the guidelines set by this Regulation. In relation, secure data sharing and storing
can become critical concerns when it comes to collaborative AI training systems
involving multiple parties.
Blockchain technology can provide solutions that enable secure data sharing
among parties, facilitating collaboration while maintaining data privacy to a certain
extent. Although still developing, the field of privacy preserving blockchain solu-
tions is on the rise. Bernabe et al. [5] discuss novel privacy-preserving solutions

11 See Article 12 of the AI Act.


12 See paragraph 45 of the AI Act.

K
12 International Cybersecurity Law Review (2024) 5:1–20

for blockchains, where users can remain anonymous and take control of their per-
sonal data following a self-sovereign identity (SSI) model. Moreover, Dusk network
leverages zero-knowledge technologies to allow for transactions on the blockchain
to benefit from confidentiality [11]. In other words, the network acts like a public
record with smart contracts that store relevant information in a confidential fashion,
thus solving the shortcomings of similar platforms, such as Ethereum. Furthermore,
Galal and Youssef [21] build a publicly verifiable and secrecy preserving blockchain
based auction protocol to address privacy concerns.
Blockchain, along with secure multiparty computation (MPC) techniques, can be
used to allow multiple entities to collectively train AI models while keeping their
individual data private, at the same time providing guarantees with respect to the
future verifiability of the data such models were trained on. MPC enables computa-
tion on encrypted data, ensuring that no participant gains access to another party’s
sensitive information [7, 28]. In this case, the blockchain serves as a trusted inter-
mediary that orchestrates the computation and provides guarantees in respect of the
integrity of the training process.
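The toy Python example below illustrates the additive secret-sharing idea at the heart of many MPC protocols: three parties learn an aggregate (here, a sum) without any of them seeing another's private input. It is an illustration of the principle only, not a production protocol, and the blockchain coordination layer discussed above is omitted.

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(value: int, n_parties: int) -> list:
    """Split a private value into additive shares that sum to it mod P.
    Any subset of fewer than n_parties shares reveals nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each party secret-shares its private input among all three parties.
private_inputs = [120, 450, 300]
all_shares = [share(v, 3) for v in private_inputs]

# Party i locally sums the i-th share of every input; it sees only shares.
partial_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]

# Publishing the partial sums reveals the total, but no individual input.
assert sum(partial_sums) % P == sum(private_inputs)
```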
Likewise, through the use of smart contracts, the rules and protocols for data
sharing and collaborative training can be defined and enforced on the blockchain
[35]. Smart contracts could be used to specify the conditions under which data can be
accessed, processed, and shared among the participating entities—yet it is important
to note that control over access to such data needs to be handled by a centralised
component (since all data on a public blockchain is publicly available). This can
help ensure that data sharing occurs in a controlled and auditable manner, promoting
transparency and trust among participants. By leveraging blockchain for auditable
data sharing, participants can retain ownership and more control over their data
(stored off-chain) while still being able to benefit from the collective intelligence and
insights gained through collaborative AI training.
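As a sketch of how such smart-contract rules might look, the Python stand-in below models an on-chain access policy with grant, revoke and check operations and an append-only decision log. All names are illustrative, and a real deployment would express this logic in the contract language of the chosen platform.

```python
from dataclasses import dataclass, field

@dataclass
class DataSharingPolicy:
    """Toy stand-in for an on-chain access policy: it records which parties
    may access which (off-chain) dataset, and logs every decision."""
    grants: dict = field(default_factory=dict)  # dataset_id -> set of party ids
    log: list = field(default_factory=list)     # append-only audit log

    def grant(self, owner: str, dataset_id: str, party: str) -> None:
        self.grants.setdefault(dataset_id, set()).add(party)
        self.log.append(("grant", owner, dataset_id, party))

    def revoke(self, owner: str, dataset_id: str, party: str) -> None:
        self.grants.get(dataset_id, set()).discard(party)
        self.log.append(("revoke", owner, dataset_id, party))

    def may_access(self, dataset_id: str, party: str) -> bool:
        allowed = party in self.grants.get(dataset_id, set())
        self.log.append(("check", dataset_id, party, allowed))
        return allowed
```

In line with the caveat above, the off-chain storage provider would still have to consult may_access before releasing any data; the contract only makes the policy and its decision history tamper-evident.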
The InterPlanetary File System (IPFS) is a distributed file system that provides
a decentralised approach to storing and sharing files across a network [23]. It enables
secure and efficient content addressing, making data retrieval resilient to censorship
and data corruption in a public manner, i.e. all data is publicly available. IPFS uses
content-addressable storage, which ensures that files are uniquely identified by their
content rather than their location, thus enabling tamper-resistant data sharing since
any change in content would result in a different file address (the address and the
content are intimately linked).
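The tamper-evidence property of content addressing can be demonstrated in a few lines of Python. This is a simplification: IPFS derives multihash-based content identifiers (CIDs) over a Merkle DAG rather than bare SHA-256 hex digests, but the principle is the same.

```python
import hashlib

store = {}  # toy content-addressed store

def put(content: bytes) -> str:
    """Store content under an address derived from the content itself."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

addr = put(b"model weights v1")
assert store[addr] == b"model weights v1"

# Any change to the content yields a different address, so a stored address
# doubles as an integrity check: tampering cannot go unnoticed.
assert put(b"model weights v2") != addr
```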
Overall, decentralised data sharing aligns with the principles and objectives out-
lined in the AI Act by promoting transparency and accountability. The AI Act places
importance on data protection and security. Decentralised data sharing can enhance
tamper-resistance by utilising cryptographic techniques, access controls, and
distributed storage mechanisms. By distributing data across a network of nodes, de-
centralised systems reduce the risk of a single point of failure. Moreover, the AI Act
emphasises the rights of individuals regarding their data and the necessity for ob-
taining explicit user consent. Decentralised data sharing aligns with these principles
by giving users greater control over their data. Through decentralised technologies
like blockchain, users can directly manage and grant access to their data, ensuring
that their consent is obtained and that they have a say in how their data is used;
yet the actual storage providers (whether centralised or decentralised) must still be
trusted to release data only when such blockchain-based access control policies are
followed.
Furthermore, the AI Act emphasises the ethical implications of AI systems, in-
cluding fairness, accountability, and non-discrimination. Decentralised data sharing
can support these ethical considerations by enabling collective decision-making,
facilitating consensus, and supporting transparent governance models [36]. These
features promote fairness and accountability and can help prevent discriminatory
practices in data sharing and AI system development, since the actual development
and learning processes become more open and democratised. Likewise, the AI Act promotes inter-
operability and data portability to foster competition and innovation. Decentralised
data sharing can facilitate interoperability by enabling different AI systems to access
and utilise data from various sources in a standardised and tamper-proof manner. It
may also facilitate data portability, as users can easily share their data across differ-
ent platforms or services without being locked into a specific provider’s ecosystem
provided that standardised interfaces or means of connecting such different systems/
data models are made available.

4.3 Auditing and accountability

Auditing and accountability are crucial aspects in ensuring the responsible and eth-
ical deployment of AI systems [32]. Many of today’s AI systems are closed-source.
Without access to the code and algorithmic details, it becomes difficult, if not
infeasible, to identify whether biases exist within models. Moreover, without ac-
cess to source code, external entities such as experts, auditors, or regulatory bodies
face challenges in conducting thorough audits or assessments of a system’s fair-
ness, bias, or potential vulnerabilities. Likewise, code alterations and data poisoning
attacks might be harder to detect in closed systems.
The AI Act states the obligation for ex ante testing, risk management and human
oversight to minimise the risk of erroneous or biased AI-assisted decisions in critical
areas such as education and training, employment, important services, law enforce-
ment, and the judiciary13. The proposed regulation puts a high importance on both
audit and transparency. For example, under Annex 7 the document states that: “the
body shall carry out periodic audits to make sure that the provider maintains and
applies the quality management system and shall provide the provider with an audit
report. In the context of those audits, the notified body may carry out additional tests
of the AI systems for which an EU technical documentation assessment certificate
was issued.”
The AI Act specifies that for high-risk AI systems, the design should prioritise
sufficient transparency to enable users to interpret the system’s output and utilise it
appropriately. As noted, it is essential to establish an appropriate level and form of
transparency to ensure compliance with respective obligations.
In relation, ENISA acknowledges the existing techno-legal gap concerning trans-
parency in AI systems and its importance for security. For example, it maintains
that: “The traceability and lineage of both data and AI components are not fully
addressed. The traceability of processes is addressed by several standards related to
quality. In that regard, ISO 9001 is the cornerstone of quality management. However,
the traceability of data and AI components throughout their life cycles remains an
issue that cuts across most threats and remains largely unaddressed”. The regulatory
document emphasises that documentation in itself is not a security requirement,
and that for a security control, technical documentation is needed to ensure system
transparency.

13 See Sect. 3.5 under fundamental rights of the AI Act.
Blockchain technology offers unique features that can enhance both the trans-
parency and auditability of AI systems, enabling stakeholders to hold them ac-
countable for their actions. One of the key advantages of blockchain is its inherent
transparency. By recording the entire lifecycle of an AI model on the blockchain
(or proof of the lifecycle to minimise on-chain data), including the data sources
used for training, the algorithms employed, and any subsequent updates or modifi-
cations, a verifiable trail is established. This comprehensive record enables auditors
and regulators to trace the decision-making process of the AI system, ensuring that
it adheres to ethical standards, legal requirements, and established guidelines. The
transparency of blockchain-based audit trails can help identify potential biases in
AI systems. Biases can arise from various sources, including biased training data or
discriminatory algorithmic design. With blockchain, relevant stakeholders including
auditors can examine the inputs, processes, and outputs of an AI system and detect
any potential biases or discriminatory patterns. This visibility fosters accountabil-
ity and allows for necessary interventions to mitigate biases and ensure fair and
equitable outcomes. Furthermore, blockchain’s immutability ensures the integrity
and tamper-resistance of the audit trail. Once recorded on the blockchain, the infor-
mation becomes practically unalterable, preventing unauthorised modifications or
tampering.
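A minimal Python sketch of such a hash-linked audit trail follows; a real system would anchor these records on a blockchain and carry richer lifecycle metadata, but the tamper-evidence mechanism, each record committing to the hash of its predecessor, is the same.

```python
import hashlib
import json
import time

def append_event(trail: list, event: dict) -> None:
    """Append a lifecycle event, linking it to the hash of the previous entry
    so that altering any past record breaks every later link."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = {"event": event, "prev": prev_hash, "ts": int(time.time())}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    trail.append(body)

def verify(trail: list) -> bool:
    """Re-derive every link; any retroactive edit is detected."""
    prev = "0" * 64
    for entry in trail:
        body = {k: entry[k] for k in ("event", "prev", "ts")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

trail = []
append_event(trail, {"stage": "training", "data_sha256": "ab12..."})
append_event(trail, {"stage": "deployment", "model_version": "1.0"})
assert verify(trail)
trail[0]["event"]["data_sha256"] = "poisoned"  # simulated tampering
assert not verify(trail)
```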
This feature ensures that the audit trail remains reliable and trustworthy, bol-
stering confidence in the accountability and transparency of AI systems. The use
of blockchain technology also facilitates cross-organizational audits and account-
ability. Multiple stakeholders, including developers, data providers, regulators, and
end-users, can access the blockchain-based audit trail and contribute to the auditing
process. This collaborative approach enhances the effectiveness of audits, promotes
shared responsibility, and strengthens the overall accountability framework surround-
ing AI systems. This is in line with the AI Act and can serve as an effective tool to
enforce reliable and more effective audits. In addition, incorporating blockchain as
a tool could reduce the need for human oversight as noted in Article 14 of the
AI Act—since rules could be encoded into a blockchain system and smart contracts
that guarantee a system’s compliance.
Overall, by leveraging blockchain technology, AI systems can better enforce
auditability requirements specified in the AI Act. The immutability, transparency,
traceability, consensus mechanisms, smart contracts, and data security features of
blockchain contribute to establishing a trustworthy and auditable framework for AI
systems. This enables auditors to examine compliance, fairness, and accountability
aspects of AI operations, promoting transparency and responsible AI development
and deployment.


4.4 Identity and access management

As noted in Sect. 2, identity and access management is a crucial aspect of ensuring
the security of AI systems. Along the same lines, the AI Act specifies the need
for access control policies14, including a description of roles, groups, access rights,
and procedures for granting and revoking access. Under Article 15, the AI
Act aims to ensure that appropriate access control is established in high-risk AI
systems to provide resilience against attempts by unauthorised parties to exploit the
system. A more detailed description of access control is given under the technical
guidelines for the implementation of minimum security measures for digital service
providers by ENISA [20]. The document also underlines the need for a list of
authorised users who can access certain security functions, including keeping logs
of privileged account usage.
Blockchain technology presents an opportunity to enhance identity management
and access control in a secure and decentralised manner. Traditional identity man-
agement systems often rely on centralised authorities or intermediaries to verify and
authenticate users. This centralised approach introduces vulnerabilities and single
points of failure that can be exploited by malicious actors. In contrast, blockchain-
based identity solutions, such as SSI, offer a more secure and user-centered ap-
proach. With SSI, individuals have control over their personal information and dig-
ital identities. For example, the blockchain company Dock utilises SSI technology to
allow people to self-manage their digital identities without depending on third-party
providers to store and manage the data [42]. The solution, however, still links
users with verifiers (e.g., employers, banks, universities) to attest to the validity
of a certain document (e.g., that a student has graduated).
Blockchain enables the creation of unique, tamper-resistant digital identities that
are associated with cryptographic keys. These identities are stored on the blockchain
and can be securely managed by the individuals themselves15. This decentralised
approach can eliminate some of the control that centralised identity providers currently
hold and reduce the risk of unauthorised access or data breaches. Moreover, in the
context of AI systems, blockchain-based identity management can be leveraged to
control access to AI models and data sources.

1. Users: Users can be selectively granted access permissions to specific AI models
or datasets based on predefined rules and smart contracts. This allows for fine-
grained access control, ensuring that only authorised individuals or entities can
interact with the AI system. Users have the ability to maintain control over their
personal data and can choose to disclose only the necessary information to the
AI system. This reduces the reliance on third-party data custodians and minimises
the exposure of sensitive personal data. Furthermore, the immutability and trans-
parency of blockchain records provide a trustworthy audit trail of identity-related
activities. Any changes or updates to identities, access permissions, or transactions
can be recorded on the blockchain, enabling accountability and traceability. This
can be particularly important in regulated environments or scenarios where
compliance with data protection regulations is necessary.
2. Developers: Access control can also refer to the permissions and privileges granted
to specific entities (e.g., developers) interacting with an AI system. It aims to protect
against extraction of sensitive data, prevent unauthorised access and code modification,
and maintain the integrity and confidentiality of the system. In the context
of AI and cybersecurity, access control involves implementing robust authentication
and authorisation mechanisms, establishing fine-grained access policies, and
enforcing secure roles and privileges. Specific parameters can be incorporated to
restrict access to critical systems by leveraging the capabilities of tamper-proof
decentralised infrastructure such as blockchains, smart contracts, and oracles.
Organisations can define access restrictions and conditions by associating private
keys with specific actions or permissions within the AI system; for example, certain
critical system operations or sensitive data access can be tied to specific private
keys, as the sketch after this list illustrates. The blockchain serves as the decentralised
infrastructure that securely stores and manages these private keys. Private
keys can be securely stored in digital wallets or key management systems, with
access controls and encryption mechanisms to prevent unauthorised use or tampering.
The blockchain also records the ownership of, and transactions related to, these
private keys, ensuring transparency and accountability, further reinforcing the AI
Act’s standards on transparency.

14 Access control refers to the process of managing and regulating the permissions and privileges granted
to specific users or entities interacting with an AI system.
15 A potential drawback of this system is that a user can lose their key and may be unable to recover
access.
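The sketch referenced in point 2 follows: a Python illustration of tying a critical operation to a registered key, again assuming the `cryptography` package. The on-chain key registry is mocked as a dictionary holding public keys (private keys would remain in the holders' wallets), and all names are illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Mock of an on-chain registry: which public key may invoke which operation.
admin_key = Ed25519PrivateKey.generate()
registry = {"update_model_weights": admin_key.public_key()}

def authorise(operation: str, signature: bytes) -> bool:
    """Permit a critical operation only if the request is signed with the key
    registered for it; in a full system every attempt would also be logged."""
    holder = registry.get(operation)
    if holder is None:
        return False
    try:
        holder.verify(signature, operation.encode())
        return True
    except InvalidSignature:
        return False

request_sig = admin_key.sign(b"update_model_weights")
assert authorise("update_model_weights", request_sig)

intruder = Ed25519PrivateKey.generate()
assert not authorise("update_model_weights", intruder.sign(b"update_model_weights"))
```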

5 Technology: a complementary tool to achieve legal compliance

In general, the EU position has been in line with implementing “Security-by-design”
mechanisms as a way to improve the overall cybersecurity of digital systems. Se-
curity-by-design is a concept in software engineering and product design that takes
security considerations into account at the early stages of product development (ex-
ante). This includes considering security risks and vulnerabilities at every stage
of development, from architecture and design to implementation, deployment, and
testing [19].
One novel aspect of implementing blockchain for AI is that this tech-
nology allows for the introduction of both ex-ante and ex-post measures that can
reinforce the overall cybersecurity of the system. In regard to our example of
high-risk AI systems, by storing information on a decentralised and tamper-resistant
blockchain, it becomes possible to establish a verifiable and auditable history of
an AI system’s development and behaviour. Overall, the idea behind verifiable and
immutable time-stamps allows for (ex-post) regulatory measures such as auditing
procedures. On the other hand, as an (ex-ante) measure, designing a smart contract
based “Access Control” system would require a predetermined set of characteristics
to be transposed on-chain.
Furthermore, Mueck et al. [34] maintain that for a proper enforcement of cyber
measures (in accordance with the AI Act), there is a need for the establishment of an
AI system architecture that would involve the creation of specific entities (namely: En-
tity for Record Keeping, Entity for Risk Mitigation, Entity for AI Processing, Entity
for AI verification, etc.). The authors argue that the “Entity for Record Keeping”
would be in charge of registering and administering the “loggings” of user
interactions and their connection with data, storage, and other parts of the system.
Similarly, this entity would be in charge of assuring that data was not modified or
altered in any way. As suggested, the “AI System Management Entity” would be
in charge of managing the interaction between the different entities, detecting any
possible issues or undesired behaviour.
While we do not argue against the relevance of establishing suitable regulatory
entities in order to reinforce and comply with the cyber measures in the AI Act, we
argue that blockchain can serve as a useful tool to a) reinforce the effectiveness of
a given entity’s tasks and b) establish a governance mechanism for decision-making
between entities. For example, in both of the situations above, blockchain can be
of help as it can provide a reliable and verifiable record of the data, allowing any
possible alteration to be detected. Via this tool the “Entity for Record Keeping” can have trusted
information on the data provenance, usage, date and time, etc.
blockchain can serve as a useful governance mechanism between different entities.
In other words, blockchain allows for robust governance by providing a distributed
network where multiple entities participate in consensus, allowing for more trans-
parent decision-making processes. For example, if malicious behaviour such
as data poisoning by an unauthorised party is registered by one supervisory en-
tity, the system can signal to other entities to apply further verification. Similarly,
this reduces the ‘single point’ risk should one entity be hacked or become inaccessible.
Likewise, via the usage of smart contracts the decisions of all entities would be
accounted for and automated within the AI system architecture.

6 Limitations

It is important to note that while blockchain technology offers several advantages,
it may not be a suitable solution for all AI-related cyber risks. The implementation
of blockchain in AI systems requires careful consideration of factors like scala-
bility, performance, and the specific requirements of the application. Additionally,
blockchain technology itself is not immune to all cybersecurity threats as noted
by [38] and [41], and proper measures should be taken to secure the underlying
infrastructure and smart contracts associated with the blockchain implementation
[39].

7 Conclusion

In this article, we argue that blockchain technology offers a unique set of properties that can be harnessed to establish transparency, security and enhanced verification in AI systems. As the European Union's regulatory focus intensifies on cybersecurity challenges related to artificial intelligence (AI), in tandem with the AI Act proposal, our objective is to illustrate how blockchain holds the potential to alleviate specific cybersecurity vulnerabilities associated with AI systems. We maintain that the incorporation of blockchain technology can enable specific AI-based systems to align with various provisions delineated in the AI Act. This alignment particularly pertains to aspects such as data and data governance, record-keeping, transparency assurance and access control enforcement. We show how the decentralised and tamper-resistant attributes of blockchain offer solutions to fulfil these requisites, serving as a promising basis for establishing more secure AI systems. The study also explores how blockchain can successfully address certain attack vectors related to AI systems, such as data poisoning in trained AI models and data sets. The overall goal of this analysis is to contribute to the progress of more secure AI implementations, not only within the EU but also globally. We seek to bridge the divide between legal and technical research by providing an interdisciplinary perspective of cybersecurity in the AI domain. Ultimately, the study aims to provide meaningful insights to aid policymakers in making informed decisions regarding the management of cyber risks associated with AI systems.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.

Conflict of interest S. Ramos and J. Ellul declare that they have no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

1. Abeshu A, Chilamkurti N (2018) Deep learning: the frontier for distributed attack detection in fog-to-
things computing. IEEE Commun Mag 56(2):169–175. https://doi.org/10.1109/MCOM.2018.1700332
2. Ahmed IM, Kashmola M, Ahmed IM (2021) Threats on machine learning technique by data poisoning attack: a survey. Adv Cyber Secur. https://doi.org/10.1007/978-981-16-8059-5_36
3. Amin M et al (2023) Cyber security and beyond: detecting malware and concept drift in AI-based
sensor data streams using statistical techniques. Comput Electr Eng 108:108702. https://doi.org/10.
1016/j.compeleceng.2023.108702
4. Andraško J, Mesarčík M, Hamuľák O (2021) The regulatory intersections between artificial intelligence, data protection and cyber security: challenges and opportunities for the EU legal framework. AI Soc 36:623–636. https://doi.org/10.1007/s00146-020-01125-5
5. Bernal Bernabe J et al (2019) Privacy-preserving solutions for blockchain: review and challenges. IEEE
Access 7:164908–164940. https://doi.org/10.1109/ACCESS.2019.2950872
6. Biasin E, Kamenjašević E (2022) Cybersecurity of medical devices: new challenges arising from the AI Act and NIS 2 Directive proposals. Int Cybersecur Law Rev 3:163–180. https://doi.org/10.1365/s43439-022-00054-x
7. Chiang JH et al (2023) Correlated-output-differential-privacy and applications to dark pools
8. Comiter M (2019) Attacking artificial intelligence: AI's security vulnerability and what policymakers can do about it. Belfer Center for Science and International Affairs, Harvard Kennedy School
9. European Union Agency for Cybersecurity (ENISA) (2023) Cybersecurity of AI and standardisation. https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation

10. Dietterich TG (2017) Steps toward robust artificial intelligence. AI Mag 38(3):3–24. https://doi.org/10.1609/aimag.v38i3.2756
11. Cointelegraph (2023) Dusk network tackles financial privacy concerns with daybreak. https://cointelegraph.com/press-releases/dusk-network-tackles-financial-privacy-concerns-with-daybreak
12. Ellul J (2022) Should we regulate Artificial Intelligence or some uses of software? Discov Artif Intell
2(1)
13. Ellul J et al (2023) When is good enough good enough? On software assurances. ERA Forum. https://
doi.org/10.1007/s12027-022-00728-3
14. ENISA Research and Innovation Brief, Artificial Intelligence and Cybersecurity Research. June 2023.
15. EU's Cybersecurity strategy for the digital decade. https://digital-strategy.ec.europa.eu/en/library/eus-cybersecurity-strategy-digital-decade-0
16. European Commission (2021) Proposal for a regulation of the European parliament and of the council
laying down harmonised rules on artificial intelligence and amending certain union legislative acts
(artificial intelligence act). COM(2021) 206
17. European Commission’s High-Level Expert Group on Artificial Intelligence (2018) Definition of AI.
https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf
18. Gibson, Dunn & Crutcher LLP (2023) European parliament adopts its negotiating position on the EU
AI act. https://www.gibsondunn.com/wp-content/uploads/2023/06/european-parliament-adopts-its-
negotiating-position-on-the-eu-ai-act.pdf
19. European Union Agency for Cybersecurity (ENISA) (2023) Artificial intelligence and cybersecurity
research
20. European Union Agency for Cybersecurity (ENISA) (2017) Technical guidelines for the implementa-
tion of minimum security measures for digital service providers
21. Galal HS, Youssef AM (2021) Publicly verifiable and secrecy preserving periodic auctions
22. IDC (2022) Worldwide semiannual artificial intelligence tracker. https://www.idc.com/
23. IPFS (2023) InterPlanetary File System (IPFS). https://ipfs.tech/
24. Isaac ERHP, Reno J (2023) AI product security: a primer for developers
25. Kaloudi N, Li J (2021) The AI-based cyber threat landscape: a survey. ACM Comput Surv 53(1): Article 20. https://doi.org/10.1145/3372823 (34 pages)
26. Li J (2018) Cyber security meets artificial intelligence: a survey. Frontiers Inf Technol Electronic Eng
19:1462–1474. https://doi.org/10.1631/FITEE.1800573
27. Liang B et al (2017) Detecting adversarial image examples in deep networks with adaptive noise reduction. arXiv:1705.08378 (https://arxiv.org/abs/1705.08378)
28. Lindell Y (2021) Secure multiparty computation. Commun ACM 64(1):86–96. https://doi.org/10.1145/
3387108
29. Lohn A (2020) Hacking AI: a primer for Policymakers on machine learning cybersecurity
30. Mahmud H et al (2022) What influences algorithmic decision-making? A systematic literature review
on algorithm aversion. Technol Forecast Soc Change 175:121390
31. Mamoshina P et al (2017) Converging blockchain and next-generation artificial intelligence technolo-
gies to decentralize and accelerate biomedical research and healthcare. Oncotarget 9(5):5665–5690.
https://doi.org/10.18632/oncotarget.22345
32. Markovic M et al (2021) The accountability fabric: a suite of semantic tools for managing AI system
accountability and audit
33. Meng X et al (2017) MCSMGS: Malware classification model based on deep learning. In: Proceed-
ings of the international conference on cyber-enabled distributed computing and knowledge discovery,
pp 272–275
34. Mueck MD, Elazari Bar OA, Du Boispean S (2023) Upcoming European regulations on artificial intel-
ligence and cybersecurity
35. Nassar M et al (2019) Blockchain for explainable and trustworthy artificial intelligence. https://doi.org/
10.1002/widm.1340
36. Neumann V et al (2023) Examining public views on decentralised health data sharing. PLoS ONE 18(3):e0282257. https://doi.org/10.1371/journal.pone.0282257
37. Ramirez MA et al (2022) Poisoning attacks and defences on artificial intelligence: a survey. arXiv
preprint arXiv:2202.10276
38. Ramos S et al (2021) A great disturbance in the crypto: understanding cryptocurrency returns under
attacks. Blockchain Res Appl 2(3):100021. https://doi.org/10.1016/j.bcra.2021.100021
39. Ramos S, Mélon L, Ellul J (2022) Exploring blockchain's cyber security techno-regulatory gap: an application to crypto-asset regulation in the EU. Working paper (27 pages, posted 22 Jul 2022)

40. Regulatory Framework for Artificial Intelligence. European Commission Digital Strategy. https://
digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
41. Saad M et al (2021) Exploring the attack surface of Blockchain: a systematic overview
42. Dock (2023) Self-sovereign identity. https://www.dock.io/post/self-sovereign-identity
43. Shinde R et al (2021) Blockchain for securing AI applications and open innovations. J Open Innov
Technol Mark Complex 7(3):189. https://doi.org/10.3390/joitmc7030189
44. Short AR et al (2020) Using blockchain technologies to improve security in federated learning systems.
In: 2020 IEEE 44th annual computers, software, and applications conference (COMPSAC) Madrid,
pp 1183–1188 https://doi.org/10.1109/COMPSAC48688.2020.00-96
45. Taddeo M, Floridi L (2018) How AI can be a force for good. Science 361:751–752. https://doi.org/10.
1126/science.aat5991
46. Tolpegin V et al (2020) Data poisoning attacks against federated learning systems. Georgia Institute of Technology
47. Tufail S, Batool S, Sarwat AI (2021) False data injection impact analysis in AI-based smart grid. In:
SoutheastCon 2021, pp 1–7 https://doi.org/10.1109/SoutheastCon45413.2021.9401940
48. Evasion attacks on machine learning or adversarial examples. Towards Data Science. https://towardsdatascience.com/evasion-attacks-on-machine-learning-or-adversarial-examples-12f2283e06a1
49. Wang Y et al (2023) Adversarial attacks and defences in machine learning-powered networks: a con-
temporary survey
50. Xin Y et al (2018) Machine learning and deep learning methods for cybersecurity. IEEE Access
6:35365–35381
51. Yampolskiy RV, Spellchecker MS (2016) Artificial intelligence safety and cybersecurity: a timeline of
AI failures
52. Yerlikaya FA, Bahtiyar S (2022) Data poisoning attacks against machine learning algorithms. Expert
Syst Appl 208:118101. https://doi.org/10.1016/j.eswa.2022.118101
53. Zhang C, Wu C, Wang X (2020) Overview of blockchain consensus mechanism. In: Proceedings of
the 2020 2nd international conference on big data engineering BDE 2020, Shanghai. Association for
Computing Machinery, pp 7–12 https://doi.org/10.1145/3404512.3404522

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
