Blockchain For Artificial Intelligence
https://doi.org/10.1365/s43439-023-00107-9
ORIGINAL PAPER
Received: 9 August 2023 / Accepted: 1 December 2023 / Published online: 25 January 2024
© The Author(s) 2024
Simona Ramos
Department of Information and Communication Technologies Engineering (ETIC), University
Pompeu Fabra, Barcelona, Spain
E-Mail: [email protected]
Joshua Ellul
Centre for Distributed Ledger Technologies, University of Malta, Msida, Malta
E-Mail: [email protected]
1 Introduction
1 See section on Open Issues and Challenges under the Artificial Intelligence and Cybersecurity Research brief [14].
2 See Article 15 of the Proposal for a Regulation of the European Parliament and of the Council Lay-
ing Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final
(April 2021).
When it comes to the intersection between blockchain and AI, Shinde et al. (2021) [43] have performed a bibliometric and literature analysis of how blockchain can provide a security blanket to AI-based systems. Likewise, Mamoshina et al. (2017) [31] review emerging blockchain applications specifically targeting the AI area, and identify and discuss open research challenges of utilising blockchain technologies for AI. The authors also propose converging blockchain and next-generation AI technologies as a way to decentralise and accelerate biomedical research and healthcare. Short et al. (2020) [44] examine how blockchain-based technologies can be used to improve security in federated learning systems.
To the best of our knowledge, there is a lack of research examining whether blockchain could serve as a tool for achieving compliance with legal AI cybersecurity requirements. In line with Ellul et al. (2023) [13], who maintain that the problem of technology regulation can also be addressed through the use of technology itself, in the following paragraphs we examine how blockchain can be used to mitigate certain cybersecurity risks and attacks related to high-risk AI systems, and to what extent these measures meet some of the cyber requirements set out in the AI Act.
More specifically, we propose that blockchain can (a) mitigate certain cyber
attacks, such as data poisoning of trained AI models and datasets. Likewise, (b) by employing decentralised infrastructure and blockchain technology, AI systems can
benefit from cryptographically-secured guardrails, reducing the likelihood of misuse
or exploitation for adversarial purposes. Furthermore, we explore (c) how developers
can restrict AI’s access to critical infrastructure through tamper-proof decentralised
infrastructure such as blockchains and smart contracts. Additionally, we examine
(d) how blockchain can enable secure and transparent data sharing mechanisms
through decentralised storage, augmenting data integrity and immutability in AI
systems. Furthermore, we analyse (e) how blockchain facilitates independent audits
and verification of AI systems, ensuring their intended functionality and mitigating
concerns related to bias and malicious behaviour.
By leveraging blockchain technology, AI systems can align with some of the
requirements mandated in the AI Act, specifically in terms of data, data governance,
record-keeping, transparency and access control. Blockchain’s decentralised and
tamper-proof nature helps address some of these requirements, providing a potential
foundation for accountable and trustworthy AI systems. This article sheds light on the potential of blockchain technology in fortifying high-risk AI systems against cyber risks, contributing to the advancement of secure and trustworthy AI deployments (both in the EU and beyond) and guiding policy makers in their decisions concerning AI cyber risk management.
The rest of the paper is organised as follows: in Sect. 2, we provide a general
overview of the cybersecurity risks in AI systems emphasising attack vectors relevant
for our analysis. In Sect. 3, we touch upon the AI Act and cybersecurity. In Sect. 4,
we delve into analysing the application of blockchain as a cybersecurity tool in
mitigating certain cyber risks of AI, in parallel with some of the requirements of the
AI Act. In Sect. 5 we present some closing thoughts before concluding the article.
2 Cybersecurity risks in AI systems
Under the hood, AI systems typically make use of machine learning, logic-based reasoning, knowledge-driven approaches, target-driven optimisation (given some fitness function), or some other form of statistical technique. Indeed, the definition of AI has been debated for decades, and it is not the intention of this paper to add to this debate, nor to support a particular definition of AI or of what should (or should not) be classified as AI. Rather, we discuss solutions that blockchain can offer for many types of AI systems (and potentially all systems, depending upon one's definition of AI).
Many such AI systems have the capability to operate within the realm of human-defined objectives, generating a spectrum of outputs that exert profound influence over the environments they interact with; consider, for example, AI algorithms used to moderate, filter, and promote different content, which can sway the public narrative. Through their intrinsic computational prowess, AI systems can serve as tools for generating high-quality content, making accurate predictions, offering personalised recommendations, and rendering impactful decisions. If done right, these outputs possess the potential to reshape industries, optimise processes across a broad spectrum of domains and affect the fabric of society [45].
Upon collecting information, AI system engineers need to build into such systems a process of interpretation, potentially leveraging vast knowledge repositories to extract meaning, identify patterns, and draw insights from past data and/or the data at hand. Armed with this synthesised understanding, such systems are used to perform intricate reasoning, contemplating a multitude of factors, associations, and dependencies to arrive at informed decisions. By integrating logical frameworks, probabilistic reasoning, and pattern recognition techniques, AI systems possess the aptitude to unravel complex problems, devise innovative strategies, and chart a course of action tailored to achieving their prescribed goals [10, 30].
However, AI systems are not impervious to vulnerabilities or weak points, as they can be targeted by various means, including attacks that exploit their inherent architecture, limitations, or weaknesses [26]. These attacks can encompass a wide range of techniques targeting underlying algorithms or data inputs, and may even involve exploiting physical components connected to AI systems. The susceptibility of AI systems arises in particular from their complex and interconnected nature, which creates many opportunities for adversaries to exploit potential weaknesses in their design, implementation, or deployment. In certain situations, AI systems may need specific cybersecurity defence and protection mechanisms to combat adversaries [26]. While one cannot ensure a fully secure AI system [51], in the following sections we take a close look at some prevalent cybersecurity risks concerning AI systems and how they can be mitigated with the help of blockchain technology.
This article does not aim to provide a comprehensive overview of all AI cyber attacks, as that is a complex and extensive topic that warrants volumes of literature. Rather, we focus on specific vulnerabilities and threats for which blockchain can be a useful tool. In particular, we discuss data and human factors as potential attack vectors that can be exploited to target AI systems. The explanations provided are not exhaustive but serve as illustrative examples to enhance readers' understanding in the second part of the article.
Input attacks involve manipulating the inputs fed into an AI system with the aim of altering the system's output to achieve the attacker's desired outcome [8]. Since AI systems function like 'machines' that take input, perform computations, and generate output, manipulating the input can enable attackers to influence the system's output. The importance that data plays throughout the lifecycle of such systems cannot be overstated: from the building and validation of such systems to their live operation, it is at the core of the learning process of machine learning models. One of the most prevalent input attack vectors involves poisoning (i.e. manipulating) the data utilised to train such models [2, 46]. Data poisoning attacks are a major concern in AI cybersecurity as they can cause substantial damage leading to undesirable socio-economic consequences. Consider a scenario where a public sector AI system is used to calculate the level of social support that should be given to low-income families. An attacker could poison the data so that the system concludes that particular types of families are not entitled to support.
Likewise, consider an attack scenario where the attacker has gained access to the training data and is able to manipulate it, for example by introducing incorrect labels or biased information. This attack exploits the sensitivity of machine learning models to the quality and integrity of their training data. If the attacker can inject poisoned data that influences the model's learning process, they can alter its decision boundaries and compromise its performance [52]. Data poisoning attacks can occur at different stages, including during data collection, processing, or labelling. Adversaries may use various techniques, such as injecting biased samples, modifying existing data points, or even tampering with data within the training pipeline itself [37]. Arguably, data is the “water, food and air of AI”, and therefore, by poisoning the data, one can attack the whole (or most) of an AI system [8].
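To make the mechanics concrete, the following minimal sketch (assuming NumPy and scikit-learn are available; the synthetic dataset and the 30% flip rate are illustrative assumptions, not figures drawn from the literature cited above) shows how an attacker with write access to training labels can degrade a model's performance:

```python
# Minimal sketch of a label-flipping data poisoning attack, assuming
# scikit-learn and NumPy. An attacker who can modify training labels
# degrades the learned decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips 30% of the training labels (an illustrative rate).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```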
A similar form of attack targets deep neural networks.3 Here, the attacker introduces subtle modifications in an attempt to manipulate the AI system's predictions. For example, attacks such as projected gradient descent (PGD) and square attack exploit the model's sensitivity to small and carefully crafted perturbations in the input data, causing deep neural networks to produce incorrect predictions [49]. As noted, data alterations can be carefully designed to deceive the system, causing it to produce incorrect or biased results. These attacks can be challenging to detect, especially if the modifications are carefully designed to evade detection mechanisms.
3 Deep neural networks (DNNs) are a crucial component of the artificial intelligence (AI) landscape due to
their ability to perform complex tasks such as object detection, image classification, language translation,
etc.
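To illustrate the mechanics on a deliberately simple target, the sketch below applies PGD-style signed-gradient steps to a toy linear classifier (the weights, input and step sizes are assumptions chosen for illustration; real attacks of this kind target deep networks, but the core idea of small steps projected into an epsilon-ball around the original input is the same):

```python
# Minimal PGD-style evasion sketch against a toy linear classifier,
# assuming only NumPy. Each step nudges the input against the class score
# and projects the result back into an epsilon-ball around the original.
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # toy model weights (assumed, not trained)
b = 0.1

def predict(x: np.ndarray) -> int:
    return int(x @ w + b > 0)

x = np.array([0.4, 0.0, 0.2])   # benign input, classified as 1

eps, alpha, steps = 0.25, 0.05, 20
x_adv = x.copy()
for _ in range(steps):
    x_adv = x_adv - alpha * np.sign(w)        # step against the class-1 score
    x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball

# A small, bounded perturbation flips the model's output: prints "1 -> 0".
print(predict(x), "->", predict(x_adv))
```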
Attackers may attempt to manipulate or deceive individuals with access to the sys-
tem, such as administrators or users, into revealing sensitive information, sharing
credentials, or performing actions that compromise the system’s security. Likewise,
developers play a key role in building, maintaining, and securing AI systems. Devel-
opers typically have privileged access to underlying code, infrastructure, datasets,
and configuration settings of AI systems. They possess the technical knowledge and
expertise required to modify, update, and maintain such systems. However, their ac-
cess also presents a potential vulnerability that can be exploited by malicious actors
through various means including social engineering.
Consider a code alteration attack, where a malicious party gains access and modifies the code of an AI system (which may include model parameters) in order to manipulate its behaviour or achieve malicious objectives.4 While this could also be said of other types of systems, one of the main differences between traditional systems and AI-based systems is that such changes may result in system behaviour that still seems to be correct. Moreover, code alterations in high-risk AI systems can have detrimental consequences for users and society in general. For example, consider an autonomous driving system that relies on computer vision algorithms to detect traffic signs. In a code alteration attack, an attacker could modify the source code responsible for sign recognition to deliberately misclassify stop signs as yield signs. This alteration could lead to potentially dangerous situations on the road, as the autonomous vehicle may not respond correctly to the altered signs.
This brings to light the importance of access control protection for developers and other key stakeholders as an essential security measure. Isaac et al. [24] maintain that if developers' access is not properly protected, attackers may gain unauthorised access to their accounts or exploit their privileges to modify the code, inject malicious components, or introduce vulnerabilities in the AI system. Moreover, developers often have access to sensitive data used in AI systems. Inadequate access controls can expose this data to unauthorised access or increase the risk of data theft, leading to breaches of confidentiality and potential harm to individuals or organisations.

4 These attacks typically target the underlying algorithms, configurations, or functionality of the AI system.
5 Nonetheless, it remains unclear when the AI Act will come into force, given anticipated debate over […]
6 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and amending certain Union Legislative Acts (Artificial Intelligence Act), COM(2021) 206 (April 21, 2021).
7 Procedures related to the reporting of serious incidents and of malfunctioning in accordance with Article 62 of the AI Act.
3 The AI Act and cybersecurity
ENISA has reviewed existing and planned standards related to the cybersecurity of AI, assessing their coverage and identifying gaps in standardisation [9]. The report examines the role of cybersecurity within a set of requirements outlined by the AI Act, such as data and data governance, record-keeping, and risk management.
Overall, the AI Act recognises the significance of cybersecurity in AI systems
and establishes measures to ensure their resilience against cyber threats. By incor-
porating cybersecurity requirements, risk assessment and mitigation, data security,
transparency, incident reporting, and compliance mechanisms, the AI Act aims to
promote the safe and secure deployment of AI technologies in the European Union.
4 Blockchain for AI: a tool to achieve compliance with cyber and data security requirements under the EU AI Act
Data integrity and immutability are critical aspects of ensuring the reliability, se-
curity and trustworthiness of AI systems. The AI Act highlights the significance
of employing high-quality training data, unbiased datasets, and ensuring that AI
systems are not trained on discriminatory or illegal data. The Act states that data
quality should be reinforced by the use of tools that verify the source of data and the
integrity of data (i.e. to prove that data has not been manipulated). It also underlines
that access to data should be limited to those specifically positioned to access it.
Article 15 of the AI Act calls for the implementation of “technical solutions to
address AI specific vulnerabilities including, where appropriate, measures to prevent
and control for attacks trying to manipulate the training dataset (‘data poisoning’),
inputs designed to cause the model to make a mistake (‘adversarial examples’), or
model flaws”.
Blockchain technology offers a robust solution to address these concerns by providing a decentralised and tamper-resistant ledger for securely transferring, storing and verifying data [53]. It must be noted that storing data on a public blockchain implies that the data is available for anyone to see; however, various techniques may be adopted both to (i) keep data private (by not storing it directly on a public blockchain), and (ii) uphold data integrity (by storing cryptographic hashes of the data on a blockchain).
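A minimal sketch of this hash-anchoring pattern is shown below, using only Python's standard library; the `record_on_chain` call is a hypothetical placeholder for a real blockchain client, not an actual API:

```python
# Minimal sketch of off-chain data with an on-chain integrity anchor.
# Only the SHA-256 digest of a (possibly private) dataset would be
# recorded on-chain; anyone holding the data can recompute and compare.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

dataset = b"label,feature1,feature2\n1,0.3,0.7\n0,0.1,0.2\n"
anchored = digest(dataset)       # this value would be recorded on-chain
# record_on_chain(anchored)      # hypothetical blockchain transaction

# Later, an auditor verifies a received copy against the on-chain anchor.
tampered = dataset.replace(b"1,0.3", b"0,0.3")
print(digest(dataset) == anchored)   # True: the copy matches the anchor
print(digest(tampered) == anchored)  # False: tampering is detected
```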
Blockchain's immutability mitigates these risks by creating a permanent record of data transactions that is extremely difficult to alter or tamper with. When data is recorded on the blockchain, it is stored across all nodes in the network, forming a decentralised and synchronised ledger. New data, such as the addition or modification of training data, is cryptographically linked to previous transactions, creating a chain of blocks that is resistant to modification [38]. This means that once data is added to a blockchain, it becomes practically infeasible to alter or manipulate it without the consensus of the network participants: any attempt to tamper with the data would require significant computational power and/or collusion among the majority of network participants, making it economically and practically infeasible.
Furthermore, applications digitally sign data transmitted to a blockchain, and therefore it is possible for an application to verify whether any data it has itself submitted has since been manipulated.
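A sketch of this sign-and-verify pattern, assuming the Python `cryptography` package is available (the payload contents are illustrative):

```python
# Minimal sketch of an application signing data before submission, so that
# any later manipulation of the retrieved payload is detectable.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

payload = b"model v3 trained on dataset with digest 9f2c..."
signature = private_key.sign(payload)  # submitted alongside the payload

public_key.verify(signature, payload)  # authentic payload: no exception

try:
    public_key.verify(signature, payload + b" (altered)")
except InvalidSignature:
    print("tampering detected")
```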
These features of immutability and verifiability can further help applications comply with the AI Act's proposition regarding the incorporation of 'logs' in AI-based systems. The EU emphasises the need for high-risk AI systems to be designed and developed with capabilities enabling the automatic recording of events (logs) during their operation.11
By leveraging blockchain for data integrity, AI systems can maintain a reliable and verifiable record of the training data used; indeed, as discussed, consideration would need to be given to the type of blockchain used (public/permissioned/hybrid) and the extent to which data is stored on the blockchain (e.g., raw data on-chain, or cryptographic hashes on-chain with off-chain raw data, or some other suitable configuration). To further emphasise the point, blockchains can record data provenance, the date and time of recording, and other characteristics. This can also enable transparency and trust in data sources and provide a means to verify that AI models are trained on accurate and untampered data. Indeed, as discussed, the characteristics of blockchain technology align with several requirements outlined in the AI Act, specifically in relation to data and data governance, record-keeping, transparency, and the provision of information to users.
It is important to note that while blockchains ensure data integrity and immutabil-
ity, they do not guarantee the quality or accuracy of the data itself. Blockchain tech-
nology can provide assurances that the data has not been tampered with, but it does
not address the issue of data bias, incompleteness, or representativeness. Ensuring
the quality and reliability of the data used for training AI systems remains a separate
challenge that requires additional research.
According to the AI Act: “European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and nondiscriminatory access to high quality data for the training, validation and testing of AI systems”.12 Moreover, the AI Act maintains that, in order to facilitate the development of high-risk AI systems, specific actors, including digital innovation hubs, testing experimentation facilities, researchers, experts, etc., should have access to and utilise high-quality datasets within their relevant fields of activity, as per the guidelines set by the Regulation. Relatedly, secure data sharing and storage become critical concerns when it comes to collaborative AI training involving multiple parties.
Blockchain technology can provide solutions that enable secure data sharing among parties, facilitating collaboration while maintaining data privacy to a certain extent. Although still developing, the field of privacy-preserving blockchain solutions is on the rise. Bernabe et al. [5] discuss novel privacy-preserving solutions for blockchains, where users can remain anonymous and take control of their personal data following a self-sovereign identity (SSI) model. Moreover, the Dusk Network leverages zero-knowledge technologies to allow transactions on the blockchain to benefit from confidentiality [11]; in other words, the network acts like a public record with smart contracts that store relevant information in a confidential fashion, addressing shortcomings of similar platforms such as Ethereum. Furthermore, Galal and Youssef [21] build a publicly verifiable and secrecy-preserving blockchain-based auction protocol to address privacy concerns.
Blockchain, along with secure multiparty computation (MPC) techniques, can be used to allow multiple entities to collectively train AI models while keeping their individual data private, while at the same time providing guarantees with respect to the future verifiability of the data such models were trained on. MPC enables joint computation over private inputs, ensuring that no participant gains access to another party's sensitive information [7, 28]. In this case, the blockchain serves as a trusted intermediary that orchestrates the computation and provides guarantees in respect of the integrity of the training process.
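One building block of such schemes is additive secret sharing, sketched below using only the standard library (the three parties and their private values are purely illustrative assumptions; production MPC protocols are considerably more involved):

```python
# Minimal sketch of additive secret sharing, a building block of MPC.
# Each private value is split into random shares that sum to the value
# modulo a large prime; only the aggregate is ever reconstructed.
import secrets

PRIME = 2**61 - 1  # an arbitrary large prime modulus (illustrative choice)

def share(value: int, n_parties: int) -> list[int]:
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three parties hold private values they wish to total jointly.
private_inputs = [120, 340, 95]
all_shares = [share(v, 3) for v in private_inputs]

# Each party locally sums the shares it received (one per input); combining
# the partial sums reveals only the aggregate, never an individual input.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
print(sum(partial_sums) % PRIME)  # 555
```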
Likewise, through the use of smart contracts, the rules and protocols for data sharing and collaborative training can be defined and enforced on the blockchain [35]. Smart contracts could be used to specify the conditions under which data can be accessed, processed, and shared among the participating entities; it is important to note, however, that control over access to such data needs to be handled by a centralised component (since all data on a public blockchain is publicly available). This can help ensure that data sharing occurs in a controlled and auditable manner, promoting transparency and trust among participants. By leveraging blockchain for auditable data sharing, participants can retain ownership and more control over their data (stored off-chain) while still benefiting from the collective intelligence and insights gained through collaborative AI training.
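As a sketch of the kind of access rules such a contract could encode, the following is written in Python purely for illustration (a real deployment would use a contract language such as Solidity; all names here are assumptions):

```python
# Illustrative sketch of data-sharing rules a smart contract could enforce:
# an owner grants access per dataset, and every decision is appended to an
# audit log. The data itself stays off-chain with a provider that honours
# these on-chain decisions.
import time
from dataclasses import dataclass, field

@dataclass
class DataSharingContract:
    owner: str
    permissions: dict = field(default_factory=dict)  # dataset_id -> parties
    event_log: list = field(default_factory=list)    # append-only audit trail

    def _log(self, event: str) -> None:
        self.event_log.append((time.time(), event))

    def grant(self, caller: str, dataset_id: str, party: str) -> None:
        if caller != self.owner:
            raise PermissionError("only the data owner may grant access")
        self.permissions.setdefault(dataset_id, set()).add(party)
        self._log(f"GRANT {dataset_id} -> {party}")

    def can_access(self, dataset_id: str, party: str) -> bool:
        allowed = party in self.permissions.get(dataset_id, set())
        self._log(f"CHECK {dataset_id} by {party}: {allowed}")
        return allowed

contract = DataSharingContract(owner="hospital_A")
contract.grant("hospital_A", "dataset-42", "research_lab")
print(contract.can_access("dataset-42", "research_lab"))   # True
print(contract.can_access("dataset-42", "unknown_party"))  # False
```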
The InterPlanetary File System (IPFS) is a distributed file system that provides a decentralised approach to storing and sharing files across a network [23]. It enables secure and efficient content addressing, making data retrieval resilient to censorship and data corruption, albeit in a public manner (i.e. all data is publicly available). IPFS uses content-addressable storage, which ensures that files are uniquely identified by their content rather than their location, thus enabling tamper-resistant data sharing, since any change in content results in a different file address (the address and the content are intimately linked).
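The following sketch conveys the idea of content addressing using a plain SHA-256 digest (real IPFS identifiers are multihash-encoded CIDs, so this is a simplification):

```python
# Minimal sketch of content addressing: the address is derived from the
# content, so any change to the content yields a different address.
import hashlib

def content_address(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

original = b"training image bytes ..."
modified = b"training image bytes ... (one pixel changed)"

print(content_address(original))
print(content_address(modified))
print(content_address(original) == content_address(modified))  # False
```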
Overall, decentralised data sharing aligns with the principles and objectives out-
lined in the AI Act by promoting transparency and accountability. The AI Act places
importance on data protection and security. Decentralised data sharing can enhance
data tamper-proofness by utilising cryptographic techniques, access controls, and
distributed storage mechanisms. By distributing data across a network of nodes, de-
centralised systems reduce the risk of a single point of failure. Moreover, the AI Act
emphasises the rights of individuals regarding their data and the necessity for ob-
taining explicit user consent. Decentralised data sharing aligns with these principles
by giving users greater control over their data. Through decentralised technologies
like blockchain, users can directly manage and grant access to their data, ensuring that their consent is obtained and that they have a say in how their data is used. Nevertheless, the actual storage providers (whether centralised or decentralised) must still be trusted to release data only when such blockchain-based access control policies are followed.
Furthermore, the AI Act emphasises the ethical implications of AI systems, including fairness, accountability, and non-discrimination. Decentralised data sharing can support these ethical considerations by enabling collective decision-making, facilitating consensus, and supporting transparent governance models [36]. These features promote fairness and accountability, and can help prevent discriminatory practices in data sharing and AI system development, since the actual development and learning processes become more open and democratised. Likewise, the AI Act promotes interoperability and data portability to foster competition and innovation. Decentralised data sharing can facilitate interoperability by enabling different AI systems to access and utilise data from various sources in a standardised and tamper-proof manner. It may also facilitate data portability, as users can easily share their data across different platforms or services without being locked into a specific provider's ecosystem, provided that standardised interfaces or means of connecting such different systems and data models are made available.
Auditing and accountability are crucial to ensuring the responsible and ethical deployment of AI systems [32]. Many of today's AI systems are closed-source. Without access to the code and algorithmic details, it becomes difficult, if not infeasible, to identify whether biases exist within models. Moreover, without access to source code, external entities such as experts, auditors, or regulatory bodies face challenges in conducting thorough audits or assessments of a system's fairness, bias, or potential vulnerabilities. Likewise, code alterations and data poisoning attacks may be harder to detect in closed systems.
The AI Act establishes obligations for ex ante testing, risk management and human oversight to minimise the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement, and the judiciary.13 The proposed regulation places high importance on both auditing and transparency. For example, under Annex VII the document states that: “the body shall carry out periodic audits to make sure that the provider maintains and applies the quality management system and shall provide the provider with an audit report. In the context of those audits, the notified body may carry out additional tests of the AI systems for which an EU technical documentation assessment certificate was issued.”
The AI Act specifies that for high-risk AI systems, the design should prioritise
sufficient transparency to enable users to interpret the system’s output and utilise it
appropriately. As noted, it is essential to establish an appropriate level and form of
transparency to ensure compliance with respective obligations.
Relatedly, ENISA acknowledges the existing techno-legal gap concerning transparency in AI systems and its importance for security. For example, it maintains that: “The traceability and lineage of both data and AI components are not fully addressed. The traceability of processes is addressed by several standards related to quality. In that regard, ISO 9001 is the cornerstone of quality management. However, the traceability of data and AI components throughout their life cycles remains an issue that cuts across most threats and remains largely unaddressed”. The document emphasises that documentation in itself is not a security requirement, and that, for a security control, technical documentation is needed to ensure system transparency.
Blockchain technology offers unique features that can enhance both the transparency and auditability of AI systems, enabling stakeholders to hold them accountable for their actions. One of the key advantages of blockchain is its inherent transparency. By recording the entire lifecycle of an AI model on the blockchain (or proof of the lifecycle, to minimise on-chain data), including the data sources used for training, the algorithms employed, and any subsequent updates or modifications, a verifiable trail is established. This comprehensive record enables auditors and regulators to trace the decision-making process of the AI system, ensuring that it adheres to ethical standards, legal requirements, and established guidelines.

The transparency of blockchain-based audit trails can help identify potential biases in AI systems. Biases can arise from various sources, including biased training data or discriminatory algorithmic design. With blockchain, relevant stakeholders, including auditors, can examine the inputs, processes, and outputs of an AI system and detect any potential biases or discriminatory patterns. This visibility fosters accountability and allows for necessary interventions to mitigate biases and ensure fair and equitable outcomes.

Furthermore, blockchain's immutability ensures the integrity and tamper-resistance of the audit trail. Once recorded on the blockchain, the information becomes practically unalterable, preventing unauthorised modifications or tampering.
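A minimal sketch of such a lifecycle audit trail, modelled as a hash chain using only the standard library (the event contents are illustrative assumptions, and a real system would anchor these records on an actual blockchain rather than in an in-memory list):

```python
# Minimal sketch of a hash-chained audit trail for an AI model's lifecycle.
# Each record is linked to the previous record's hash, so any retroactive
# edit breaks every later link and is detected on verification.
import hashlib
import json
import time

def record(trail: list, event: dict) -> None:
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = {"event": event, "prev": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    trail.append(body)

def verify(trail: list) -> bool:
    prev = "0" * 64
    for rec in trail:
        body = {k: rec[k] for k in ("event", "prev", "ts")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

trail: list = []
record(trail, {"type": "dataset", "sha256": "9f2c..."})
record(trail, {"type": "training_run", "algorithm": "logistic_regression"})
record(trail, {"type": "model_update", "version": "1.1"})

print(verify(trail))                      # True
trail[0]["event"]["sha256"] = "aaaa..."   # a retroactive edit...
print(verify(trail))                      # False: the chain no longer verifies
```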
This immutability ensures that the audit trail remains reliable and trustworthy, bolstering confidence in the accountability and transparency of AI systems. The use of blockchain technology also facilitates cross-organisational audits and accountability. Multiple stakeholders, including developers, data providers, regulators, and end-users, can access the blockchain-based audit trail and contribute to the auditing process. This collaborative approach enhances the effectiveness of audits, promotes shared responsibility, and strengthens the overall accountability framework surrounding AI systems. This is in line with the AI Act and can serve as an effective tool to enforce reliable and more effective audits. In addition, incorporating blockchain as a tool could reduce the need for human oversight, as noted in Article 14 of the AI Act, since rules could be encoded into a blockchain system and smart contracts that help guarantee a system's compliance.
Overall, by leveraging blockchain technology, AI systems can better enforce
auditability requirements specified in the AI Act. The immutability, transparency,
traceability, consensus mechanisms, smart contracts, and data security features of
blockchain contribute to establishing a trustworthy and auditable framework for AI
systems. This enables auditors to examine compliance, fairness, and accountability
aspects of AI operations, promoting transparency and responsible AI development
and deployment.
14 Access control refers to the process of managing and regulating the permissions and privileges granted
to specific users or entities interacting with an AI system.
15 A potential drawback of this system is that a user who loses their key may not be able to recover their access.
5 Closing thoughts
Mueck et al. [34] propose establishing a set of dedicated entities for the governance of AI systems (Entity for Record Keeping, Entity for Risk Mitigation, Entity for AI Processing, Entity for AI Verification, etc.). The authors argue that the "Entity for Record Keeping" needs to be in charge of registering and administering the 'loggings' of user interactions and their connection with data, storage, and other parts of the system. Similarly, this entity would be in charge of ensuring that data has not been modified or altered in any way. As suggested, the "AI System Management Entity" would be in charge of managing the interaction between the different entities, detecting any possible issues or undesired behaviour.
While we do not argue against the relevance of establishing suitable regulatory entities in order to reinforce and comply with the cyber measures in the AI Act, we argue that blockchain can serve as a useful tool to (a) reinforce the effectiveness of a given entity's tasks and (b) establish a governance mechanism for decision-making between entities. For example, in both of the situations above, blockchain can be of help, as it can provide a reliable and verifiable record of the data, detecting any possible alteration. Via this tool, the "Entity for Record Keeping" can have trusted information on data provenance, usage, date and time, etc. In the second case, blockchain can serve as a useful governance mechanism between different entities. In other words, blockchain allows for robust governance by providing a distributed network where multiple entities participate in consensus, allowing for more transparent decision-making processes. For example, if malicious behaviour such as data poisoning by an unauthorised party is registered by one supervisory entity, the system can signal to other entities to apply further verification. Similarly, this reduces the single-point-of-failure risk should one entity be hacked or become inaccessible. Likewise, via the use of smart contracts, the decisions of all entities would be accounted for and automated within the AI system architecture.
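As a sketch of such a governance mechanism, the following illustrates a quorum rule of the kind a smart contract could encode (written in Python for illustration; the entity names and the two-of-three threshold are assumptions):

```python
# Illustrative sketch of multi-entity governance: an alert triggers further
# verification only once a quorum of supervisory entities confirms it,
# reducing reliance on any single party.
from dataclasses import dataclass, field

@dataclass
class GovernanceContract:
    entities: set
    quorum: int
    confirmations: dict = field(default_factory=dict)  # alert_id -> voters

    def confirm(self, entity: str, alert_id: str) -> bool:
        if entity not in self.entities:
            raise PermissionError("unknown supervisory entity")
        votes = self.confirmations.setdefault(alert_id, set())
        votes.add(entity)
        return len(votes) >= self.quorum  # True once the quorum is reached

contract = GovernanceContract(
    entities={"record_keeping", "risk_mitigation", "ai_verification"},
    quorum=2,
)

print(contract.confirm("record_keeping", "data-poisoning-042"))   # False (1/2)
print(contract.confirm("ai_verification", "data-poisoning-042"))  # True  (2/2)
```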
6 Limitations
7 Conclusion
In this article, we argue that blockchain technology offers a unique set of properties that can be harnessed to establish transparency, security and enhanced verification in AI systems. As the European Union's regulatory focus intensifies on cybersecurity challenges related to Artificial Intelligence (AI), in tandem with the AI Act proposal, our objective has been to illustrate how blockchain holds the potential to alleviate certain cybersecurity risks associated with high-risk AI systems.
Conflict of interest S. Ramos and J. Ellul declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
1. Abeshu A, Chilamkurti N (2018) Deep learning: the frontier for distributed attack detection in fog-to-
things computing. IEEE Commun Mag 56(2):169–175. https://doi.org/10.1109/MCOM.2018.1700332
2. Ahmed IM, Kashmola M (2021) Threats on machine learning technique by data poisoning attack: a survey. Adv Cyber Secur. https://doi.org/10.1007/978-981-16-8059-5_36
3. Amin M et al (2023) Cyber security and beyond: detecting malware and concept drift in AI-based
sensor data streams using statistical techniques. Comput Electr Eng 108:108702. https://doi.org/10.
1016/j.compeleceng.2023.108702
4. Andraško J, Mesarčík M, Hamuľák O (2021) The regulatory intersections between artificial intelligence, data protection and cyber security: challenges and opportunities for the EU legal framework. AI Soc 36:623–636. https://doi.org/10.1007/s00146-020-01125-5
5. Bernal Bernabe J et al (2019) Privacy-preserving solutions for blockchain: review and challenges. IEEE
Access 7:164908–164940. https://doi.org/10.1109/ACCESS.2019.2950872
6. Biasin E, Kamenjašević E (2022) Cybersecurity of medical devices: new challenges arising from the AI Act and NIS 2 Directive proposals. Int Cybersecur Law Rev 3:163–180. https://doi.org/10.1365/s43439-022-00054-x
7. Chiang JH et al (2023) Correlated-output-differential-privacy and applications to dark pools
8. Comiter M (2019) Attacking artificial intelligence: AI's security vulnerability and what policymakers can do about it. Belfer Center for Science and International Affairs, Harvard Kennedy School
9. European Union Agency for Cybersecurity (ENISA) (2023) Cybersecurity of AI and standardisation. https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation
10. Dietterich TG (2017) Steps toward robust artificial intelligence. AI Mag 38(3):3–24. https://doi.org/10.1609/aimag.v38i3.2756
11. Cointelegraph (2023) Dusk network tackles financial privacy concerns with daybreak. https://cointelegraph.com/press-releases/dusk-network-tackles-financial-privacy-concerns-with-daybreak
12. Ellul J (2022) Should we regulate Artificial Intelligence or some uses of software? Discov Artif Intell
2(1)
13. Ellul J et al (2023) When is good enough good enough? On software assurances. ERA Forum. https://
doi.org/10.1007/s12027-022-00728-3
14. ENISA Research and Innovation Brief, Artificial Intelligence and Cybersecurity Research. June 2023.
15. EU's Cybersecurity strategy for the digital decade. https://digital-strategy.ec.europa.eu/en/library/eus-cybersecurity-strategy-digital-decade-0
16. European Commission (2021) Proposal for a regulation of the European parliament and of the council
laying down harmonised rules on artificial intelligence and amending certain union legislative acts
(artificial intelligence act). COM(2021) 206
17. European Commission’s High-Level Expert Group on Artificial Intelligence (2018) Definition of AI.
https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf
18. Gibson, Dunn & Crutcher LLP (2023) European parliament adopts its negotiating position on the EU
AI act. https://www.gibsondunn.com/wp-content/uploads/2023/06/european-parliament-adopts-its-
negotiating-position-on-the-eu-ai-act.pdf
19. European Union Agency for Cybersecurity (ENISA) (2023) Artificial intelligence and cybersecurity
research
20. European Union Agency for Cybersecurity (ENISA) (2017) Technical guidelines for the implementa-
tion of minimum security measures for digital service providers
21. Galal HS, Youssef AM (2021) Publicly verifiable and secrecy preserving periodic auctions
22. IDC (2022) IDC worldwide semiannual artificial intelligence tracker. https://www.idc.com/
23. IPFS (2023) InterPlanetary File System (IPFS). https://ipfs.tech/
24. Isaac ERHP, Reno J (2023) AI product security: a primer for developers
25. Kaloudi N, Jingyue L (2021) The AI-based cyber threat landscape: a survey. ACM Comput Surv
53(1):Article 20. https://doi.org/10.1145/3372823 (34 pages)
26. Li J (2018) Cyber security meets artificial intelligence: a survey. Frontiers Inf Technol Electronic Eng
19:1462–1474. https://doi.org/10.1631/FITEE.1800573
27. Liang B et al (2017) Detecting adversarial image examples in deep networks with adaptive noise re-
duction. arXiv: 1705.08378 (https://arxiv.org/abs/1705.08378)
28. Lindell Y (2021) Secure multiparty computation. Commun ACM 64(1):86–96. https://doi.org/10.1145/
3387108
29. Lohn A (2020) Hacking AI: a primer for policymakers on machine learning cybersecurity
30. Mahmud H et al (2022) What influences algorithmic decision-making? A systematic literature review
on algorithm aversion. Technol Forecast Soc Change 175:121390
31. Mamoshina P et al (2017) Converging blockchain and next-generation artificial intelligence technolo-
gies to decentralize and accelerate biomedical research and healthcare. Oncotarget 9(5):5665–5690.
https://doi.org/10.18632/oncotarget.22345
32. Markovic M et al (2021) The accountability fabric: a suite of semantic tools for managing AI system
accountability and audit
33. Meng X et al (2017) MCSMGS: Malware classification model based on deep learning. In: Proceed-
ings of the international conference on cyber-enabled distributed computing and knowledge discovery,
pp 272–275
34. Mueck MD, Elazari Bar OA, Du Boispean S (2023) Upcoming European regulations on artificial intel-
ligence and cybersecurity
35. Nassar M et al (2019) Blockchain for explainable and trustworthy artificial intelligence. https://doi.org/
10.1002/widm.1340
36. Neumann V et al (2023) Examining public views on decentralised health data sharing. PLoS ONE 18(3):e0282257. https://doi.org/10.1371/journal.pone.0282257
37. Ramirez MA et al (2022) Poisoning attacks and defences on artificial intelligence: a survey. arXiv
preprint arXiv:2202.10276
38. Ramos S et al (2021) A great disturbance in the crypto: understanding cryptocurrency returns under
attacks. Blockchain Res Appl 2(3):100021. https://doi.org/10.1016/j.bcra.2021.100021
39. Ramos S, Mélon L, Ellul J (2022) Exploring blockchains cyber security techno-regulatory gap: an application to crypto-asset regulation in the EU. SSRN
40. Regulatory Framework for Artificial Intelligence. European Commission Digital Strategy. https://
digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
41. Saad M et al (2021) Exploring the attack surface of Blockchain: a systematic overview
42. Dock (2023) Self-sovereign identity. https://www.dock.io/post/self-sovereign-identity
43. Shinde R et al (2021) Blockchain for securing AI applications and open innovations. J Open Innov
Technol Mark Complex 7(3):189. https://doi.org/10.3390/joitmc7030189
44. Short AR et al (2020) Using blockchain technologies to improve security in federated learning systems.
In: 2020 IEEE 44th annual computers, software, and applications conference (COMPSAC) Madrid,
pp 1183–1188 https://doi.org/10.1109/COMPSAC48688.2020.00-96
45. Taddeo M, Floridi L (2018) How AI can be a force for good. Science 361:751–752. https://doi.org/10.
1126/science.aat5991
46. Tolpegin V et al (2020) Data poisoning attacks against federated learning systems. Georgia Institute of Technology
47. Tufail S, Batool S, Sarwat AI (2021) False data injection impact analysis in AI-based smart grid. In:
SoutheastCon 2021, pp 1–7 https://doi.org/10.1109/SoutheastCon45413.2021.9401940
48. Evasion attacks on machine learning or adversarial examples. Towards Data Science. https://towardsdatascience.com/evasion-attacks-on-machine-learning-or-adversarial-examples-12f2283e06a1
49. Wang Y et al (2023) Adversarial attacks and defences in machine learning-powered networks: a con-
temporary survey
50. Xin Y et al (2018) Machine learning and deep learning methods for cybersecurity. IEEE Access
6:35365–35381
51. Yampolskiy RV, Spellchecker MS (2016) Artificial intelligence safety and cybersecurity: a timeline of
AI failures
52. Yerlikaya FA, Bahtiyar S (2022) Data poisoning attacks against machine learning algorithms. Expert
Syst Appl 208:118101. https://doi.org/10.1016/j.eswa.2022.118101
53. Zhang C, Wu C, Wang X (2020) Overview of blockchain consensus mechanism. In: Proceedings of
the 2020 2nd international conference on big data engineering BDE 2020, Shanghai. Association for
Computing Machinery, pp 7–12 https://doi.org/10.1145/3404512.3404522
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.