IFC Report No 18

Governance and implementation of artificial intelligence in central banks

2024 survey conducted by the Irving Fisher Committee on Central Bank Statistics (IFC)

April 2025
Contributors to the IFC report1
Rafael Schmidt
Olivier Sirello
Bruno Tissot
BIS & Irving Fisher Committee on Central Bank Statistics (IFC)
© Bank for International Settlements 2025. All rights reserved. Brief excerpts may be reproduced or
translated provided the source is stated.
1 Respectively, Economist ([email protected]); Head of IT, Monetary and Economic Department ([email protected]); Statistical Analyst ([email protected]); Head of BIS Statistics and Head of the IFC Secretariat ([email protected]); and Economic-Financial Analyst ([email protected]).
The views expressed are those of the authors and do not necessarily reflect the views of the BIS, the CBC, the IFC, its members
or the other institutions mentioned in this report.
The authors are grateful for comments and suggestions from Juan Pablo Cova, Robert Kirchner, Michael Machuene Manamela,
Alberto Naudon and participants at the 4th IFC-Bank of Italy Workshop on “Data Science in Central Banking” (February 2025).
They also thank Márcia Cavalinhos, Nicola Faessler and Ilaria Mattei for editorial support.
Contents

Executive summary
1. Introduction
Box C: Cloud services and AI in central banks: balancing benefits and risks
5. A roadmap to promote innovation in the evolving data and technological landscape
References
Against this setting, exploring AI has become strategically important for central banks, as
highlighted by a recent survey of IFC members. They appear to be rapidly embracing innovative data
science techniques in their activities, including for economic research and statistics. They are in particular
actively experimenting with generative AI to enhance various tasks, such as information retrieval, computer
programming and data analytics.
Yet, below the surface, many central banks are still in the initial adoption phase. This raises the
central question of how AI can be effectively and responsibly used in their production processes.
A first lesson from central banks’ experience is that the deployment of this technology must
be complemented by adequate governance, which is taking shape gradually in most jurisdictions. Thus
far, AI projects are primarily implemented in a decentralised way. This is certainly important to tailor
applications to user needs, but it may create difficulties in terms of coordination and risk mitigation. In
particular, there are significant concerns about privacy protection, cyber security weaknesses, skills
shortage and ethical biases – topics that are arguably best addressed comprehensively at the overall
institution level. To tackle these issues effectively, central banks can benefit from their long-established,
robust and extensive experience with data management and governance, especially in the context of their
official statistical functions.
A second lesson is that the implementation of AI presents several trade-offs in terms of IT
infrastructure. One reflects the pressing need to access more computational power, which can be costly,
especially in terms of IT infrastructure. Cloud services could be one solution, but their use may remain
constrained by security and sovereignty concerns. Another important issue relates to the choice of closed
versus open source AI models. While the former may be easier to deploy and maintain, the latter can be
more cost-effective and less dependent on external vendors. A further trade-off is the deployment of in-
house solutions versus off-the-shelf products, each of them featuring advantages and disadvantages in
terms of customisation of applications, security, implementation costs and deployment rapidity.
Finally, another important takeaway is that leveraging innovation effectively calls for making
further progress on more “traditional” data management issues. This reflects the fact that AI-
generated outputs intrinsically depend on the quality of their underlying data inputs – the well-known
“garbage in, garbage out” principle. From this perspective, making the most of AI opportunities requires
improving the various phases of the data life cycle, from production, validation, integration and storage
to dissemination and use.
Key priorities looking ahead could include (i) curating the quality of data and metadata to
ensure their transparency, traceability and machine readability; (ii) improving the global data infrastructure
by enhancing data access and adequate sharing and exchange of best practices among relevant
stakeholders; (iii) developing modern, metadata-driven and standardised data processes and systems; and
(iv) advancing user literacy in AI and general data issues.
In recent years, data science has driven remarkable innovations offering new tools and techniques for
analysing large and complex data sets (IFC (2023a)). Among these, AI, coupled with the growing availability
of computational power, has great potential to reshape central banks’ activities (BIS (2024)).
Clearly, AI in central banking is not new. This technology refers to “a machine-based system
that […] infers, from the input it receives, how to generate outputs” (OECD (2024a)). As such, it makes it possible to
leverage the increasing availability of computational resources to address tasks that traditionally require
human intervention (FSB (2017)). Central banks have implemented AI in their workflows for many years
already (IFC (2015a)). An example is their application of machine learning (ML) – a subset of AI based on
the use of complex algorithms trained on vast amounts of data – which can perform big data analytics,
with notable benefits for economic analysis, research and statistics (IFC (2021a, 2022)).
But the recent advent of generative AI – a class of AI that learns the patterns and structure of
input content such as text, images, audio and video (“training data”) and generates new but similar data
without requiring much user expertise – has pushed the adoption of this technology even further.
Specifically, the rise of large language models (LLMs) has made it possible to generate human-like sentences
thanks to natural language processing (NLP) techniques that capture the relationships between words. A
growing number of central banks have begun experimenting with these new tools, ranging from chatbots
to assistants for coding and data analytics (Araujo et al (2024), Kwon et al (2024)).
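The idea that NLP techniques capture relationships between words can be illustrated with a deliberately simple sketch: representing sentences as word-count vectors and comparing them with cosine similarity. The corpus below is invented for illustration; LLMs rely on far richer learned representations, but the intuition that related texts share vocabulary is the same.

```python
from collections import Counter
from math import sqrt

# Toy corpus of central bank-style sentences (illustrative only).
corpus = [
    "inflation pressures remain elevated and policy stays restrictive",
    "inflation expectations are anchored while policy remains restrictive",
    "payment systems operated smoothly throughout the period",
]

def vectorize(text, vocab):
    """Represent a sentence as a vector of word counts over the vocabulary."""
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors (0 = unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = sorted({w for doc in corpus for w in doc.split()})
vecs = [vectorize(doc, vocab) for doc in corpus]

# The two inflation-related sentences share vocabulary, so their similarity
# is higher than either one's similarity to the payments sentence.
sim_01 = cosine(vecs[0], vecs[1])
sim_02 = cosine(vecs[0], vecs[2])
print(sim_01, sim_02)
```
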
Despite its potential benefits, the embrace of AI raises a number of questions.
First, a prominent issue relates to its responsible and safe use. This technology is not free of
limitations and risks, ranging from the generation of inaccurate, erroneous or biased outputs to the risk of
increasing dependency on third-party providers. These issues could easily damage the reputation and
credibility of central banks in their roles of producing reliable information and conducting evidence-based
policies. Dealing with these challenges clearly puts a premium on developing well established and
transparent principles ensuring adequate risk management and governance (CGRM (2025)). Fortunately,
central banks can capitalise on their expertise in managing data as producers of official statistics, in
particular to enhance their quality, availability, usability, integrity and security across the organisation (IFC
(2021b), UNECE (2024a)). Concretely, a set of key practices and rules have been developed to ensure
transparent and adequate data ownership, adherence to common standards, third-party reviews and audit
trails – with the essential foundation provided by universally accepted principles such as the Fundamental
Principles of Official Statistics (UN (2014)).
Second, from a more operational perspective, the development and deployment of AI may
require the adaptation of existing IT systems. One issue is access to sufficient computational resources.
Another is ensuring the availability and reliability of data platforms to store, manage and analyse the
wealth of both structured and unstructured data, which keeps expanding rapidly with the digitalisation of
the economy. A related concern has to do with the adoption of cloud services, which can be a cost-effective
– if not unique – way to perform complex big data analytics but which is not without drawbacks. Reflecting
the above, AI implementation is characterised by various trade-offs, not least in terms of security,
performance and scalability.
Another important topic for central banks is how to adapt to the evolving data and
technological landscape, which arguably calls for a better global data infrastructure based in particular
on well established statistical standards, methodologies, identifiers and registers. Priority tasks appear to
involve fostering better data access, sharing and collaboration; modernising data platforms; and
disseminating high-quality, standardised and machine-readable data and metadata, for instance by
Central banks have long explored data science innovations, including AI. Yet the recent advent of
generative AI – and in particular LLMs – has renewed interest in how to harvest the benefits of this
innovative technology that can be more easily accessed by non-specialists. As a result, resources are
increasingly being mobilised to implement various use cases across the many areas of central banking.
However, full-scale adoption appears to have remained relatively low and mostly limited to pilot
projects so far.
Central banks are expressing a growing interest in AI according to the IFC survey, which gathered
responses from 60 jurisdictions across all continents (Graph 1.A). Perhaps more notably, a large number
of central banks view AI and ML as a strategic issue (Graph 1.B), with almost half of the respondents ranking
it as a top priority, especially in Asia.
Clearly, there are many reasons why central banks place AI high on their agenda. Versatility
is a first key motive, as this technology can support a broad range of central bank-specific activities,
spanning from statistical compilation to macroeconomic and financial monitoring (Araujo et al (2024)). A
common use case is to extract quantitative insights from textual information, such as the discussions by
the Federal Open Market Committee (FOMC) of the US Federal Reserve Board (Dunn et al (2024)). Another
use case relates to new analytical capabilities, for example to identify specific patterns in data such as
market dysfunction episodes (Aquilina et al (forthcoming)). Other reasons are that AI can significantly
augment programming capabilities (Gambacorta et al (2024)) 3 and help automate existing workflows,
paving the way for increased efficiency and productivity, especially for administrative tasks (Spencer
(2024)).
2 See Annexes 1 and 2 for the survey questions and the list of respondent IFC jurisdictions, respectively. Responses were collected between September and November 2024.

3 Mirroring the broader experiences within society, as AI is increasingly used in the public and private sector (Crane et al (2025)).
Consistent with the priority given to AI exploration, the vast majority of central banks are
actively discussing its use cases (Graph 2.A). Meanwhile, they are also concretely increasing the resources
allocated to this area (Graph 2.B). Almost half of the respondents intend to invest at least 5% of their
budget in AI/ML projects in the next three years, with a few planning to invest more than 10%. This marks
an important change compared with current financial plans, where the share allocated to AI initiatives is
generally less than 5%. Certainly, these expected budget increases may also reflect the anticipated high
cost of running the new applications to be developed. Yet, cheaper solutions are emerging, with costs
expected to go down over time, not least thanks to declining prices in IT computing power and the
growing availability of freely accessible open models supporting data science (Araujo et al (forthcoming)).
AI is an important topic for discussion in central banks, with a budget set to grow
In per cent of respondents Graph 2
[Graph: expected impact of AI/ML per functional domain – cyber security, statistics, research, financial stability, payments, prudential supervision and monetary policy – rated from “high” to “not sure”.]
1 Share of the expected impact from AI/ML (from “high” to “not sure”) per each functional domain in the next two years.
Turning to the central banks’ statistical function, AI is also expected to bring important
benefits for both data producers and users (UNECE (2023, 2025a)). The survey confirms that innovative
tools can greatly support statistical compilation – in particular for data analysis and processing, at least on
an exploratory basis (Graph 4.A). 4 Reported use cases also include identifying outliers and generating
synthetic data (Graph 4.B), an important topic for central banks seeking to facilitate access to their micro
data sets while safeguarding sensitive information (Brault et al (2024), Drechsler and Haensch (2023)).
In contrast, central banks expect AI to have a milder – though still promising – impact on
their core policy mandates. First, a number of applications can support key tasks in the monetary policy
areas, for instance inflation forecasting (Araujo et al (2024)) and macroeconomic modelling (Kase et al
(2025)). Regarding financial stability, AI can help sift through textual information to better anticipate
episodes of macro-financial distress (FSB (2024)) and, at the microprudential level, to support regulatory
compliance and risk-based supervision (Crisanto et al (2024), Dohotaru et al (2025)) or develop stress tests
(Petropoulos et al (2022)). Finally, AI is also expected to impact other areas of central banking such as
payments supervision. For example, it can help detect fraud (Desai et al (2024)) or monitor money
laundering in real time (BISIH (2024)).
The survey also shows that virtually all surveyed central banks already use generative AI tools
(Graph 5.A), in particular to generate text and code (Graph 5.B). Another top use case is AI chatbots – a
common software application designed to converse with users – to assist with various tasks, for instance
to extract, summarise, translate or draft documents (Handa et al (2025)). Some central banks have
4 Cf IMF’s StatGPT and TalkToManuals chatbots that aim to modernise data processing, dissemination and discoverability for producers and users; see Kroese (2024) and Ribarsky (2025).
The use of AI/ML by central banks’ statistical functions mostly relates to data processing and analysis
Number of responses Graph 4
A. Statistical applications of AI/ML across the phases of the data life cycle1 B. Large variety among the reported use cases
1 For the main sequences of the production of official statistics as described in the Generic Statistical Business Process Model (GSBPM).

Graph 5 – A. Almost all reporters use generative AI… B. …especially for text and code generation
Another key highlight from the survey is that the potential of AI has yet to be fully explored, as the
development of actual applications has remained relatively limited to date.
First, there is a gap between expectations and projects actually implemented across the
main functional domains of central banking (Graph 6.A). In theory, one would expect that a higher
anticipated impact would correlate with a higher number of applications. Yet, in practice, this is not always
true. For example, in cyber security, the number of reported use cases is trailing expected impact, possibly
reflecting significant barriers to implementation, including IT and staff resources constraints (Aldasoro et
al (2024)). Other areas with a low number of concrete projects despite high hopes include payments and
microprudential supervision. In contrast, the development of AI-based applications has been quite marked
– and even sometimes surpassing expectations – in statistics and economic research, in particular for
nowcasting, sentiment analysis and outlier detection (Graph 6.B).
A. Expected impact and current applications of AI/ML1 B. Reported AI/ML use cases by application scope2
Normalised scores, 1–5 % of responses
[Panel A compares expected impact with current use cases across domains (financial stability, monetary policy, cyber security, payments, communication, research). Panel B categories: economic modelling and sentiment analysis; information generation/search, including chatbots; data-related operations, including outlier detection; productivity; other.]
1 Expected impact is calculated as the average of the responses rated on a scale from 1 to 5 (1 = not sure; 2 = not impactful at all; 3 = slightly impactful; 4 = moderately impactful; 5 = highly impactful). The number of current use cases is presented normalised on a scale from 1 (min) to 5 (max). 2 Pilot or ongoing use cases also included. Respondents could indicate more than one answer.
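The normalisation described in the graph footnote can be written out explicitly: min-max rescaling maps raw use-case counts onto the same 1 (min) to 5 (max) scale as the impact scores. A short sketch with hypothetical counts (not survey data):

```python
def normalise(counts):
    """Min-max rescale raw counts to the 1 (min) - 5 (max) scale:
    1 + 4 * (x - min) / (max - min)."""
    lo, hi = min(counts), max(counts)
    return [1 + 4 * (x - lo) / (hi - lo) for x in counts]

# Hypothetical counts of reported use cases per domain.
print(normalise([2, 8, 14]))
```
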
Central banks are mostly exploring AI/ML applications, with limited use for day-to-day operations1
In per cent of respondents Graph 7
1 Percentage of respondents indicating each state of AI/ML adoption (“In production” = multiple use cases deployed in production; “Small-scale production” = limited use cases in production; “Development” = few pilot projects; “No AI/ML in use” = no projects either in production or development).
3. Enhancing governance
As adoption grows, central banks are increasingly recognising the importance of establishing clear
principles governing the way data are managed and securely used to support multifaceted AI-based
applications. In particular, a key insight from the IFC survey is that central banks have been actively working
on tailoring their existing data governance and management to the specificities of AI to both mitigate its
risks and promote innovation efficiently.
With a growing number of AI applications being considered, it is increasingly felt that adequate governance
should be set up to fully harness their benefits while also mitigating the many risks. This reflects the
fact that the AI technology can be associated with hazards of various types, raising concerns over cyber
activity, invasive surveillance, privacy infringements and opacity (OECD (2024c)).
Box A
Implementing AI governance
This box summarises surveyed central banks’ experiences in operationalising AI governance, focusing on four core
components, namely data governance, organisation, rules and risk management:
1. Data governance involves adequate documentation to clarify policies, standards and procedures, inter alia,
that ensure data quality – especially in terms of accuracy, transparency and provenance, security and
responsible use. Common solutions typically include establishing comprehensive data management systems,
documenting data (eg through metadata on the sources, version and curation), maintaining asset inventories
(eg data catalogues) and metadata registries, using standards and enabling efficient data exploration for
both humans and machines, for instance through application programming interfaces (APIs).
2. Organisational structures are very often interdisciplinary and range from dedicated centralised solutions
(eg steering committees/commissions, programme management offices, multi-modal structures) to hub-
and-spoke systems (ie overall network with multiple functional hubs) and fully decentralised approaches
(eg governance set up at the level of the business lines). Accountabilities for AI can also be assigned to
existing roles (eg chief data/information/technology officer) and/or units (eg IT department) or, on the
contrary, to newly created functions. Some central banks have also adopted more informal organisational
setups, such as communities of practice or cross-functional networks.
3. Guidelines and policies commonly include terms of use for AI systems, including their modalities, interfaces
and data (eg models, third-party dependencies, upstream/downstream data). Survey responses also refer
to guidelines for responsible AI use, privacy standards, explainability requirements and, more broadly,
international principles such as the Fundamental Principles of Official Statistics. In addition, specific attention
is put on documenting the methods and/or processes to regularly assess AI outputs for accuracy, reliability
and fairness (eg human oversight, automated evaluation, reviews of input contents).
4. Risk management frameworks are critical to assess risks and conduct ongoing monitoring of AI systems.
Reported solutions include regular audit of AI systems for vulnerabilities, evaluation of the robustness of
safety measures, the development of incident response procedures, business continuity plans and processes
to promote and maintain skills (ie to prevent erosion of knowledge due to overreliance on AI systems). They
are typically implemented by leveraging existing risk management frameworks (such as the three lines of
defence, “3LoD”).
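Component 1 above mentions machine-readable asset inventories and metadata registries. A minimal sketch of what a data catalogue entry and a completeness check might look like; the field names and the entry itself are illustrative assumptions, not an established standard.

```python
# Provenance fields a catalogue entry must carry (illustrative assumption).
REQUIRED = {"name", "source", "version", "owner", "classification", "last_curated"}

# Hypothetical catalogue entry for a micro data set.
dataset_entry = {
    "name": "securities_holdings_q4",
    "source": "supervisory reporting",
    "version": "2.1",
    "owner": "Statistics Department",
    "classification": "confidential",
    "last_curated": "2024-11-30",
}

def validate(entry: dict) -> list:
    """Return the required provenance fields missing from an entry."""
    return sorted(REQUIRED - entry.keys())

print(validate(dataset_entry))          # empty list: entry is complete
print(validate({"name": "fx_rates"}))   # lists the missing fields
```

Because such entries are plain structured records, they can be exposed to both humans and machines, for instance through an API as mentioned above.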
A key highlight of the survey is that the governance of AI appears to be still in its early phase
in many central banks. Only about a third of the respondents have a policy for the use of AI and a
tiny minority of them have made it publicly available (Graph 8). In contrast, the majority is either only just
starting to develop AI policies or simply does not have any plans to do so for the moment. This trend is
especially pronounced in emerging market economies (see Box B). A key reason for this could be the gap
between the high speed of innovation, on the one hand, and the slower pace of designing and formalising
institutional arrangements, on the other hand (OECD (2024c)). The latter typically takes time to develop,
as it involves several stakeholders and must be consistent with available resources and strategic priorities.
The survey shows that central banks are well aware of the risks posed by AI. Indeed, they rank the need
to address the associated operational risks at the top of their priorities (Graph 9.A). Specifically, they see
data privacy and cyber security as primary issues (Graph 9.B), recognising the importance of safeguarding
their information in secure IT environments.
Fortunately, central banks have developed a number of solutions to mitigate such risks
effectively, thanks to their long-standing experience in managing data. First, they often have established
comprehensive risk management frameworks, which can be instrumental in defining AI risk profiles,
evaluating projects and protecting information accordingly (CGRM (2025)). 5 A typical example relates to
information classification rules, which can be easily adapted from existing data governance frameworks to
address AI/ML risks specifically, for instance for dealing with data breaches or information leakages.
A second solution is to restrict the use of AI tools. This appears to be a popular solution among
central banks for confidentiality reasons, as indicated by three quarters of the respondents (Graph 10.A).
In practice, such restrictions can be implemented in various ways, ranging from web filtering solutions to
block specific external domains to the prohibition of AI tools in restricted areas. Another often cited
solution is to permit the use of AI services with confidential information only within separate and secure
environments, such as on-premises infrastructure often without internet access.
5 The setup of such risk management frameworks can benefit from existing guidelines, such as those produced by the US National Institute of Standards and Technology (NIST); see NIST (2024).
A. Skills shortage and addressing risks are key barriers B. Privacy, cyber security and biases are top concerns1
Number of responses Normalised scores, 1–5
[Panel A (barriers, grouped into institutional, operational, technological and human resources): skills shortage, addressing risks, IT costs, system integration, talent retention, technological solutions, budget, institutional agility, regulations, culture and leadership. Panel B (concerns): privacy, cyber security, bias, reputation, explainability, intellectual property, ethics, regulations, expertise and dependency.]
1 Normalised scores from 1 to 5 (1 = not sure; 2 = not impactful at all; 3 = slightly impactful; 4 = moderately impactful; 5 = highly impactful).
Restrictions and policies to ensure the safe and ethical use of AI in central banks (Graph 10)
A. Most central banks have restrictions on AI tools B. Boards of directors often oversee responsible AI
A third option is to design adequate policies to deal with AI-specific issues. A key focus has
been ethical issues, such as biases and the lack of explainability (Graph 9.B), reflecting the importance
given to using trustable AI models (Ali et al (2023)) as well as to securing the fairness, quality and
transparency of the information managed by central banks to support their mandates. In practice, the
survey shows the board of directors plays a key role at the organisational level in overseeing and raising
awareness about AI ethical issues. It is supported in this endeavour by existing policies, practices and
principles (Graph 10.B), with due consideration of widely recognised international standards supporting
data governance, such as the Fundamental Principles of Official Statistics (Willis-Núñez and Ćwiek (2022)).
A first result is that development and deployment of AI are equally important priorities for central
banks irrespective of their location. Specifically, the survey shows a strong interest in this topic among EMEs,
especially in Asia and Oceania (Graph B1.A). This also materialises in significant budget commitments: around half of
EME respondents anticipate investing at least 5% of their financial resources in AI/ML in the next three years, against
less than a quarter of AEs (Graph B1.B).
However, as regards AI governance frameworks, around half of EME respondents appear to lack
dedicated guidelines, whereas most AEs have begun developing them (Graph B1.C). Moreover, many EMEs still had
no intention to develop any AI policies at the time of the survey.
Another highlight is that only a small share of EME respondents have set up an overarching AI
coordination function (28% versus 52% for AEs). They have been taking a highly decentralised approach to the
management of AI projects (including pilot projects), reported to be the case for 72% of them (versus 63% for AE
respondents; Graph B1.D).
Lastly, the diversity of the above picture underlines the value for central banks in AEs and EMEs to share
experiences and best practices in advancing AI implementation.
EME central banks highly value AI, but their governance appears less developed
In per cent of respondents Graph B1
A. There is strong interest in AI from all central banks, especially in Asia… B. …which is also confirmed by expected large budget increases…1 C. …but most of them are not developing AI policies… D. …while AI projects are managed in a decentralised way
1 Percentage of respondents which expect to invest at least 5% of their budget in AI/ML projects in the next three years.
Sources: IFC survey on AI and ML (2024); authors’ calculations.
AEs here comprise Australia, Canada, Denmark, euro area jurisdictions, Japan, Norway, Sweden, Switzerland and the United States. EMEs include all other IFC respondents (see Annex 2).
While their primary focus is often risk mitigation, comprehensive governance frameworks can also help
coordinate and foster innovation. A key reason is that they can provide a common approach to
addressing complex AI-related business needs more consistently and effectively, by developing
synergies and avoiding overlaps and/or inconsistencies.
Thus far, the survey shows that around two thirds of AI/ML use cases have been implemented
on an ad hoc basis by the various business areas in central banks, compared with only a fifth managed
by a dedicated unit (Graph 11.A). Such a decentralised approach certainly provides greater agility and
speed in implementing tailored solutions that are relevant to user needs, reducing the risk of having overly
ambitious plans that consume resources without delivering concrete business value. However, the
heterogeneous proliferation of use cases across the organisation can also be sub-optimal. For one, the
lack of coordination can hinder the sharing of experiences between business areas. In addition, AI-related
risk management practices may not be sufficiently supervised, and the strategic objectives set at the
organisation level might not be properly enforced.
A. AI/ML projects are mostly developed at the level of business areas… B. …suggesting that bank-wide AI governance structures are still in development… C. …although central functions such as the CDO can help coordinate and promote innovation
The above trade-offs suggest that there can be merit in having either a centralised or federated
structure for governing AI across the various business areas. Indeed, the survey shows that 39% of the
respondents have established an AI-specific central structure (Graph 11.B), often assigning it to functions
outside the IT department, such as the chief data officer (CDO) or digital innovation officer (Graph 11.C).
On the other hand, a smaller share of central banks coordinate AI projects through a more federated
approach, that is by delegating several responsibilities to individual business areas within an overall
comprehensive framework (UNECE (2024a)). However, this overall picture may be changing quickly looking
ahead, considering that more than half of the central banks reported having not yet established any AI
governance function at the time of the survey.
Most of the large AI projects involve external staff and last less than two years
Number of responses1 Graph 12
1 The size of the flow represents the overall count per collected use case. 2 Team size refers to staff full-time equivalents (FTEs).
Indeed, a related lesson from the survey is that AI projects typically entail strong partnerships,
with almost half of them spanning over at least three departments (and often involving IT or statistics
teams; Graph 13). Such synergies can be facilitated by a comprehensive governance framework to
[Graph 13: departmental composition of AI project teams – IT, statistics and other; Data Science Hub; other than statistics and IT; IT only; statistics and other non-IT; IT and statistics.]
The opportunities offered by AI are a good reminder of the importance of modernising business and IT
processes to drive higher automation, productivity and performance. Yet, while central banks have already
gained substantial experience in developing comprehensive IT and data platforms, the integration of AI
systems into their existing infrastructures appears to have generated multiple challenges, with the need to
address important trade-offs.
One prominent highlight of the survey is that AI can be instrumental in strengthening a wide range of
central bank processes, starting with IT tasks (Graph 14.A). As computer programming is often very
structured and repetitive in nature (Chui et al (2023)), there is ample scope for improving productivity
through greater automation, for instance in coding through software copilots (Kuhail et al (2024), Moradi
Dakhel et al (2023)). 6
A second key area for central banks is data management and processing. AI and ML have
proved to be helpful in streamlining many time-consuming phases of the data life cycle, for instance by
automating data collection, transformation (“data wrangling”) and quality checks. They can also greatly
support data analysts by providing real-time insights on large and/or complex “big data” sources and
tapping into novel data types (IFC (2023a)). In particular, generative AI, especially NLP-based tools and
LLMs, can be instrumental in converting unstructured textual information – which is typically heavily used
in central banking – into structured inputs, allowing for enhanced analysis.
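To make the unstructured-to-structured step concrete, here is a minimal rule-based sketch in Python. The function, patterns and field names are illustrative only; production pipelines would rely on NLP models or LLMs rather than regular expressions, but the target output shape is the same.

```python
import re

# Hypothetical example: turn an unstructured policy statement into a
# structured record. The patterns below are toy rules, not a real pipeline.
def extract_rate_decision(text: str) -> dict:
    """Parse a policy rate decision out of free text (toy patterns)."""
    action = re.search(r"(raise|lower|hold)\w*\s+.*?rate", text, re.IGNORECASE)
    pct = re.search(r"(\d+(?:\.\d+)?)\s*(?:per cent|%)", text)
    return {
        "action": action.group(1).lower() if action else None,
        "new_rate_pct": float(pct.group(1)) if pct else None,
    }

statement = "The committee decided to raise the policy rate to 4.25 per cent."
record = extract_rate_decision(statement)
```

The structured record can then feed downstream analysis in a way the raw text cannot.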
Third, albeit to a lesser degree, the survey suggests that AI can also have a positive impact on
the wider range of central banks’ administrative and communication activities (IFC (2023b),
UNECE (2025a)). For instance, generative AI can help optimise text-based routine tasks such as translation,
summarisation or information extraction. AI-powered solutions can also support communication
specialists to design and produce non-textual material, such as images, audio and video, in an efficient
and more creative way (Graph 14.B).
6
A software coding copilot is a type of AI-based tool designed to assist software developers (much like a copilot in an aircraft)
in writing code, for instance through suggestions and auto-completion. While they have many advantages, copilots can raise
issues in terms of licences and software copyrights, cyber attacks and the risk of inadequate business controls, especially when
used without any human involvement.
The integration of AI into existing IT systems can be a complex and challenging process, not least because
of the risk of disrupting current solutions. It also poses major technical, financial and operational issues,
including the need for massive computational resources and scalable data platforms.
First, the development and deployment of AI-based tools may require substantial
investments in high-performance computing infrastructure, especially in terms of advanced graphics
card processing. 7 Other challenges include soaring energy consumption and large carbon footprints
(Wang et al (2024), Garg et al (2025)). Yet there are ways of reducing the costs related to compute-
intensive tasks. A prominent one is to rely on model inference to generate content based on a pre-trained
model instead of training a new model, as the latter requires heavy computing resources. Another is to
rely on cloud services, which may enable the use of customised hardware on demand, thus facilitating
greater computation without significant in-house investments (see Box C).
Another significant issue is the need for scalable data platforms, for at least two reasons. First,
AI applications require powerful IT infrastructures to be able to process vast amounts of data in real time,
while also ensuring reliability, availability and performance. Second, they also necessitate platforms that
handle unstructured data types, such as text from news, social media, earnings reports or trading
communications. As a result, a growing number of central banks are investing in modern data storage
solutions, including NoSQL databases 8 and cloud-based storage (IFC (2020)).
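The document-store model behind many NoSQL databases can be illustrated in a few lines of Python: records are schemaless JSON documents rather than fixed-column rows. The TinyDocStore class below is a toy in-memory stand-in, not a real database API; an actual deployment would use a product such as MongoDB or Elasticsearch.

```python
import json

# Minimal sketch of a document store: heterogeneous JSON documents with
# no fixed schema, queried by field matching.
class TinyDocStore:
    def __init__(self):
        self._docs = {}

    def put(self, doc_id: str, doc: dict) -> None:
        self._docs[doc_id] = json.loads(json.dumps(doc))  # store a copy

    def find(self, **filters) -> list:
        """Return documents whose fields match all given filters."""
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in filters.items())]

store = TinyDocStore()
store.put("news-1", {"type": "news", "source": "wire", "text": "..."})
store.put("rep-1", {"type": "earnings_report", "source": "filing",
                    "fiscal_year": 2024})  # different fields are fine
news = store.find(type="news")
```

Note that the two documents carry different fields, which a relational table would not accommodate without schema changes.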
[Graph 14: A. AI’s largest impact is expected to be in coding and analytics. B. Automation and productivity are the most anticipated benefits (scores across automation, productivity, collaboration and creativity). Normalised scores from 1 to 5 (1 = not sure; 2 = not impactful at all; 3 = slightly impactful; 4 = moderately impactful; 5 = highly impactful).]
7
Examples include central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs) and neural
processing units (NPUs, also known as AI accelerators or deep learning processors). AI/ML applications may also require highly
performant and scalable configurations, such as multi-node clusters for distributed computing.
8
For instance, such “not only SQL” databases can handle unstructured information that does not fit the relational model.
Over the past few years, central banks have progressively expanded their use of cloud services, benefiting from their
flexible, modular and scalable solutions. With the growing adoption of AI, the cloud has gained even greater traction
as a cost-effective means of meeting the demand for more computing power, for instance for high-performance
graphics processing units (GPUs).
Cloud services, also abbreviated to “cloud”, refer to a model of delivering computing resources, software
applications and data storage over the internet, on demand and on a pay-as-you-go basis, allowing users to access
and utilise computing resources without the need for physical infrastructure or maintenance. There are multiple
models of cloud deployment. On the one hand, clouds can be public whenever their services are open for use by
everyone, like those offered by Alibaba, Amazon Web Services, Google, IBM and Microsoft (Azure). On the other hand, private or
community cloud services can be used exclusively by specific organisations, offering greater autonomy and control
over sensitive data and infrastructure. Between these two extremes, there are also hybrid forms which feature the
scalability of public clouds while also enabling the stringent security and compliance functionalities of private ones.
Clouds can also be categorised into three groups depending on the level of management and control
offered to clients. First, software as a service (SaaS) is a fully managed service, with the third-party provider
responsible for all aspects of the application, from installation to maintenance and upgrades. By contrast, infrastructure
as a service (IaaS) allows clients to manage and configure virtualised computing resources such as servers, storage
and networking. Finally, platform as a service (PaaS) falls in between, offering a managed platform for developing,
running and managing applications, where the client controls the application and data but not the underlying
infrastructure.
With the spread of AI applications in central banks, outsourcing computational infrastructure to the
cloud has become a pressing issue. A key reason for this is that the cloud enables the use of customised hardware
on demand, thus facilitating greater computing power without significant in-house investments. A second advantage
is scalability, which makes it possible to add computational power while keeping costs relatively low. Indeed, the
survey shows that one third of respondents are using the cloud for regular business activities, including as an
architectural choice for AI/ML applications (Graph C1.A). Furthermore, most cloud services enhance operational
resilience and facilitate software updates and maintenance, thus ensuring the latest features and security patches.
However, the journey towards the cloud does not come without challenges. A key concern is data
privacy and security, especially when operations are outsourced to third-party vendors. Further, cloud services may be
targets of cyber attacks owing to their high profile, possibly undermining the protection of central banks’
sensitive data. Another important issue is data sovereignty. Use of the cloud may entail loss of control over one’s
data, especially when hosting facilities are not bound by commercial agreements and/or are located outside the
central bank’s reach, potentially raising legal and geopolitical risks. Another concern is the high market power
concentrated in a few providers, as limited competition could hinder the ability to negotiate contracts and lead to unfair
pricing practices.
Reflecting the trade-offs above, central banks appear in practice to prefer on-premises cloud, allowing
them to manage their infrastructure in private data centres. The survey shows that two thirds of respondents either
do not allow (26%) or are only experimenting with (39%) external providers (Graph C1.A), while more than 40% of
respondents prefer on-premises self-deployed solutions (Graph C1.B and C1.C).
See IFC (2020). See UNECE, Cloud for official statistics, March 2024; for an overview of public cloud use, see Gartner, “Gartner
forecasts worldwide public cloud end-user spending to surpass $675 billion in 2024”, press release, 20 May 2024. See
OMFIF (2023). See N Haefner, V Parida, O Gassmann and J Wincent, “Implementing and scaling artificial intelligence: A review,
framework, and research agenda”, Technological Forecasting and Social Change, vol 197, December 2023. See S Verhulst,
“Operationalizing digital self-determination”, Data and Policy, vol 5, no e14, April 2023. See United Nations Commission on
International Trade Law, Notes on the main issues of cloud computing contracts, 2019.
[Graph C1: A. Access to cloud services is being progressively adopted (% of responses)… B. …but on-premises solutions are the top architectural choice for AI/ML (% of responses)… C. …although several options are being evaluated by central banks (number of responses).]
The above opportunities and challenges imply that central banks may face a number of trade-offs when
deciding on their IT infrastructure, namely in-house versus generic market solutions and security versus
performance.
The first trade-off is whether to develop customised software solutions or use off-the-shelf
products. The former, typically developed in-house, are arguably better tailored to user requirements but
may require significant investment in terms of resources, expertise and time. By contrast, off-the-shelf
applications curated by third-party vendors are often quicker to deploy, less costly to implement and
easier to maintain, yielding a significant advantage for rapidly evolving AI-based applications. However,
proprietary solutions come with limitations, including third-party dependency – with potential
over-reliance of the customer on the vendor for maintenance, updates and support – as well as reduced
scalability, adaptability and flexibility to address specific business cases. Hence, a key objective is to strike
a balance between these often conflicting objectives. For one, developing a modular and interoperable
architecture can reduce dependency by allowing the organisation to integrate various components from
multiple vendors. Moreover, the use of open source software (OSS) can enhance flexibility to meet
business needs and reduce the risk of vendor lock-in (Box D).
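The modular-architecture idea can be sketched in Python: business logic is written against a small in-house interface, so a vendor component can later be swapped for an open source or in-house one without touching the rest of the system. The Summariser interface and both implementations below are hypothetical, not real products.

```python
from typing import Protocol

# Sketch of a vendor-neutral interface: the rest of the codebase depends
# only on this Protocol, never on a specific vendor SDK.
class Summariser(Protocol):
    def summarise(self, text: str) -> str: ...

class VendorASummariser:
    def summarise(self, text: str) -> str:
        return text.split(".")[0] + "."      # stand-in for a vendor API call

class InHouseSummariser:
    def summarise(self, text: str) -> str:
        words = text.split()
        return " ".join(words[:8]) + ("…" if len(words) > 8 else "")

def build_digest(summariser: Summariser, texts: list) -> list:
    # Business logic depends only on the interface, not on any vendor.
    return [summariser.summarise(t) for t in texts]

digest = build_digest(
    InHouseSummariser(),
    ["Inflation eased in March. Core prices were flat."],
)
```

Swapping `InHouseSummariser()` for `VendorASummariser()` changes the provider without changing `build_digest`, which is the point of the interface.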
Balancing security with performance is the second key trade-off when it comes to the
implementation of AI solutions in central banks. On the one hand, ensuring the security of AI systems, for
instance through encryption and access control techniques, is essential when handling sensitive
information. On the other hand, strict security measures can lead to excessive latency, low processing
Box D
Like any software, AI models can be either open source or closed source. The choice of one approach over the
other has significant implications for their development, deployment and maintenance.
On the one hand, closed source applications refer to software or models whose underlying code and
architecture are proprietary and only accessible to developers or owners. Commercial vendors typically provide
support, maintenance and updates. In exchange, users are often required to adhere to licensing agreements and may
be limited in their ability to customise the solution.
On the other hand, open source approaches make the underlying code, and often the software framework
used to build and deploy AI models, openly available. They can be used for any purpose and can be inspected or
modified without restrictions. For example, models such as BERT and RoBERTa are freely available and can be adapted
for various NLP tasks. Another key advantage is the limited development cost. Open source models also offer greater
flexibility, portability and transparency, particularly because everyone can scrutinise their specifications.
However, they also present some notable limitations.
They often require more expertise and resources to develop and maintain, as users are responsible for ensuring the
integrity and security of the code. Moreover, knowledge about possible vulnerabilities may be more easily accessible
to malicious actors.
Reflecting the above trade-offs, central banks have in practice adopted a hybrid approach,
implementing both open and closed source AI models (Graph D1.A). The survey suggests that they tend to favour
open source models, mainly for reasons of cost-effectiveness and lower dependency (Graph D1.B).
Central banks use both open and closed AI models and prefer open source ones
for cost and dependency reasons
[Graph D1, in per cent: A. Both closed and open source models are used. B. Cost-effectiveness and lower dependency are critical factors in choosing open source AI. C. Python is the top programming language for AI/ML (Python, alone or in combination with R, versus R and other languages).]
Turning to programming languages, central banks clearly report favouring the use of open source
ones. Specifically, Python appears to be the top choice for AI/ML applications for almost three quarters of use cases,
sometimes in combination with other languages such as R, Java and Julia (Graph D1.C). Python’s popularity can be
attributed to its simplicity, readability and flexibility as well as the extensive range of tools and libraries available.
Noteworthy examples include TensorFlow, Keras and PyTorch, which support the design, development and
deployment of ML models, while Python’s wider ecosystem extends to data analysis, automation and web
development.
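As an illustration of the concise, readable style that underpins Python’s popularity for ML work, here is a one-variable linear fit by gradient descent in plain standard-library Python. This is a pedagogical sketch; real projects would reach for libraries such as PyTorch or scikit-learn instead.

```python
# Fit y = w*x + b by gradient descent on mean squared error.
def fit_line(xs, ys, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data generated by y = 2x + 1
```

The fitted parameters converge close to the true values w = 2 and b = 1.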
According to opensource.org, “Open Source AI is an AI system made available under terms and in a way that grant the freedoms
to: use the system for any purpose and without having to ask for permission; study how the system works and inspect its
components; modify the system for any purpose, including to change its output; and share the system for others to use with or
without modifications, for any purpose”. See E Seger et al, Open-sourcing highly capable foundation models: an evaluation of
risks, benefits, and alternative methods for pursuing open-source objectives, Centre for the Governance of AI, September 2023.
Making the most of the opportunities provided by AI calls for establishing a clear roadmap ahead,
with four main areas of focus: (i) securing the quality of the statistical information; (ii) improving the
global information infrastructure, especially as regards adequate data access and sharing as well as
collaboration in terms of best practices; (iii) developing modern, metadata-driven and interoperable data
management processes; and (iv) advancing user literacy in data science and AI (Graph 15).
As AI-generated outputs are shaped by the quality of their input data, securing and enhancing the
quality of statistical information appear to be key priorities. In practice, however, AI is mostly trained on
vast data reservoirs that are poorly documented and that originate from non-authoritative sources outside
of the rigorous standards of official statistics. Hence, there is a risk that poor data quality and scarce
metadata can misdirect the training of models and, potentially, drive AI to generate misinformation
(Garrett (2024), Sirello et al (forthcoming)). The survey suggests that central banks could play a useful role
on this front by acting as reference “data curators” along with the other institutions responsible for official
statistics.
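Automated quality gates of the kind a data curator might run before information feeds a model can be sketched as follows. The checks, field names and thresholds are illustrative only, not a specific institution’s practice.

```python
# Toy quality report for a time series: completeness and range checks
# before the data is allowed to feed a model. Bounds are illustrative.
def quality_report(series: dict, lo: float, hi: float) -> dict:
    values = list(series.values())
    return {
        "n_obs": len(values),
        "n_missing": sum(v is None for v in values),
        "out_of_range": [p for p, v in series.items()
                         if v is not None and not (lo <= v <= hi)],
    }

report = quality_report({"2024-01": 2.1, "2024-02": None, "2024-03": 250.0},
                        lo=-20.0, hi=50.0)
```

A pipeline would block or flag the series here, since one observation is missing and another falls outside the plausible range.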
First, they have been leading efforts to strengthen information documentation, that is, the data
about the data, or “metadata”. Concretely, this means using information standards, version control,
persistent identifiers and open source formats (Gottron and Suranyi (2025), US DoC (2025)). Data and metadata
should also be open, findable, accessible, interoperable and reusable, following the FAIR data principles
(UNESCO (2023), Wilkinson et al (2016)).
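A minimal sketch of such a machine-readable metadata record, combining a persistent identifier, versioning and an open format (JSON). The field names and the identifier below are illustrative, not a specific metadata standard.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative metadata record: persistent identifier + version + open
# serialisation. Field names are hypothetical, not a formal standard.
@dataclass
class DatasetMetadata:
    persistent_id: str        # e.g. a DOI or institutional identifier
    title: str
    version: str
    licence: str
    source: str
    unit: str

meta = DatasetMetadata(
    persistent_id="doi:10.0000/example-cpi",   # hypothetical identifier
    title="Consumer price index, monthly",
    version="2.1.0",
    licence="CC-BY-4.0",
    source="national statistical office",
    unit="index, 2015 = 100",
)
serialised = json.dumps(asdict(meta), indent=2)
```

Because the record is plain JSON with explicit fields, both humans and AI systems can consume it without bespoke parsing.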
A second, important consideration is to make the information machine-readable, as users may
increasingly access it indirectly through AI systems or machines (UNECE (2025b)). In addition to better
standardised data and metadata, this calls for disseminating AI-relevant documentation to help
both developers and users better understand the sources and underlying assumptions of the models
used (Mitchell et al (2019), McMillan-Major et al (2024)). 9 Another option is to ensure that dissemination
tools such as data portals can be easily scraped and crawled to feed algorithms and models.
Ultimately, central banks also have an interest in securing the quality of information in the
broader data ecosystem, not least because of the growing volume of private data feeding into AI systems.
As producers of official statistics, they have a long-standing track record in disseminating high-quality
data following universally recognised principles. 10 Along with statistical offices, they can leverage this
experience to play a data curator role in the broader data ecosystem, for instance by setting guidelines
and identifying, monitoring and closing data gaps (Križman and Tissot (2022)). This may also call for setting
up a global data framework to improve the availability and reuse of high-quality data in the AI age (UN
HLAB-AI (2024)).
Improving the global data infrastructure through adequate data access, sharing and
collaboration
AI is fundamentally a data-intensive technology. Thus far, most models have been trained on publicly
available information, including official sources and non-official ones such as the internet. Yet, these resources
are finite, and some data providers are increasingly restricting access to their data (Villalobos et al (2022)).
This raises the risk of data bottlenecks and, ultimately, of a “tragedy of the data commons” (Jones (2024),
Verhulst (2024), Longpre (2025)).
Against this background, facilitating data access and adequate sharing clearly emerges as a
priority moving forward. Fortunately, central banks have extensive experience in this endeavour at
different levels (IFC (2023c)). Within the organisation, they have set up comprehensive approaches to
managing their data assets, for instance through master data management systems, common metadata
repositories and data catalogues. They have also facilitated interoperability through statistical standards
9
For example, the European Union Regulation (EU) 2024/1689 encourages developers of AI models “to implement widely
adopted documentation practices, such as model cards and data sheets, as a way to accelerate information-sharing along the
AI value chain, allowing the promotion of trustworthy AI systems”; see eur-lex.europa.eu.
10
For example, see the Fundamental Principles of Official Statistics (UN (2014)).
Box E
Forging partnerships with private stakeholders can benefit central banks willing to access new information
sources and leverage AI techniques. A first reason for this is that a large share of alternative data – especially
unstructured data – are in the hands of private actors. Accessing them can complement official statistics or help cope with
sudden episodes of “statistical darkness” during unforeseen events. It can also help reduce the burden on
data reporters. Lastly, public authorities have a keen interest in engaging with private providers of AI-
based solutions to ensure that their data inputs are of sufficient quality.
In practice, there are already examples of successful public-private collaboration to use official data
sources and in turn improve the quality of AI systems. In fact, several central banks, such as the US Federal Reserve,
have worked on setting up adequate frameworks for using private information sources for their own statistical
production. Another noteworthy initiative is the Data Commons, a project led by Google to provide a unique
platform for accessing official statistics, such as those supporting the UN Sustainable Development Goals. This can
help link correctly vetted databases that are publicly available on the internet and train AI models more
accurately, thus reducing hallucination risks. Another example is the direct collaboration of central banks with
financial actors. For instance, the Bank of England has launched an initiative to identify how AI could be used in UK
financial services while mitigating risks. This echoes other initiatives ongoing at the global level, such as the one by
the Financial Stability Board.
Yet partnerships with the private sector may also raise a number of challenges. One limitation is the
volatility of private data sources. When access is granted for free, this often occurs on a voluntary basis and can
be interrupted at short notice. Even in the presence of commercial contracts, there is a risk of excessive vendor lock-in,
in addition to the potentially high costs involved. Private sources may also have various methodological limitations –
such as biases – as they are usually not bound by the strict quality standards of official statistics.
The above suggests that public authorities have a keen interest in better collaboration with private
initiatives in the field of data and AI. Clearly, official statisticians should have a leading role in this endeavour, as
they oversee the enforcement and monitoring of reference data and statistical standards at the national and
international levels. Case in point, the UN Committee of Experts on Big Data and Data Science for Official Statistics is
working on a global programme on emerging data and technologies such as AI. Additionally, the UN High-Level
Advisory Body on AI is providing guidance on the development of AI governance as well as common data
frameworks.
See BIS (2024). See P Gennari, “Data equity and official statistics in the age of private sector data proliferation”, Statistical Journal of
the IAOS, vol 40, no 4, November 2024, pp 757–64 and De Beer and Tissot (2021). See Križman and Tissot (2022). See S Verhulst,
“Unlock the hidden value of your data”, Harvard Business Review, May 2020 and Fraisl et al (2024). Cf the DBpedia project (J Lehmann, R
Isele, M Jakob, A Jentzsch, D Kontokostas, P Mendes, S Hellmann, M Morsey, P van Kleef, S Auer and C Bizer, “DBpedia – A large-scale,
multilingual knowledge base extracted from Wikipedia”, Semantic Web, vol 6, no 2, January 2015). Since 2024, the Data Commons has been
largely used to train LLMs such as DataGemma; cf P Ramaswami and J Manyika, “DataGemma: Using real-world data to address AI
hallucinations”, The Keyword, September 2024. See Bank of England (2024). See FSB (2024). See UNSC, “Revised terms of
references of UNCEBD and its task teams”, Background document, no 3(q), Fifty-sixth session, March 2025. See UN HLAB-AI (2024).
11
An example is the National Data Library initiative in the United Kingdom; see Tobin (2024).
[Graph: A. Sharing knowledge, code and use cases is a priority (normalised scores, 1–5, across workshops, sharing use cases, co-investments, pre-trained models and collaboration with experts).¹ B. …yet most central banks do not share AI/ML code or models (% of responses).²
1 Normalised scores from 1 to 5 (1 = not sure; 2 = not impactful at all; 3 = slightly impactful; 4 = moderately impactful; 5 = highly impactful). 2 Respondents could indicate more than one answer (“Not shared” = no code is shared outside or within the central bank; “Shared privately” = code is shared within the central bank only or with similar national authorities; “Shared publicly” = code is shared with the public, including through the institution’s website).]
12
See Recommendations 13 and 14 of the third phase of the G20 DGI; see IMF et al (2023, 2024) and imf.org.
13
For instance, open AI systems are typically free to use, inspectable, editable and shareable; see opensource.org.
Making the most of AI opportunities calls for further modernising current data management
processes, not least to facilitate the use of novel data types and the linking of multiple information sources.
For central banks, this first implies further efforts to adapt their existing data and IT
infrastructures to store, integrate and protect diverse information types, for instance in dedicated
data lakes and hubs (Graph 17.A). 14 Such platforms, combined with metadata-driven solutions such as
common metadata registries and data inventories, can facilitate data access for both internal and external
users, including researchers. They can also ensure greater automation, adequate cyber security and
protection of sensitive information (Graph 17.B). Additionally, the establishment of single access points
and common data spaces offers another avenue to improve data access, reuse and sharing, as shown by
the European data strategy (European Commission (2024a,b)).
Second, the demand for pooling large and complex data sets may accelerate central banks’ move
towards cloud-based services (IFC (2020), OMFIF (2023)). These can be a cost-effective solution to meet
and tailor computational needs to the requirements of AI projects. Yet, they also present some important
trade-offs, especially in terms of security, dependency and development of internal knowledge, which
central banks must carefully weigh against their specific constraints (see Box C above).
Third, managing vast amounts of data also calls for enhanced interoperability of data processes
and systems. In practice, this implies adopting statistical standards and data formats such as SDMX across
the data life cycle, including for micro and geospatial information (IFC (2025a)). Relatedly, promoting
unique identifiers and linking registers at the national and global levels can further support
interoperability. Lastly, implementing application programming interfaces (APIs) may also facilitate the
automation of processes and enable seamless integration, communication and exchange across various
information systems.
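The value of shared series keys and APIs for interoperability can be illustrated with a simplified payload loosely inspired by SDMX-style keys ("FREQ.REF_AREA.INDICATOR"). The message format below is a deliberate simplification for illustration, not real SDMX-JSON, and the figures are made up.

```python
import json

# Illustrative only: keyed series in a simplified, SDMX-inspired layout.
# Shared keys are what make data from different systems machine-joinable.
payload = json.dumps({
    "series": {
        "M.US.CPI": {"2024-01": 308.4, "2024-02": 310.3},
        "M.DE.CPI": {"2024-01": 118.2, "2024-02": 118.7},
    }
})

def flatten(message: str) -> list:
    """Turn keyed series into flat (freq, area, indicator, period, value) rows."""
    rows = []
    for key, obs in json.loads(message)["series"].items():
        freq, area, indicator = key.split(".")
        for period, value in sorted(obs.items()):
            rows.append((freq, area, indicator, period, value))
    return rows

rows = flatten(payload)
```

Because every series carries the same structured key, downstream systems can merge, filter or pivot the rows without source-specific code.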
Generative AI, data quality, platforms and cloud are the top priorities ahead
[Graph 17: number of responses. A. Generative AI, data platforms, data quality and cloud as priorities in the near future. B. Other priorities also include quantum computing and cyber security.]
14
Case in point, the IMF Big Data Center launched in 2024 to consolidate available data sets and fully leverage them for AI
(Georgieva (2024)).
Supporting the development of data science innovations such as AI calls for building adequate in-house
expertise. Certainly, central banks can already draw on their experience in analysing, managing and
producing data. They are also able to leverage their multidisciplinary teams, which are typically proficient
in IT tools and statistical and data techniques (Araujo et al (2023)). They also plan to further pursue their
efforts to recruit adequate profiles such as cyber security specialists, data scientists and engineers
(Antonucci et al (2023)). Yet, attracting and retaining AI- and data-savvy talent may pose several
challenges. These may include the global shortage of adequate skills and related fierce competition in the
labour market as well as a reported lack of attractiveness and limited career prospects in central banks
(see Graph 9.A above).
Fortunately, there are many options. One is to increase the flexibility of sourcing and working
arrangements. Relying on contractors can be a worthwhile solution given that AI-related work is often
time-bound and skill-specific. Indeed, the survey shows that most central banks’ projects involve external
consultants, especially in larger teams (see Graph 12 above). Another solution is to offer specialised career
tracks and promote internal knowledge by setting up AI-specific training curricula for staff. 15 Moreover,
establishing communities of practice can be helpful to share knowledge within the organisation, among
central banks and with external stakeholders, such as academic institutions. Initiatives at the international
level include, for example, IFC data science activities (IFC (2025b)) and the newly established BIS Innovation
Hub (BIS (2024)).
More broadly, the increased ability of wider audiences to use innovative techniques underscores
the need to educate users and ensure adequate AI literacy. Generative AI applications in particular
have empowered almost everyone to use complex analytical techniques, for instance through simple
chatbots. While beneficial, this trend may present a number of challenges, including
algorithmic opacity, a poor understanding of AI systems and, ultimately, a gradual erosion of knowledge
due to overreliance on AI (Garrett (2024), UNECE (forthcoming)). Against this backdrop, options could include
developing programmes to support adequate skills as well as offering methodological guidance and
tailored communication to users to advance AI awareness and literacy for the public good.
15
Cf the ECB action plan to foster AI skills and ensure that the technology is used safely (Cipollone (2024)). Along the same lines,
the Bank of England offers AI fluency programmes for its data experts on top of basic AI literacy courses (Benford (2024)).
Aldasoro, I, S Doerr, L Gambacorta, S Notra, T Oliviero and D Whyte (2024): “Generative artificial
intelligence and cyber security in central banking”, BIS Papers, no 145, May.
Ali, S, T Abuhmed, S El-Sappagh, K Muhammad, J Alonso-Moral, R Confalonieri, R Guidotti, J Del Ser,
N Díaz-Rodríguez and F Herrera (2023): “Explainable artificial intelligence (XAI): What we know and what
is left to attain trustworthy artificial intelligence”, Information Fusion, vol 99, no 101805, November.
Antonucci, L, A Balzanella, E Bruno, C Crocetta, S Di Zio, L Fontanella, M Sanarico, B Scarpa, R Verde and
G Vittadini (2023): “Data science skills for the next generation of statisticians”, Statistical Journal of the
IAOS, vol 39, no 4, November, pp 773–82.
Aquilina, M, D Araujo, G Gelos, T Park and F Pérez-Cruz (forthcoming): “Harnessing artificial intelligence
for financial market monitoring”, BIS Working Papers.
Araujo, D, G Bruno, J Marcucci, R Schmidt and B Tissot (2023): “Data science in central banking: applications
and tools”, IFC Bulletin, no 59, October.
Araujo, D, S Doerr, L Gambacorta and B Tissot (2024): “Artificial intelligence in central banking”, BIS Bulletin,
no 84, January.
Araujo, D, S Nikoloutsos, R Schmidt and O Sirello (forthcoming): “Open sourcing central banks”, IFC
Guidance Note.
Bank for International Settlements (BIS) (2024): “Artificial intelligence and the economy: implications for
central banks”, Annual Economic Report 2024, Chapter 3, June, pp 91–127.
BIS Innovation Hub (BISIH) (2024): “Project Aurora: the power of data, technology and collaboration to
combat money laundering across institutions and borders”, press release, 27 March.
Bank of England (2024): Artificial Intelligence Consortium – Terms of reference.
Benford, J (2024): “TRUSTED AI: Ethical, safe, and effective application of artificial intelligence at the Bank
of England”, speech given at the Central Bank AI Conference, London, 25 September.
Brault, J, M Haghighi and B Tissot (2024): “Granular data: new horizons and challenges for central banks”,
IFC Bulletin, no 61, July, pp 1–24.
Chui, M, E Hazan, R Roberts, A Singla, K Smaje, A Sukharevsky, L Yee and R Zemmel (2023): The economic
potential of generative AI: the next productivity frontier, McKinsey & Company, June.
Cipollone, P (2024): “Artificial intelligence: a central bank’s view”, speech at the National Conference of
Statistics on official statistics at the time of artificial intelligence, Rome, 4 July.
Consultative Group on Risk Management (CGRM) of the BIS Representative Office for the Americas (2025):
Governance of AI adoption in central banks, January.
Crane, L, M Green and P Soto (2025): “Measuring AI uptake in the workplace”, FEDS Notes, February.
Crisanto, J, C Benson Leuterio, J Prenio and J Yong (2024): “Regulating AI in the financial sector: recent
developments and main challenges”, FSI Insights on policy implementation, no 63, December.
De Beer, B and B Tissot (2021): “Official statistics in the wake of the Covid-19 pandemic: a central banking
perspective”, Theoretical Economics Letters, vol 11, no 4, August.
Desai, A, A Kosse and J Sharples (2024): “Finding a needle in a haystack: a machine learning framework for
anomaly detection in payment systems”, Bank of Canada Staff Working Paper, no 2024-15, May.
AI Artificial intelligence
API Application programming interface
BERT Bidirectional Encoder Representations from Transformers
BIS Bank for International Settlements
BISIH BIS Innovation Hub
CDO Chief data officer
CGRM Consultative Group on Risk Management
DGI Data Gaps Initiative
EU European Union
FSB Financial Stability Board
G20 Group of Twenty
Gen AI Generative artificial intelligence
GPT Generative pre-trained transformer
GPU Graphics processing unit
GSBPM Generic Statistical Business Process Model
IFC Irving Fisher Committee on Central Bank Statistics
IMF International Monetary Fund
IT Information technology
LLM Large language model
ML Machine learning
NIST National Institute of Standards and Technology
NLP Natural language processing
NSO National statistical office
OECD Organisation for Economic Co-operation and Development
SDMX Statistical Data and Metadata eXchange
TPU Tensor processing unit
UN FPOS United Nations Fundamental Principles of Official Statistics
UN HLAB-AI United Nations High-Level Advisory Board on Artificial Intelligence
UNECE United Nations Economic Commission for Europe
1. On a scale from 1 (low priority) to 3 (high priority), how important is AI/ML to your
institution’s strategic goals over the next two years?
Low
Normal
High
2. On a scale from 1 (not discussed) to 3 (extensively), how much does your institution
formally discuss the topic of AI/ML for internal usage?
Not discussed
Moderately
Extensively
3. What percentage of the total budget of your institution is intended to finance AI/ML
projects? (Optional)
Less than 5% | 5–9.9% | 10–19.9% | 20–39.9% | More than 39.9%
3 years ago
Today
3 years ahead
B. Expectations
4. Please rate the impact you expect from AI/ML in the following functional domains in the
next two years: (please select one option per row among the following: highly impactful;
moderately impactful; slightly impactful; not impactful at all; not sure)
Monetary policy
Statistics
Research
Financial stability (except prudential supervision)
Prudential supervision
Oversight of payment systems
Cyber security
Other
6. Please rate the impact you expect from AI/ML in the following aspects in your institution in
the next two years: (please select one option per row among the following: highly impactful;
moderately impactful; slightly impactful; not impactful at all; not sure)
Operational efficiency and productivity
Collaboration and knowledge-sharing
Automation
Creativity and innovation
Cost efficiency
Engagement with external users
Other
8. In which functional domains does your institution use AI/ML? (Please select all that apply)
Monetary policy
Statistics
Economic research
Financial stability (except microprudential supervision)
Prudential supervision
Oversight of payment systems
Cyber security
Communication
Other
10. Please describe (i) the main projects currently under way in your institution using AI/ML
(including pilot projects); (ii) the main data sources used; (iii) the purpose of the projects;
(iv) the type of activity; (v) the platform and application used; (vi) time invested; and (vii) the
team involved (eg fully external, mix of internal/external, 100% dedication, mixed with regular business)
(i) Project name | (ii) Data sources | (iii) Purpose | (iv) Type of activity | (v) Platforms/applications | (vi) Time | (vii) Team
11. Please describe the same elements for the future projects planned for the next two years
(Optional)
(i) Project name | (ii) Data sources | (iii) Purpose | (iv) Type of activity | (v) Platforms/applications | (vi) Time | (vii) Team
13. If yes, please select the applicable use cases (Please select all that apply)
Text generation (using ChatGPT; Copilot; others)
AI transcription services (used in meetings)
Generative AI interface to community (chatbots; search tools)
Code generation, debugging and documentation
Non-textual content generation (images, audio, video)
Other
14. Does your institution currently have recommendations/guidelines for the use of AI?
Yes, publicly available
Yes, internally available
Currently elaborating
No
17. If your institution has a CDO, what is the primary focus of this function? (Optional)
Offence/growth/innovation
Defence/regulatory/efficiency
Other
18. What is the state of data and AI responsibility and ethics in your institution? (Please select
all that apply)
Board of directors well versed in data and AI issues and responsibilities
Well established policies and practices in place
Industry has done enough to address data and AI ethics
Other
19. What are the biggest challenges to business adoption of AI/ML? (Please select all that apply)
Lack of institutional alignment/agility
Cultural resistance
Lack of staff with adequate skills
Difficulty retaining talent
Executive leadership
20. Which AI/ML risk does your institution consider relevant? (Please select all that apply)
Confidentiality and privacy
Intellectual property infringement
Inaccurate output (bias)
Cyber security
Lack of explainability
Lack of workforce expertise
Regulatory compliance
Institutional reputation
Dependency on external providers
Increase in IT-related costs
Other
21. Please describe measures taken or planned to mitigate those risks (Optional)
E. IT stack
23. Do you have access to any cloud infrastructure for AI/ML purposes? (if yes, please specify
below which technologies you are using) (Optional)
In regular business
For testing only
Not allowed
Other
25. When using generative AI and large language models (LLMs), what is your strategy for
choosing between closed models (eg OpenAI’s ChatGPT) and open models (eg Meta’s Llama 3)?
(Optional)
Usually explore both closed and open models and use the model that performs best
Usually use closed models only
Usually use open models only
Generative AI and LLMs are not currently used
26. Do you have any restrictions on the use of internal, confidential or sensitive data in AI
tools? (eg separated environments for AI tools)
Yes
Not yet (ongoing work)
No
27. If yes, how are those restrictions carried out? (Please describe)
F. Collaborative strategies
28. What are the main avenues for central bank collaboration to address the challenges of AI
going forward? (Please select all that apply)
Workshops
Consolidated guidelines for the use of AI and best practices
Sharing use cases and code
Co-investment opportunities
Collaboration with AI/ML experts from other institutions
Access to pre-trained models (eg LLMs, Hugging Face)
Other
30. In your opinion, which emerging technology topics will concentrate interest and resources
in the near future? (Please select all that apply)
Generative AI
Data platform/data lake
Data quality/data health
Cloud migration
Other