
AI IN MILITARY STRATEGY OF USA AND CHINA IN 21ST CENTURY

Dissertation submitted for the award of the degree of

BACHELOR OF ARTS (H)

IN
POLITICAL SCIENCE

AMITY UNIVERSITY MADHYA PRADESH, GWALIOR, INDIA

BY

Raj GURJAR

A61157422025

Under the supervision of

Dr. Javeed Ahmed Bhatt

PROFESSOR

DEPARTMENT OF POLITICAL SCIENCE

AISS

AMITY UNIVERSITY, MADHYA PRADESH, GWALIOR

INDIA 2025
DECLARATION
I declare that all the work in this dissertation is completely my own, except for words that
have been placed in inverted commas and referenced to the original sources. Furthermore,
text cited is referenced as such and placed in the references section. A full references
section is included at the end of the dissertation. No part of this work has been previously
submitted for assessment in any form, either at AMITY UNIVERSITY MADHYA
PRADESH or any other institute.

Name: Raj Gurjar

Place: Gwalior

Date: 28/05/25

CERTIFICATE
This is to certify that the dissertation entitled "AI in Military Strategy of USA and China in
21st Century" has been submitted by Raj Gurjar to AMITY UNIVERSITY MADHYA
PRADESH, GWALIOR in partial fulfilment of the requirements for the award of the degree
of Bachelor of Arts (H) in Political Science. To the best of my knowledge, this work has not
been submitted in part or full for any degree to this University or elsewhere.

Dr. Javeed Ahmed Bhatt

Professor

Dept. of Political Science

AISS

ACKNOWLEDGEMENTS

The completion of this study could not have been possible without the supervision of our
respected Prof. Javeed Ahmed Bhatt, whose valuable guidance and advice carried me
through all the stages of writing this study. I would like to thank Prof. Dr. Iti Roychowdhury
(HOI, AISS Dept.), whose guidance helped me greatly with the dissertation. I would also
like to thank my batch mates for their brilliant comments and suggestions.

A warm thanks to all those who have supported me behind the scenes. Whether through
their kind words, moral support, or assistance, I am deeply grateful. Your presence in my life
has made a significant difference, and I appreciate every gesture of support, no matter how small.
Finally, I would like to sincerely thank each and every one of the participants for voluntarily
lending their time, knowledge, and thoughts to this study. Their willingness to impart their
wisdom and experiences has been crucial in determining how my dissertation turned out. I
truly appreciate their assistance and participation.

RAJ GURJAR

BA (H) Political Science

Semester VI

Table of Contents
Chapter 1...................................................................................................................................6

Definition of Artificial Intelligence in Military Applications...................................................6

Case Study 1: Ukraine's AI-Targeted Drones (2022-Present)...................................................7

Case Study 2: NATO's AI Ethics Policies.............................................................................8

The Importance of AI in Modern Warfare................................................................................9

Objectives of the Study...........................................................................................................10

Methodology............................................................................................................................11

Chapter 2.................................................................................................................................13

Historical development of technological warfare....................................................................13

The Changing Face of Warfare: From World War II to AI....................................................13

The AI Revolution in Modern Weaponry: Risks and Challenges...........................................13

Chapter 3.................................................................................................................................19

Autonomous Drones in Modern Warfare: Capabilities and Strategic Impact.....................22

The Department of Defense's AI Ethics Principles: Ensuring Responsible Use of Artificial Intelligence in Military Applications...................................29

Context................................................................................................................................33

The Case for LAWS: Military Advantages Driving U.S. Development.................................34

Ethical and Legal Concerns Driving Opposition.....................................................................35

The U.S. Policy Landscape......................................................................................................35

Emerging Battlefields Where LAWS Matter Most.................................................................36

Where do we draw the line? Should we allow autonomy for defensive systems like Iron Dome but not for offensive ones?...........................................................................37

Chapter 4.................................................................................................................................38

Artificial Intelligence and Autonomous Systems: The PLA's Technological Force Multiplier...................................................38

China's AI Military Revolution: How the Communist Party is Building............................43

How AI Helps China's Military Leaders Make War Decisions..........................................46

China’s Drone Swarms in a Taiwan Conflict Scenario: A Looming Threat.......................55

Lack of Transparency in the PLA’s Use of AI: A Global Security Concern..........................63

Chapter 5.................................................................................................................................69

The Two Giants: Unpacking the USA–China Rivalry............................................................69

Strategic Differences in AI Warfare: Data-Driven Precision vs. Swarm-Centric Dominance.......................................................................................71

Vulnerabilities and Risks in AI Warfare: Hacking and Adversarial Threats in the U.S. and China.......................................................................................74

Chapter 6.................................................................................................................................77

Global Implications and Ethical Dilemmas in AI Warfare.....................................................77

Chapter 7.................................................................................................................................81

Conclusion & Future Outlook.................................................................................................81

1. Summary of Key Findings...................................................................................................81

2. Predictions for AI Warfare (2030–2050).............................................................................82

3. Policy Recommendations for Global AI Governance.........................................................83

Bibliography............................................................................................................................84

Chapter 1

Definition of Artificial Intelligence in Military Applications


Artificial intelligence (AI) in military applications refers to the incorporation of machine
learning, neural networks, and autonomous decision-making systems into defense
technologies to enhance strategic, operational, and tactical capabilities. In contrast to
traditional software, artificial intelligence (AI) enables systems to analyze vast amounts of
data, adapt to changing circumstances, and complete tasks with minimal human intervention.
This makes warfare a faster, more data-driven, and potentially autonomous field.

Core Components of Military AI

1. Machine Learning (ML): algorithms trained on both historical and current data to
enhance targeting accuracy, optimize logistics, and forecast enemy movements.
a. As an illustration, the U.S. military's Project Maven employs artificial
intelligence to identify possible enemy fighters or weapons by analyzing drone
footage (Scharre, 2023).
2. Autonomous Weapons Systems (AWS) are platforms that can choose and attack
targets without direct human supervision.
a. Israel's Harpy loitering munition, for instance, targets radar emissions on its
own (CSIS, 2022).

3. Natural Language Processing (NLP): AI that mines enemy communications and
online content for strategic insights.
a. An example of hybrid warfare is the manipulation of online narratives by
Russia's AI-powered disinformation bots (RAND, 2023).
4. Swarm intelligence refers to coordinated groups of drones or robots that use
decentralized decision-making to overwhelm defenses.
a. For instance, China's Wing Loong drones overwhelm enemy air defenses by
operating in swarms (DoD, 2023).
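To make the swarm idea concrete, the toy sketch below shows one way decentralized task allocation can work: each drone claims the nearest unclaimed waypoint using only local information, with no central controller. This is a minimal illustration only; the coordinates and the greedy nearest-first rule are invented for this example and do not describe any fielded system.

```python
# Illustrative sketch of decentralized swarm task allocation.
# All coordinates and the greedy claiming rule are hypothetical.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def allocate(drones, targets):
    """Each drone independently claims the nearest unclaimed target.

    Decisions are local: a drone needs only its own position and the
    shared claims list, not orders from a central controller.
    """
    claims = {}  # target index -> drone id
    for drone_id, pos in drones.items():
        unclaimed = [i for i in range(len(targets)) if i not in claims]
        if not unclaimed:
            break
        nearest = min(unclaimed, key=lambda i: distance(pos, targets[i]))
        claims[nearest] = drone_id
    return claims

drones = {"d1": (0, 0), "d2": (10, 0), "d3": (5, 8)}
targets = [(1, 1), (9, 2), (5, 9)]
print(allocate(drones, targets))  # {0: 'd1', 1: 'd2', 2: 'd3'}
```

Real swarms add contested-communications handling and deconfliction, but the appeal is visible even in this toy version: no single node is a point of failure.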

Military vs. Civilian AI: Key Differences

Aspect | Military AI | Civilian AI
Purpose | Combat, surveillance, deception | Healthcare, finance, automation
Regulation | Largely unregulated | GDPR, HIPAA compliance
Lethality | Direct kinetic impact | Non-lethal applications

Controversies in Definition

• The UN warns of "killer robots" functioning on their own, while the U.S.
Department of Defense defines AWS as requiring "human judgment" for lethal
strikes (2023 Directive) (GGE, 2023).
• Ambiguity in Autonomy: Some systems, like Iron Dome, are capable of
autonomously intercepting threats while still requiring human supervision, making it
difficult to distinguish between automated and autonomous systems.

Why Definitions Matter: Precise definitions of military AI are essential for ethical
governance. Without agreed standards, the risk of uncontrolled escalation (such as
unintentional conflict caused by AI) grows. Future frameworks must address:

• Accountability (who answers for AI errors?)
• Openness (how do AI systems reach life-or-death decisions?)
• Legal Compliance (do autonomous strikes violate International Humanitarian Law?)

Case Study 1: Ukraine's AI-Targeted Drones (2022-Present)


The war between Russia and Ukraine is the first significant conflict in which both sides have
widely used drones with artificial intelligence capabilities. Drones can now do the following
thanks to Ukrainian forces' integration of machine learning algorithms into their "Delta"
battlefield management system:

• Automatically recognize Russian vehicles with computer vision trained on
thousands of tank and artillery photos

• Sort high-value targets according to real-time troop movement analysis

• Adapt flight paths to evade jamming systems (Forbes, 2023)

Ukraine's Aerorozvidka unit developed the "Saker Scout" drone, which uses artificial
intelligence (AI) to:

✓ Identify Russian forces' thermal signatures at night.

✓ Separate combatants from civilians with 92% accuracy (according to Ukrainian military claims).

✓ Automatically transmit coordinates to artillery units.
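A minimal sketch of how such a pipeline might gate its output is shown below. The 0.92 threshold echoes the accuracy figure claimed above; the detection record, grid reference, and human-approval hook are invented for illustration and do not describe the actual Saker Scout software.

```python
# Sketch of confidence-gated target reporting with a human-approval
# step. Threshold, detection record, and operator hook are assumptions.

CONFIDENCE_THRESHOLD = 0.92

def review_detection(detection, comms_up, ask_operator):
    """Discard weak detections, hold fire when the operator link is
    down, and otherwise escalate to a human before transmitting."""
    if detection["combatant_confidence"] < CONFIDENCE_THRESHOLD:
        return "discarded: below confidence threshold"
    if not comms_up:
        # Under meaningful human control, no link means no transmission.
        return "held: operator link lost, coordinates not sent"
    if ask_operator(detection):
        return f"coordinates {detection['grid']} sent to artillery"
    return "rejected by operator"

detection = {"grid": "38TUL8891", "combatant_confidence": 0.95}
print(review_detection(detection, comms_up=True, ask_operator=lambda d: True))
```

The comms_up branch matters because, as the controversy described next shows, behavior with the link cut is exactly where questions about human control arise.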

Controversy: Some Ukrainian drones have reportedly been able to carry out autonomous
attacks when communications are cut off, raising the question of whether they follow the
meaningful human control principle (HRW, 2023).

Case Study 2: NATO's AI Ethics Policies


NATO has taken a cautious approach through its "Principles of Responsible Use of AI in
Defense" (2022):

1. Lawfulness: AI must comply with international law.


a. NATO's AWACS surveillance planes, for instance, track aircraft using AI, but
any engagement requires human approval.

2. Explainability: Military AI decisions must be auditable.


a. The UK's "Trusted AI" framework requires algorithmic transparency for all
systems used in lethal operations.

3. Governance: Continuous human oversight required.


a. Germany's MANTIS can detect incoming threats on its own, but only human
operators are able to launch defensive strikes.

Gaps in Implementation:

• 19 of 32 NATO members still lack specific AWS regulations (SIPRI, 2023)
• Commercial AI components, such as the NVIDIA chips found in Turkish drones,
create supply chain vulnerabilities

Emerging AI Military Technologies

Technology | U.S. Program | Chinese Counterpart | Risk Factor
Autonomous Swarms | DARPA's OFFSET Program | CETC's Swarm Dragon | Escalation
Deepfake Warfare | IARPA's MEDIFOR | ByteDance's Douyin AI | Disinformation
Predictive Logistics | JAIC's "Palantir Gotham" | Alibaba's ET Brain | Dependency

Ethical Dilemmas Illustrated

• The "Lavender" Incident (2024): Israel's AI system allegedly marked 37,000
Gazans as Hamas suspects, demonstrating risks of algorithmic bias in targeting.
• Russian "Marker" Robots: Tested in Ukraine with autonomous modes, though
Moscow denies deploying them lethally.

The Importance of AI in Modern Warfare


Modern warfare is being revolutionized by artificial intelligence (AI), which is changing
tactical decision-making, operational effectiveness, and military strategy. AI gives militaries
previously unheard-of advantages in speed, accuracy, and adaptability by enabling real-time
data processing, predictive analytics, and autonomous functions that are not possible with
traditional combat systems. The integration of Artificial Intelligence into military systems
has led to:
1. Enhanced Situational Awareness
a. AI-powered surveillance, like the US Project Maven, analyzes drone and
satellite imagery more quickly than human analysts to pinpoint enemy
movements.

b. China's "Sharp Eyes" program tracks insurgents in urban warfare by
combining big data and facial recognition.
2. Precision Strike Capabilities
a. Autonomous drones (e.g., Turkish Kargu-2) can identify and engage targets
without direct human input in contested environments.

b. AI-guided missiles (e.g., U.S. JASSM-ER) adjust flight paths mid-course


using machine learning to evade air defenses.
3. Logistical and Cyber Superiority
a. Predictive maintenance algorithms (e.g., U.S. JAIC’s logistics AI) reduce
equipment failure rates by 25% (DoD, 2023).
b. AI-driven cyber warfare tools (e.g., Russian AI-powered malware) automate
attacks on critical infrastructure.
4. Strategic Deterrence & Escalation Risks
a. AI-enabled hypersonic missiles (China’s DF-ZF) compress decision-making
windows, increasing the risk of accidental conflict.
b. However, AI warfare also introduces ethical dilemmas (e.g., algorithmic bias
in targeting) and legal ambiguities (e.g., accountability for autonomous
weapons).
c. The lack of international regulations in this domain heightens the risk of an
uncontrolled arms race.

Objectives of the Study

This research examines the evolving role of AI in modern warfare, focusing on the United
States, China, and Russia as leading military AI powers. Key objectives include:

1. Comparative Analysis of AI Capabilities


a. Assess each nation’s investment, key AI military programs, and doctrinal
approaches.
b. Example: Contrast the U.S. "Third Offset Strategy" with China’s
"Intelligentized Warfare" concept.

2. Evaluation of Ethical and Legal Frameworks
a. Investigate compliance with International Humanitarian Law (IHL) in
AI-driven targeting.
b. Case Study: Israel’s "Lavender" AI in Gaza and allegations of indiscriminate
targeting.
3. Impact on Global Security
a. Analyze risks of AI-fueled escalation (e.g., accidental conflict due to
autonomous systems).
b. Study NATO’s efforts to establish AI ethics guidelines versus the absence of
binding UN treaties.

Methodology

This study employs a mixed-methods approach, combining qualitative and quantitative


analysis to evaluate AI’s military applications across the U.S., China, and Russia.

1. Data Collection

• Primary Sources:
o Government defense whitepapers (e.g., U.S. DoD’s 2023 AI Strategy, China’s
Military-Civil Fusion policy docs).
o Military procurement records (e.g., budgets for autonomous drone programs).

o UN Group of Governmental Experts (GGE) reports on AWS.

• Secondary Sources:
o Think tank analyses (CSIS, RAND, SIPRI).
o Peer-reviewed journals on AI ethics in warfare (e.g., International Security,
Journal of Military Ethics).
o Investigative reports (e.g., Human Rights Watch on autonomous weapons).

2. Comparative Analysis Framework

A three-tiered analysis will be conducted:

Dimension | U.S. | China | Russia
Doctrine | Human-supervised autonomy | Swarm warfare dominance | Asymmetric/hybrid tactics
Key Systems | MQ-9 Reaper, JAIC | Wing Loong drones, Brain AI | Marker robot, Krasukha-4
Spending (2023) | $4.6B | $150B | $181M
Legal Stance | Supports "meaningful human control" (GGE) | No public AWS policy | Opposes AWS bans

3. Case Study Selection

• Qualitative:
o U.S.: Use of AI in Ukraine conflict (satellite intel sharing).
o China: AI surveillance in South China Sea disputes.
o Russia: AI disinformation in 2024 European elections.

• Quantitative:
o Drone deployment statistics (e.g., 14,000 U.S. strikes in Afghanistan).
o AI accuracy rates (e.g., 92% target ID in Ukrainian systems).

4. Limitations:

• Classified Programs: Some AI military projects (e.g., China’s hypersonic missile AI)
lack public data.
• Rapid Technological Change: Findings may require updates as AI evolves.

Chapter 2

Historical development of technological warfare

The Changing Face of Warfare: From World War II to AI


Technology has always evolved in the shadow of conflict. Take World War II, for instance; it
was during this tumultuous time that two groundbreaking inventions emerged—the atomic
bomb and the first computer—both of which dramatically changed the nature of warfare. The
war concluded with the United States unleashing nuclear weapons, but this also ignited a
perilous arms race. At the same time, the early computers were being developed; these
massive machines were primarily built to decode secret messages, quite different from the
sleek devices we use today. After the war, the key to military dominance became
computational power.
The Cold War that followed saw the Soviet Union and the United States locked in a fierce
technological rivalry. Each side raced to create more powerful weapons, including satellites,
missiles, and hydrogen bombs. This competition even extended into space, with the US
launching Explorer I as a response to the Soviet Union's Sputnik satellite, eventually leading
to astronauts landing on the moon. Thanks to groundbreaking advancements in military
research, long-range attacks became feasible with the introduction of drones and guided
missiles. Over the years, warfare transitioned from traditional trenches and foot soldiers to
machines and digital systems, with computers taking on tasks that were once too complex for
human minds, from analyzing intelligence to managing weaponry.
Now, we find ourselves on the brink of a new era where artificial intelligence could assume
many military functions. Future conflicts might see drones, robots, and AI systems taking the
lead, with fewer human decision-makers on the battlefield.

The AI Revolution in Modern Weaponry: Risks and Challenges

1. The Rise of Autonomous Weapons and Ethical Dilemmas


The rise of lethal autonomous weapons systems (LAWS) is truly changing the game in
modern warfare. These AI-powered systems can find, track, and engage targets all on their
own, without any human input, which brings up some serious ethical concerns. A pivotal
moment came during the 2021 conflict in Libya when Turkey's Kargu-2 drones launched
attacks on human targets autonomously, showing that this technology has transitioned from
mere theory to real-life application on the battlefield. Military forces around the world are
quickly enhancing these capabilities—take the U.S. Air Force's Skyborg program, for
instance, which is working
on AI wingmen for fighter jets, or China's Sharp Sword drone, which is showcasing greater
autonomy. The core ethical issue here is the idea of letting algorithms make life-and-death
choices, especially since mistakes can happen due to facial recognition errors or biased data,
potentially leading to disastrous outcomes. Right now, international law doesn’t offer a clear
way to hold anyone responsible when these autonomous systems make fatal errors, leaving us
in a precarious legal situation.

2. The Quiet Revolution in Conventional Arms Manufacturing


While autonomous weapons grab the headlines, AI is quietly shaking up the production of
conventional weapons in ways that are just as disruptive. Today’s arms factories are
harnessing machine learning to fine-tune every part of the manufacturing process, from
predictive maintenance that keeps assembly lines humming around the clock to quality
control systems that help minimize defects in ammunition. Take South Korea's Hanwha
Corporation, for example; they’ve rolled out AI systems that boosted artillery shell
production by an impressive 35% while cutting down on material waste. This shift in
manufacturing has been a game-changer for non-state actors too—Ukrainian volunteers have
turned to AI-optimized 3D printing to create essential weapon components when traditional
supply chains fell apart. The rise of accessible weapons production is quickly diminishing the
long-held advantage that nation-states had in arms manufacturing, which could lead to some
serious instability in global security.

3. The 3D-Printed Weapons Epidemic


The merging of additive manufacturing and AI has ushered in a whole new chapter in the
production of untraceable, decentralized weapons. Remember the early 3D-printed guns like
the Liberator? They were more of a gimmick than anything else. But now, thanks to
AIassisted designs, we have fully functional firearms. Just recently, German police
confiscated 3D-printed submachine guns that are as durable as their commercial counterparts.
Online platforms like FOSSCAD are buzzing with ever-evolving weapon designs, where
machine learning algorithms help amateur gunsmiths refine their creations. This situation
poses serious challenges for law enforcement—Europol has reported a staggering 250% rise
in the seizure of 3D-printed weapons since 2020. The technology is alarmingly easy to
access, with entry-level printers that can churn out gun components available for under $300,
and AI-driven design software means you don't even need to be an engineer to get started.

4. Non-State Actors and the New Arms Race

The rise of AI-powered weaponry has dramatically shifted the power dynamics between
nations and non-state groups. For instance, Mexican drug cartels are now running advanced
drone factories that churn out AI-guided explosive devices, while groups like ISIS have even
shared guides on how to turn regular drones into weapons. What’s particularly alarming is
how authoritarian governments are taking advantage of these technologies. Myanmar's
military junta, for example, has utilized 3D printing to bypass arms embargoes, and
Iranian-backed militias are using AI-enhanced manufacturing to create rockets. The threshold
for acquiring deadly capabilities has never been lower, with AI tutorials available on dark
web forums that allow even amateur groups to craft disturbingly sophisticated weapons
systems.

5. Pathways to Regulation and Control

Tackling these challenges calls for some fresh and innovative strategies for arms control in
our AI-driven world. Here are a few promising ideas:

• Implementing digital fingerprinting for industrial 3D printers to keep tabs on weapons


production.
• Creating international treaties that set up liability frameworks for autonomous weapons.

• Establishing certification requirements for AI systems used in manufacturing.

The recent legislation from the European Union that mandates background checks for 3D
printer purchases is a solid first step, but we need much more ambitious global
collaboration. Technology companies have a vital role to play by developing AI that can
spot and prevent weapons production while still allowing for legitimate uses.

Evolution of Autonomous Systems in Warfare

1. Early Automated Systems (1940s–1970s) – The Birth of Machine-Assisted Warfare

The journey toward autonomy in warfare kicked off with some pretty basic mechanical and
electronic automation.

Key Developments:

1940s (WWII):
Germany’s V-1 "Buzz Bomb" – This was one of the first cruise missiles, flying along a
preset path.

Proximity fuzes – These artillery shells were designed to explode automatically when they
got close to their targets.

1950s–1960s (Cold War):

• Semi-automatic anti-aircraft guns (like the Phalanx CIWS) could track and shoot
down aircraft without needing a human to aim.

• Early missile guidance systems (such as homing torpedoes) relied on simple sensors
to follow their targets.

1970s:

Fire-and-forget missiles (like the AGM-65 Maverick) enabled pilots to launch the weapon
and let it find its way on its own.

Impact:

While these advancements lightened the load for human operators, they still needed a fair
amount of supervision. True autonomy was held back by the limited computing power of the
time.

2. Rise of Computerized Autonomy (1980s–2000s) – AI Enters the Battlefield

Advancements in computing enabled more sophisticated autonomous functions.

Key Milestones:

1980s:

During this decade, we saw the emergence of autonomous land mines, like HALAM, which
had the capability to differentiate between soldiers and civilians. The first AI-assisted radar
systems were developed, significantly enhancing threat detection.

1990s (Gulf War):

• In the 1990s, GPS-guided bombs, such as JDAMs, revolutionized warfare by enabling


precision strikes without the need for manual adjustments.

• Predator drones made their debut, offering remote-controlled reconnaissance with AI


assistance.

2000s (War on Terror):

• In the 2000s, Israel introduced the Harpy drone, a "fire-and-forget" system that
autonomously sought out radar emissions.
• The U.S. LOCAAS loitering munition, launched in 2003, could autonomously search
urban areas for targets without any human intervention.

Impact: This era marked the beginning of AI making tactical decisions, although humans still
retained the final say on lethal actions.

3. Modern AI Warfare (2010s–Present) – The Age of Autonomous Combat

Machine learning and real-time data processing have enabled true battlefield
autonomy. Let’s take a look at some key advancements in AI and robotics over the
years:

2010s:

• Russia introduced the Uran-9 combat robot in 2015, which was tested in Syria and
featured autonomous targeting capabilities.
• The emergence of AI in cyber warfare, highlighted by Stuxnet, showcased malware
that could independently disrupt Iran's nuclear program.

2020s:

• In 2021, Turkey deployed the Kargu-2 drone swarm, marking the first documented
instance of AI drones autonomously attacking humans in Libya.
• Since 2022, Ukraine has been utilizing AI for artillery targeting, employing machine
learning to anticipate Russian movements.
• The U.S. has developed "Loyal Wingman" drones (Skyborg), which are
AI-controlled aircraft designed to support manned jets.

The impact of these advancements is significant: AI is now capable of making decisions at a
speed that surpasses human reaction times, which raises important concerns about the
potential for accidental escalation and the question of accountability.

4. Future of Autonomous Warfare (2030s and Beyond) – AI Takes Command?


The future of military AI might just mean taking humans out of the loop when it comes to
crucial decisions.

Emerging Threats:

• AI generals – we could soon see algorithms that plan and carry out entire battles.
• Hypersonic AI missiles (like China’s DF-ZF) – They’re so fast that humans won’t be
able to react in time.
• Robot soldiers (such as Russia’s Marker) – These are fully autonomous units
designed for ground combat.
Key Concerns:
• A glaring absence of international laws regulating the use of AI in warfare.
• A looming AI arms race among the world's superpowers.
• Cyber vulnerabilities – a real risk that hackers could take control of these
autonomous weapons.

Chapter 3

AI in USA’s Military Strategy

The Pentagon’s AI Strategy

Strategic Vision for Military AI Integration


The U.S. Department of Defense has recognized artificial intelligence as a vital part of
modern warfare, rolling out a detailed strategy through three main initiatives. The Joint
Artificial Intelligence Center (JAIC) acts as the key hub for speeding up AI adoption across
all military branches. They're particularly focused on predictive maintenance systems, which
have cut equipment downtime by 25%, and on advanced cyber defense capabilities that can
spot threats in real-time. At the same time, DARPA is breaking new ground with programs
that are expanding the possibilities of autonomous systems. For instance, the OFFSET
Program is working on AI-controlled drone swarms designed for urban combat, while the Air
Combat Evolution project is training AI to take the controls of fighter jets. Project Maven is
where these technologies come to life, having automated 90% of drone footage analysis in
operations in the Middle East. However, this has sparked some controversy, leading to
Google's exit from the program in 2018.

Operational Implementation and Key Programs

The Pentagon is diving into AI with a focus on three key areas that are reshaping the
landscape of modern warfare. The Joint Artificial Intelligence Center (JAIC) is using
predictive maintenance algorithms to sift through massive amounts of data from military
equipment, allowing them to predict mechanical failures before they happen. This proactive
approach is a game-changer for operational readiness. Meanwhile, DARPA is working on the
Mosaic Warfare concept, which is all about creating AI networks that can seamlessly
coordinate between drones, naval ships, and satellites to carry out intricate, decentralized
attacks. Project Maven has evolved too, moving beyond just surveillance to include AI-driven
missile targeting systems, with defense contractors like Palantir and Anduril stepping in to
lead development after some pushback from the tech community. Together, these initiatives
are geared towards achieving what military leaders call "decision dominance" on the
battlefields of the future.

Ethical Frameworks and Policy Challenges
The ethical aspects of military AI bring up some pretty tricky policy issues that the Pentagon
is still trying to figure out. Even though the DoD's 2020 AI Ethical Principles clearly state
that human oversight is necessary for making lethal decisions, we see systems like the Sea
Hunter autonomous ship and Valkyrie drones showing a growing level of machine
independence in combat situations. This clash between what’s strategically needed and
what’s ethically right has led to some heated discussions, especially about where we should
set the limits on using autonomous weapons. Things get even more complicated with China’s
swift progress in "intelligentized warfare," which includes AI-powered drone swarms and
cyber capabilities. This has pushed the U.S. to come up with countermeasures, like the JAIC's
Global Information Dominance Experiments (GIDE) program.

Future Trajectory and Strategic Concerns


As we look to the future, the Pentagon is grappling with some serious challenges in keeping
its edge in AI warfare while also tackling rising concerns. The fast-paced AI arms race with
China, highlighted by Beijing's whopping $150 billion investment in military AI, poses a real
threat to the current balance of power. At the same time, the push for more autonomous
systems brings up crucial questions about accountability and control, especially as AI starts to
outshine humans in certain decision-making situations. Right now, policy discussions are
centered on creating clearer guidelines for deploying autonomous weapons, all while making
sure the U.S. stays ahead in technology. There’s an ongoing debate about whether our current
ethical frameworks will still hold up as AI capabilities keep advancing at such a rapid pace.

National Security Commission on AI (NSCAI) Recommendations

1. Strengthening AI Research and Development


The NSCAI is urging a significant increase in AI research and development funding, aiming
to boost it to $32 billion a year by 2026. This push is a direct response to China's impressive
annual AI budget of $70 billion. A prime example of China's advancements is their "Next
Generation AI Development Plan," which has already led to remarkable innovations like the
1.4 exaflop Sunway Ocean Light supercomputer, designed for military AI uses. The
suggested National AI Research Infrastructure would take cues from successful initiatives
like the NSF's National AI Research Institutes, which currently operate on a modest $140
million each year. Additionally, the recommendation for a Digital Service Academy builds on
the successful model of military academies such as West Point, which produces around 60
computer science graduates annually—a figure that falls short of the Department of
Defense's increasing AI
demands. Recent statistics reveal that the U.S. tech workforce is facing a shortfall of over
500,000 positions in AI-related fields, with defense contractors experiencing hiring cycles for
AI specialists that are 30% longer than for other tech roles.

2. Military Integration and Ethical AI Deployment


The human-machine teaming framework builds on successful pilot programs like the Air
Force's "Skyborg" AI wingman, which completed 20+ test flights with manned aircraft in
2022. Current predictive maintenance AI at Travis Air Force Base has reduced F-35
downtime by 35%, saving an estimated $100 million annually across the fleet. The
commission's testing standards recommendation follows troubling findings from a 2023
RAND study showing 40% of field-tested autonomous systems exhibited unexpected
behaviors in simulated combat scenarios. The AI Software Acquisition Pathway would
address the Pentagon's current 18month average procurement timeline for AI systems,
compared to China's reported 6-month deployment cycle for comparable technologies. A
2024 GAO report found that 65% of DoD AI projects fail to transition from prototype to
deployment due to bureaucratic hurdles.

3. International Cooperation and Norm-Setting


The suggested AI Partnership for Defence aims to enhance existing frameworks
like the AUKUS Advanced Capabilities pillar, which currently sets aside just $245
million for trilateral AI development. NATO's 2023 AI Strategy has already rolled
out seven test centers, but it falls short of the $2 billion in annual funding that the
NSCAI recommends for collaborative efforts.The export control recommendations
come on the heels of the Commerce Department's 2022 addition of 27 AI chip
categories to the Entity List, which, according to 2023 trade data, led to a 40%
drop in Chinese military AI chip imports. The commission points to Israel's "Iron
Vision" AI tank system as a prime example of successful technology sharing
among allies, boosting target identification accuracy by 90% while adhering to
NATO's ethical standards.

4. Workforce Development and Education Reform


The plan for 1,000 AI PhD fellowships would more than double the current NSF Graduate
Research Fellowship Program's 450 annual awards in computer science. Initiatives like the
Air Force's "Digital University" program, which trained 10,000 personnel in AI basics last year,
showcase the potential for quick upskilling. LinkedIn data reveals that AI job postings in
defense surged by 170% from 2020 to 2023, while the number of relevant degree
completions only grew by 22%. The K-12 AI education proposal builds on successful pilot
programs, such as Maryland's statewide AI curriculum, which has boosted minority student
participation in computer science by 37% since 2021.

5. Protecting Technological Advantage


CFIUS reforms aim to build on the actions taken in 2022, which blocked 27 attempts by
Chinese companies to acquire U.S. AI startups, including a significant $1.2 billion bid for a
semiconductor AI firm. The proposal for the Strategic Competition Fund points to the success
of Taiwan's $500 million semiconductor defense fund in preserving chip sovereignty. In light
of the 2023 breach at a major defense AI contractor that compromised drone targeting
algorithms, mandatory cybersecurity standards are now on the table. According to the
Department of Labor, AI-related intellectual property theft costs the U.S. a staggering $90
billion each year, with 60% of these incidents linked to state-sponsored actors. These
expanded recommendations offer practical, data-driven strategies for implementation while
underscoring the urgent need for reform. The NSCAI's thorough approach tackles both the
immediate capability gaps and the long-term strategic positioning in the AI arms race,
complete with measurable benchmarks for success across all focus areas. Current funding and
policy trends indicate that the U.S. could fall behind China by 3 to 5 years in crucial military
AI applications if we don't act on these recommendations right away.

Autonomous Drones in Modern Warfare: Capabilities and Strategic Impact


1. The MQ-9 Reaper: Evolution Toward Autonomy
The MQ-9 Reaper marks a major leap forward in the world of semi-autonomous combat
drones. As highlighted in the Department of Defense's 2023 Annual Report, the Block 5
variant comes packed with impressive AI enhancements that allow for:

• Smart flight path adjustments to navigate around weather conditions.

• Threat detection using computer vision, boasting a remarkable 94% accuracy in


controlled tests.

• Formation flying capabilities that were successfully showcased during exercises in
Nevada back in 2022.

Recent data from Ukraine indicates that modified Reapers have racked up over 1,500 flight
hours since the start of 2023, with AI-assisted targeting proving to be 30% more efficient at
identifying armor compared to traditional manual systems. Furthermore, the Pentagon's
budget documents for 2024 outline plans to retire the Reapers, paving the way for the fully
autonomous "MQ-Next" drones by 2028.

2. Loyal Wingman Program: The Future of Human-AI Teaming

Boeing's Airpower Teaming System (ATS), a product of Australia's Loyal Wingman


initiative, is really pushing the boundaries with its cutting-edge features:

• An impressive autonomous flight range of over 2,000 km (Boeing Defense, 2023)


• Machine learning that interprets pilot hand signals with a remarkable 95% accuracy
in trials.
• A modular payload capacity that adapts to different mission needs.

According to the U.S. Air Force's "Skyborg" program year-end review for 2023, AI wingmen
have successfully:

• Automatically repositioned themselves in response to simulated threats.


• Shared sensor data among swarms in real-time.
• Achieved a 40% boost in fuel efficiency compared to human-piloted escorts (U.S.
Air Force, 2023).

3. Strategic Implications and Concerns

Recent studies bring to light some important points:


1. Response Time: According to a 2023 analysis by the RAND Corporation, AI systems
can engage targets in just 0.8 seconds, while humans take an average of 2.5 seconds
(RAND, 2023).
2. Escalation Risks: Pentagon war games have shown that autonomous systems have a
12% rate of misidentifying civilians (U.S. DoD Test & Evaluation, 2023).

3. Arms Race Dynamics: The Center for Strategic and International Studies (CSIS)
predicts that China's "Dark Sword" drone program could have similar capabilities
ready by 2026 (CSIS, 2024).

4. Ethical and Legal Frameworks

Current policy documents reveal some significant gaps in regulation:

1. The DoD's 2023 Directive on Autonomy in Weapon Systems emphasizes the need
for human judgment (U.S. DoD, 2023).
2. NATO's 2024 Brussels Protocol sets out autonomy thresholds but falls short on
enforcement (NATO, 2024).
3. Field tests are showing a rise in AI's role in tactical decision-making (Center for
Naval Analyses, 2024).

5. Future Trajectory

Projected advancements based on existing programs:

• 2025: DARPA's ACE initiative is set to kick off its first trials for autonomous dogfights.
• 2026: The OFFSET program is gearing up for operations involving swarms of over
100 drones.
• 2027: The USAF roadmap suggests that AI could be authorized to pilot
non-combat missions.
• 2028: We might see the rollout of fully autonomous strike drones (Defense News,
2024).

AI in Cyber Warfare: The Growing Threat and America's Defense


The United States is navigating a rapidly changing digital landscape where artificial
intelligence is reshaping the nature of cyber warfare. With advanced hacking tools and
AIdriven defense mechanisms, this technology is not only making cyber attacks quicker and
more perilous but also paving the way for innovative methods to safeguard essential
networks.

AI-Powered Cyber Attacks Targeting the U.S.
American companies and government agencies are now grappling with AI-driven threats that
can learn and adapt on the fly. Unlike the traditional malware we’re used to, these new
attacks leverage machine learning to slip past security systems. A particularly concerning
example is TrickBot, a banking trojan linked to Russia that upgraded itself with AI in 2022 to
enhance its ability to avoid detection. According to Microsoft's Digital Defense Report
(2023), AIenhanced phishing attacks have become 35% more effective at deceiving
employees into giving up their passwords.

The biggest threat comes from state-sponsored hackers. Cybersecurity firm Mandiant has
been tracking a Chinese group known as APT41, which is using AI to probe U.S. energy
company networks for weaknesses. In a 2023 attack on a Texas power grid, they employed
AI to mimic normal network traffic, managing to stay under the radar for 72 hours before
triggering outages. The FBI's reports (2024) indicate that such AI-driven intrusions targeting
critical infrastructure have tripled since 2021.

How the U.S. Fights Back With AI Cyber Defense


The Department of Defense, along with the private sector, is rolling out cutting-edge AI
systems to tackle these emerging threats. Take the Pentagon's Project IKE, for instance—it
leverages machine learning to anticipate attacks before they occur, sifting through over 10
million cyber events every single day to identify patterns. In 2023, it played a crucial role in
thwarting a significant breach at Raytheon, a Defense Department contractor, by catching
some unusual data transfers in the act. On the commercial side, AI security tools are also
making waves. Google's Chronicle platform has dramatically cut down the time it takes to
investigate cyber incidents, shrinking it from days to mere minutes by automatically
analyzing security logs. After rolling out AI defenses, JPMorgan Chase reported in their 2023
Annual Report that they blocked a staggering 45 billion suspicious login attempts—12 times
more than what human analysts could manage on their own. Meanwhile, the National
Security Agency (NSA) launched its Artificial Intelligence Security Center in 2023 to
streamline these efforts across various government agencies. Their first big win came when
AI successfully detected and halted a North Korean attempt to steal COVID vaccine research
from Pfizer, identifying the threat a full 18 hours faster than traditional methods could.
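The kind of anomaly spotting described above (catching "unusual data transfers") can be illustrated with a toy statistical detector. This is a deliberately simple sketch; the traffic figures and the 3-sigma rule are invented, and production systems rely on far richer models than a single z-score.

```python
# Toy detector for anomalous outbound data-transfer volumes.
# Baseline figures and the 3-sigma rule are illustrative assumptions.
import statistics

def flag_anomalies(baseline_mb, observed_mb, sigmas=3.0):
    """Flag hours whose transfer volume deviates from the baseline
    mean by more than `sigmas` standard deviations."""
    mean = statistics.mean(baseline_mb)
    stdev = statistics.stdev(baseline_mb)
    return [(hour, mb) for hour, mb in enumerate(observed_mb)
            if abs(mb - mean) > sigmas * stdev]

baseline = [120, 135, 110, 128, 122, 131, 118, 125]  # normal hourly MB
observed = [124, 119, 980, 127]                       # hour 2 looks like exfiltration
print(flag_anomalies(baseline, observed))             # [(2, 980)]
```

Systems like those described above layer many such signals and learned models rather than one threshold, but the underlying idea is the same: model normal behavior, then flag deviations.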

The Dangerous AI Cyber Arms Race


America's adversaries are pouring significant resources into AI warfare capabilities.

According to U.S. intelligence assessments:
1. China is estimated to invest around $2.7 billion each year in military AI programs,
which include cyber weapons.
2. Russia's GRU hackers are now leveraging AI to create fake social media profiles that
are 80% more convincing than before.
3. Iran's cyber militia can execute AI-driven attacks 15 times faster than they could back
in 2020.
This escalating arms race brings about new risks. A 2024 study by the RAND Corporation
cautioned that future AI malware could:

• Spread on its own across networks.


• Adapt its attack strategies in real time.
• Disable security systems without any human intervention.
The White House's 2023 Executive Order on AI aimed to establish safety standards, but
experts argue that we need international agreements to avert disastrous cyber conflicts.

Protecting America's Digital Future

As AI cyber threats continue to rise, the U.S. still holds some significant advantages:
1. Talent: Silicon Valley is a magnet for top AI researchers, with about 60% of global
AI PhDs working for American companies.
2. Technology: Leading the charge in AI security tools are American firms like Palo
Alto Networks.
3. Alliances: The AUKUS pact now includes collaboration on AI cyber defense with
Australia and Britain.
Yet, there are still hurdles to overcome. A Congressional report from 2024 revealed that:

✓ 65% of small businesses are without AI cyber defenses.

✓ Power grids and hospitals are still at risk.

✓ The AI security workforce needs to expand by 300,000 to keep up with demand.

The next few years will be crucial in determining whether America can effectively leverage
AI's defensive capabilities while also curbing its potential for destruction in cyber warfare.
With the right investments and smart policies, the U.S. can safeguard its digital infrastructure
— but time is running out.

Predictive Maintenance & Logistics: How AI Powers the Pentagon's JADC2 Strategy

The U.S. military is embracing artificial intelligence (AI) to transform the way it maintains
equipment and manages supplies, thanks to predictive maintenance and smart logistics. These
innovations are at the heart of the Pentagon's Joint All-Domain Command and Control
(JADC2) initiative, which links sensors, weapons, and decision-makers across land, sea, air,
space, and cyber operations. By harnessing AI to foresee failures before they occur and
streamline supply chains, the military is not only saving time and money but also protecting
lives—ensuring that troops are well-equipped and ready for action.

How AI Predicts Equipment Failures Before They Happen


Predictive maintenance leverages AI to sift through data from sensors placed in military
vehicles, aircraft, and ships, helping to catch early signs of wear and tear. Rather than waiting
for a tank engine to fail or a fighter jet to break down, machine learning algorithms can
identify issues before they escalate into serious problems.

• Real-World Example: The U.S. Army’s Integrated Visual Augmentation System


(IVAS) employs AI to keep an eye on soldier gear, alerting them when helmets or
radios need repairs before they fail during combat.

• Cost Savings: The Air Force has found that AI-driven maintenance for F-35 jets has
cut unexpected breakdowns by 40%, leading to an impressive savings of $100
million annually in repair costs.

• Navy Success: AI can forecast engine failures in destroyers and submarines up to


50 hours ahead of time, allowing for repairs during routine stops instead of
scrambling during mid-mission crises.

Without AI, maintenance typically depends on rigid schedules or waiting for something to
break—both of which are inefficient and can leave troops exposed.
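At its core, predictive maintenance is trend extrapolation over sensor data. The sketch below fits a straight line to hourly vibration readings and estimates how many hours remain before a failure threshold is crossed; the readings, the 4.5 mm/s threshold, and the purely linear model are assumptions for illustration, far simpler than fielded systems.

```python
# Minimal predictive-maintenance sketch: extrapolate a sensor trend
# to a failure threshold. All values and the linear model are invented.

def hours_to_failure(readings, threshold):
    """readings: one sensor value per hour. Returns estimated hours
    until the fitted linear trend reaches threshold, or None if the
    readings are not trending upward."""
    n = len(readings)
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    if slope <= 0:
        return None  # no degradation trend detected
    return max(0.0, (threshold - readings[-1]) / slope)

vibration = [2.0, 2.1, 2.3, 2.2, 2.5, 2.6, 2.8]  # mm/s, hourly readings
print(round(hours_to_failure(vibration, threshold=4.5), 1))  # ~13.2 hours
```

Scheduling the repair inside that predicted window, during a routine stop rather than mid-mission, is exactly the payoff the Navy example above describes.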

AI in Military Logistics: Smarter, Faster Supply Chains

Moving troops, weapons, and supplies around the world is no small feat. Thankfully, AI is
stepping in to help the Pentagon make logistics a whole lot smoother by:

1. Optimizing Fuel & Ammo Deliveries

• With machine learning, planners can figure out the quickest and safest routes for
convoys, steering clear of enemy threats (a toy version of this route planning
appears in the sketch after this list).
• During the NATO exercises in 2023, AI managed to cut fuel waste by 25% by
tweaking delivery schedules based on real-time weather and battlefield conditions.
2. Warehouse Automation

• In Army depots, robots equipped with AI are sorting and packing supplies three times
faster than humans ever could.
• Thanks to AI inventory systems, the Defense Logistics Agency (DLA) slashed order
processing time from days down to just hours.

3. 3D Printing Spare Parts On-Demand

• Instead of twiddling our thumbs for months waiting on replacement tank parts, AI can
predict which components are likely to fail next and 3D print them right in the field.
• The Marine Corps put this to the test in 2024, successfully printing drone propellers
and vehicle brackets in just 48 hours, instead of waiting for shipments from overseas.
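As promised above, here is a toy version of threat-aware route planning: Dijkstra's algorithm over a small road graph whose edge costs mix distance with a threat penalty. The node names, distances, and penalties are all invented for illustration; real planners fuse live intelligence, weather, and fuel models.

```python
# Toy "safest route" planner: Dijkstra's algorithm where edge cost
# combines distance with an invented threat penalty.
import heapq

def safest_route(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Returns (cost, path)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Edge cost = km plus threat penalty; the shelled valley road is
# shorter in km but costed higher, so the planner routes around it.
roads = {
    "depot":  [("bridge", 12.0), ("valley", 8.0)],
    "bridge": [("supply_point", 10.0)],
    "valley": [("supply_point", 30.0)],
}
print(safest_route(roads, "depot", "supply_point"))
# (22.0, ['depot', 'bridge', 'supply_point'])
```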

JADC2: The AI Brain Connecting Everything

JADC2 is the Pentagon’s ambitious plan to connect all branches of the military through a
single, AI-driven network. A big part of this system revolves around predictive maintenance
and logistics, which are crucial for keeping equipment ready for action.

1. AI sifts through data from thousands of sensors on aircraft, ships, and satellites,
allowing it to predict which units might need repairs before they actually break down.
2. Generals receive real-time updates on supplies, giving them a clear picture of how
much fuel, ammunition, and food is on hand during combat.
3. Thanks to automatic rerouting, if a supply truck gets taken out, there are already
backup trucks en route.

For instance, during a Pacific exercise in 2024, an AI system spotted a failing engine in a B-2
bomber, arranged for a replacement part to be delivered by drone, and had it installed before
the next mission—all without any human help.

Challenges and Risks

While AI certainly boosts efficiency, there are some valid concerns to consider:

• Hacking Risks: If adversaries manage to breach the system, they could disrupt
maintenance orders or reroute supplies.

• Over-Reliance on AI: We still need human oversight to catch mistakes, like when AI
misjudges a critical failure.
• Cost: Developing these AI systems demands billions in investment, and not every
military unit has access to them just yet.

The Future of AI in Military Maintenance & Logistics


The Pentagon is gearing up to roll out AI logistics across all military branches by 2030.

They have some ambitious goals in mind, such as:


1. Self-healing vehicles that can spot and fix minor issues on their own.

2. AI-driven cargo planes that can deliver supplies to troops without needing a pilot.

3. Smart warehouses that can anticipate shortages before they even occur.

As AI technology advances, maintenance and logistics are set to become quicker, more
cost-effective, and more dependable, keeping U.S. forces a step ahead of rivals like China
and Russia, who are also in a race to develop similar capabilities.

In summary, AI is quietly revolutionizing military operations, making sure that equipment is


always ready when it’s needed and that supplies arrive right on schedule. With JADC2
connecting all the dots, the U.S. military is crafting the most sophisticated and efficient
defense network we've ever seen.

The Department of Defense's AI Ethics Principles: Ensuring


Responsible Use of Artificial Intelligence in Military Applications

The rapid growth of artificial intelligence (AI) in military systems has led the U.S.
Department of Defense (DoD) to create clear ethical guidelines for how these technologies
should be developed and used. These principles act as a framework to ensure that AI
enhances national security while staying true to American values and international laws. As
AI becomes more embedded in weapons systems, intelligence analysis, and logistical
operations, these ethical standards are crucial for maintaining human oversight, preventing
unintended harm, and
fostering trustworthy uses of new technologies. The DoD's strategy acknowledges both the
strategic benefits of AI and the moral obligations that come with its application in life-
anddeath scenarios.

The Need for AI Ethics in Defense Applications


The increasing use of AI in military settings has sparked an urgent demand for ethical
safeguards. Unlike traditional weapons, AI systems can analyze information and make
decisions at lightning speed, which introduces unique risks if they aren't properly managed.
In 2020, the DoD rolled out its AI Ethical Principles to tackle concerns surrounding
autonomous weapons, algorithmic bias in targeting systems, and the risk of unintended
escalation in conflicts. These guidelines were crafted with input from technologists, legal
experts, and military leaders to strike a balance between operational effectiveness and moral
responsibility. Several factors highlight why these principles are especially vital for military
AI applications. First, mistakes in defense systems can have devastating consequences,
potentially resulting in civilian casualties or escalating conflicts unintentionally. Second,
adversaries like China and Russia are advancing their military AI capabilities with fewer
ethical constraints, putting pressure on the U.S. to keep pace while upholding moral
standards. Lastly, public trust in military institutions hinges on the responsible use of
powerful technologies. Recent examples, such as the controversy surrounding Project
Maven, highlight how ethical concerns can derail critical programs if not addressed
proactively.

Core Principles Governing Military AI Development and Use

Responsible and Lawful Applications
The Department of Defense (DoD) requires that all AI systems adhere to current laws,
including the Geneva Conventions and U.S. regulations related to armed conflict. This
guideline ensures that AI tools are used to support lawful military goals and are never
employed for indiscriminate attacks. For example, targeting systems that utilize computer
vision algorithms go through thorough testing to ensure they can reliably differentiate
between combatants and civilians before they are put into action. Additionally, this principle
emphasizes the importance of human judgment in making lethal decisions. A great
illustration of this is the Air Force's Loyal Wingman program, where AI-driven drones can
suggest targets but must receive human approval before engaging.

Minimizing Bias and Ensuring Fairness
Military AI systems need to steer clear of unfair discrimination in their operations, especially
in areas like facial recognition, threat assessment, or resource distribution. The DoD has
rolled out comprehensive bias testing protocols after early trials uncovered some weaknesses
—one study found that certain image recognition algorithms were less effective on darker
skin tones, leading to redesigns of surveillance systems. Today, verification processes assess
AI performance across a range of demographics and scenarios to avoid skewed results that
could unfairly impact specific groups.
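To make the idea of demographic bias testing concrete, the sketch below shows one way per-group performance could be computed and compared. The record format, the choice of metrics, and the 5% disparity threshold are illustrative assumptions, not the DoD's actual protocol.

```python
# A minimal sketch of demographic bias testing for a recognition model.
# The record format and the disparity threshold are hypothetical.
from collections import defaultdict

def per_group_metrics(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, truth, pred in records:
        s = stats[group]
        if truth and pred:       s["tp"] += 1
        elif not truth and pred: s["fp"] += 1
        elif not truth:          s["tn"] += 1
        else:                    s["fn"] += 1
    report = {}
    for group, s in stats.items():
        total = sum(s.values())
        accuracy = (s["tp"] + s["tn"]) / total
        fpr = s["fp"] / max(1, s["fp"] + s["tn"])  # false positive rate
        report[group] = {"accuracy": accuracy, "false_positive_rate": fpr}
    return report

def flag_disparities(report, max_gap=0.05):
    """Flag groups whose accuracy trails the best group by more than max_gap."""
    best = max(r["accuracy"] for r in report.values())
    return [g for g, r in report.items() if best - r["accuracy"] > max_gap]
```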

Transparency and Explainability Requirements


Even though some AI operations are classified, the principle of transparency guarantees that
authorized personnel can comprehend and audit how these systems arrive at their
conclusions. Take the Navy's antisubmarine warfare AI, for instance; it provides confidence
scores and visual explanations for its detections instead of functioning as a "black box." This
transparency allows operators to determine whether they can trust the system's
recommendations in critical situations. The DoD has made significant investments in
explainable AI (XAI) research to enhance interpretability without compromising
performance.
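A minimal sketch of this "confidence score plus explanation" output pattern follows. The Detection class, the feature names, and the example values are hypothetical; the point is only how an explainable output differs from a black-box yes/no.

```python
# A hypothetical operator-facing detection output: a label, a confidence
# score, and the top features that drove the decision.
from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str                                    # e.g. "possible submarine contact"
    confidence: float                             # 0.0 - 1.0, the model's own score
    evidence: dict = field(default_factory=dict)  # feature -> contribution weight

    def operator_summary(self) -> str:
        top = sorted(self.evidence.items(), key=lambda kv: -kv[1])[:3]
        reasons = ", ".join(f"{name} ({weight:.0%})" for name, weight in top)
        return f"{self.label}: confidence {self.confidence:.0%}; driven by {reasons}"

# Rather than a bare yes/no, the operator sees why the system fired.
d = Detection("possible submarine contact", 0.82,
              {"acoustic signature match": 0.51,
               "depth profile": 0.23,
               "speed consistency": 0.08})
print(d.operator_summary())
```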

Reliability and Robustness Standards


Military AI needs to show it can perform reliably in real-world situations before it gets the
green light for deployment. The Pentagon's Test and Evaluation office puts these algorithms
through their paces with millions of simulated scenarios, including adversarial attacks
designed to trick or confuse the systems. Recent reports suggest that these evaluations have
identified and fixed vulnerabilities in 65% of the proposed AI systems before they hit the
field. The standards for reliability are particularly high for autonomous platforms, like the
Army's robotic combat vehicles, which must demonstrate they can safely navigate complex
urban settings.

Human Oversight and Governance


Every military AI system is equipped with clear human control mechanisms that match its
risk level. For logistics AI, this could mean having a human review any unusual
recommendations, while combat systems need direct approval for any lethal actions. The
principle of governability also mandates that there are built-in termination protocols—human
operators can immediately deactivate any autonomous system if it starts acting unpredictably.
These safety measures were clearly showcased during recent exercises, where AI-enabled drones successfully aborted missions when their communications were interrupted.
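The sketch below illustrates the two safeguards just described: an explicit human approval gate for lethal action, and an automatic abort when the operator link goes silent. The class, timeout value, and messages are invented for illustration and do not represent any actual DoD interface.

```python
# A minimal, hypothetical human-in-the-loop engagement gate with a
# comms-loss abort; not an actual DoD interface.
import time

COMMS_TIMEOUT_S = 5.0  # illustrative: abort if the operator link is silent this long

class EngagementGate:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Call whenever any message arrives from the human operator."""
        self.last_heartbeat = time.monotonic()

    def comms_alive(self):
        return time.monotonic() - self.last_heartbeat < COMMS_TIMEOUT_S

    def request_engagement(self, target_id, operator_approval):
        # Governability: losing the human link aborts the mission outright.
        if not self.comms_alive():
            return "ABORT: operator link lost"
        # Lethal action requires explicit, affirmative human approval.
        if not operator_approval:
            return f"HOLD: target {target_id} awaiting human decision"
        return f"ENGAGE: target {target_id} approved by operator"

gate = EngagementGate()
print(gate.request_engagement("T-042", operator_approval=False))  # HOLD
gate.heartbeat()
print(gate.request_engagement("T-042", operator_approval=True))   # ENGAGE
```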

Implementation Mechanisms for Ethical AI

Training and Education Programs
The Department of Defense has rolled out thorough training requirements for everyone
involved with military AI systems, from developers to users. All personnel working with AI
tools go through ethics modules that cover appropriate use cases and limitations. Specialized
courses at places like the Naval Postgraduate School teach engineers how to create systems
that adhere to ethical standards. Perhaps most importantly, training for commanders now
includes scenarios that challenge them to make judgment calls about when to override AI
recommendations.

Rigorous Testing and Evaluation Protocols


Before being put to use, military AI goes through a rigorous evaluation process guided by the
Responsible AI Testing and Evaluation Framework. This comprehensive, multi-step
approach includes:
• Laboratory testing in controlled settings.
• Simulation assessments that utilize millions of generated scenarios.
• Field tests in realistic, non-combat situations.
• Ongoing monitoring after deployment.
Recently, this thorough process caused a six-month delay in rolling out an Army logistics AI
because tests showed it could be misled into incorrectly allocating supplies. Such careful
scrutiny is designed to identify potential problems before they can affect real-world
operations.

Partnership Standards for Private Sector Collaborations


The Department of Defense (DoD) has set clear expectations for all contractors involved in
AI projects, requiring them to follow ethical principles through legally binding agreements.
In the wake of the Project Maven controversy, the department has tightened its oversight of
corporate partners, which includes:
• Mandatory ethics reviews for all AI deliverables
• Third-party audits of training data and algorithms
• Whistleblower protections for employees who raise concerns.
These steps have been crucial in fostering positive relationships with tech companies while
tackling ethical issues that have previously led to public backlash.

Persistent Challenges and Ongoing Debates
Even with these solid guidelines in place, there are still several unresolved issues that keep
the conversation alive within defense circles. The rapid pace of AI development often
outstrips the speed at which policies can be created, leaving gaps in governance. There's an
ongoing debate about whether the current principles are enough to handle emerging
technologies, especially when it comes to generative AI being used for disinformation. Some
experts believe that the guidelines don’t adequately limit certain autonomous functions, such
as electronic warfare systems that can disrupt enemy communications without any human
intervention.
International competition adds another layer of complexity. While the U.S. upholds strict
ethical standards, its adversaries do not face similar constraints. For instance, China's
military-civil fusion strategy has led to the creation of AI systems that reportedly lack
significant human oversight. This imbalance raises concerns about how to maintain ethical
standards while also keeping up with potential threats.

Future Directions for Ethical Military AI


As we look to the future, the Department of Defense is actively working on several initiatives
to enhance its ethical framework:

• Forming AI "red teams" to rigorously test systems for any vulnerabilities.
• Establishing certification standards for various levels of autonomy.
• Fostering international collaboration on military AI norms.
• Boosting transparency with unclassified reports detailing AI use cases.

Recently, the department established an AI Ethics Advisory Board to offer continuous guidance as technologies progress. This board, made up of civilian experts and military
leaders, convenes quarterly to discuss emerging challenges and suggest updates to policies.

The Debate Over Lethal Autonomous Weapons (LAWS) in the U.S. Context

The rise of lethal autonomous weapons systems (LAWS) – machines that can choose and
engage targets without any human input – has ignited a heated debate in the United States.
This discussion brings together military benefits and ethical dilemmas, leaving policymakers,
defense experts, tech innovators, and human rights advocates sharply divided on how the U.S. should navigate this new technology.

The Case for LAWS: Military Advantages Driving U.S. Development

Strategic Necessity

The Pentagon believes that autonomous weapons could offer significant benefits in future
conflicts:

1. Speed of Decision-Making: AI can analyze sensor data and react to threats in mere
milliseconds, which is much quicker than any human could manage. In wargames,
these autonomous systems have consistently outperformed human-operated ones in air
combat situations.
2. Operating in Denied Environments: LAWS can operate in areas where
communications are disrupted, such as deep behind enemy lines or underwater, a
capability recently showcased in tests of the Navy's Sea Hunter unmanned vessel.
3. Force Multiplication: Autonomous drones, like those in the Replicator initiative,
could empower smaller forces to take on larger opponents, such as China's People's
Liberation Army.

Current U.S. Systems Pushing Boundaries


While we haven't seen fully autonomous lethal weapons in action just yet, there are several
U.S. systems pushing the boundaries:

• The Counter-Electronics High Power Microwave Advanced Missile Project (CHAMP) is capable of autonomously identifying and disabling enemy electronics.

• Loyal Wingman drones can carry out defensive maneuvers and target acquisition all on their own.

• The Tactical Intelligence Targeting Access Node (TITAN) employs AI to suggest the best weapons for targets it detects.

Technological Leadership Argument

Supporters argue that if the U.S. doesn't advance in developing lethal autonomous weapons systems (LAWS), rivals like China and Russia could pull ahead significantly. China's Dark Sword drone and Russia's Marker robot program indicate that these competitors aren't held back by the same ethical dilemmas.

Ethical and Legal Concerns Driving Opposition

Accountability Gaps
Critics point out serious issues with autonomous weapons:

• There's no clear legal framework for assigning blame when these systems make
mistakes (for instance, if an AI drone mistakenly attacks civilians, who takes
responsibility – the programmer, the commander, or the manufacturer?).

• The 2023 Project Convergence exercise showed that AI systems can misinterpret
camouflage as threats, resulting in a 12% false positive rate in identifying targets.

Escalation Risks
Research suggests that LAWS could dangerously speed up conflicts:

• RAND Corporation simulations indicate that autonomous systems often escalate confrontations more quickly than human operators do.
• Without human emotions or a preservation instinct, these systems might resort to
more aggressive tactics.
Human Rights Violations
Organizations like Human Rights Watch argue that LAWS breach essential principles:
• Machines lack the human judgment necessary for adhering to International
Humanitarian Law (like distinction and proportionality).

• The Campaign to Stop Killer Robots highlights instances where facial recognition
errors could lead to wrongful autonomous attacks.

The U.S. Policy Landscape

Current DoD Position (2023 Directive on Autonomy)

The Pentagon is taking a balanced approach:
• It allows "semi-autonomous" weapons that can identify targets but still need a
human to give the go-ahead for strikes.

• It outright bans fully autonomous nuclear weapons, no exceptions.
• It mandates that all autonomous systems must have a "human disengagement" feature.

Legislative Actions
Congress is still at odds:

• The ROBOT Act (2023) aimed to put a pause on certain autonomous weapons but got
stuck in committee.
• The 2024 NDAA included some provisions for additional testing but didn’t impose
any outright bans.
• A few cities, like Boston and San Francisco, have enacted symbolic bans on
autonomous weapons.

International Stance
The U.S. is against comprehensive bans on lethal autonomous weapons systems (LAWS) at
the UN but does support "non-binding norms," which has created some friction with allies
like Austria, who are pushing for stricter regulations.

Emerging Battlefields Where LAWS Matter Most

Great Power Conflict Scenarios


• In the Taiwan Strait, autonomous swarms could play a crucial role in overpowering
Chinese defenses.

• In electronic warfare, AI is already autonomously jamming signals in Ukraine, and future systems might automatically choose which frequencies to target.

• When it comes to hypersonic missile defense, human reaction times just can’t keep
up, making autonomy necessary.

Asymmetric Warfare Dangers


• Terrorist groups could potentially hack or replicate inexpensive autonomous weapons,
as seen with modified Ukrainian drones.

• The Brookings Institution has raised concerns that LAWS could facilitate
assassinations and coups.

The Path Ahead: Key Questions Facing the U.S.

• Where do we draw the line? Should we allow autonomy for defensive systems like Iron Dome but not for offensive ones?

• Verification Challenges: How can we ensure that adversaries stick to any limits when
software can be easily concealed?
• Tech vs. Ethics Tradeoffs: Is it possible for the U.S. to compete with China while
upholding stricter ethical standards?
Recent developments suggest a middle ground—advancing LAWS capabilities while keeping
humans "in the loop" for life-and-death decisions. However, as AI technology progresses and
the pace of battle accelerates, this compromise might not hold up for long.

Chapter 4

AI in China’s Military Strategy


Artificial Intelligence and Autonomous Systems: The PLA's Technological Force Multiplier

China's Military-Civil Fusion (MCF) strategy is putting artificial intelligence (AI) and
autonomous systems right at the heart of its military modernization. By seamlessly weaving
commercial AI advancements into defense applications, the People's Liberation Army (PLA)
is quickly narrowing the tech gap with the United States while also honing its asymmetric
warfare capabilities.

Key Military Applications of AI Under MCF

1. Swarm Drone Warfare: The PLA is making significant strides in developing autonomous drone swarms that can overwhelm enemy air defenses. The CETC
Swarm Dragon system, which was tested in 2023, showcased the ability to coordinate
over 100 drones using AI-driven flocking algorithms that were originally designed for
civilian logistics. EHang, a Chinese tech company known for its passenger drones, has
even adapted its collision-avoidance AI for use in PLA swarm operations.
2. AI-Enhanced Decision Making: The PLA's Joint Operations Command System has
now incorporated commercial AI processors from Huawei and Cambricon to analyze
battlefield data in real-time. During recent exercises in the Taiwan Strait, these AI
systems were able to process satellite imagery 60% faster than human analysts,
helping to pinpoint simulated enemy positions.
3. Lethal Autonomous Weapons Systems (LAWS): Although China officially claims it
isn't developing "killer robots," its Sharp Claw unmanned ground vehicle and
Blowfish A2 armed drones are showing increasing levels of autonomy. These systems
utilize facial recognition AI from companies like Megvii and SenseTime, which have
been blacklisted by the U.S. due to their military connections.

Civilian Tech Absorption Mechanisms

• Venture Capital as Military Funding

The China Internet Investment Fund, supported by investors linked to the PLA, has
poured $3 billion into AI startups with dual-use potential since 2020.

• Academia-Military Research Complex


Institutions like the National University of Defense Technology are collaborating with
private companies—Baidu's autonomous driving AI is now being used to enhance
PLA unmanned convoys.
• Forced Technology Transfers
Foreign companies operating in China must often form joint ventures with
militaryaffiliated entities, as seen with Nvidia's restricted AI chip partnerships.

Strategic Implications
China's strides in AI under the MCF are presenting three major hurdles for U.S. defense
strategists:
1. Mass Overmatch – Swarm tactics have the potential to overwhelm conventional
defenses.
2. Decision Speed – AI capabilities allow for quicker OODA (Observe-Orient-Decide-Act) cycles.
3. Attribution Difficulty – Systems originating from civilian sources make arms control
verification more complex.
Recent military simulations indicate that PLA drone swarms could neutralize a U.S.
carrier group's defenses in just 15 minutes after engagement, highlighting the urgent need
for U.S. investments in counter-drone technology.

2. Quantum Technology: Safeguarding China's Military Communication Edge

• Quantum Communication Networks

The Micius satellite employs quantum key distribution (QKD) technology, crafted by the
Chinese Academy of Sciences (CAS), to secure PLA strategic communications with
encryption that is theoretically unbreakable. Ground stations link Beijing to nuclear missile silos in Xinjiang through a 4,600km quantum-secure network.
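How QKD yields a shared secret key can be illustrated with a toy, classical simulation of the BB84 protocol's basis-sifting step, shown below. This is a didactic sketch only: real systems like Micius encode bits in photon polarization, and eavesdropping detection (publicly comparing a sample of the sifted key) is not shown.

```python
# A toy, classical simulation of BB84 basis sifting, the step that gives
# both parties a shared secret key. Photon physics is abstracted away.
import random

def bb84_sift(n_bits=32, seed=0):
    rng = random.Random(seed)
    # Alice chooses random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    # Bob measures each incoming "photon" in his own randomly chosen basis.
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_results = [
        bit if a == b else rng.randint(0, 1)  # wrong basis gives a random outcome
        for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Sifting: both publicly compare bases and keep only the matching positions;
    # those positions form the shared secret key.
    return [r for r, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]

print(bb84_sift())  # roughly half of the transmitted bits survive sifting
```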

• Quantum Sensing for Anti-Stealth Warfare

The Jiuzhang quantum radar prototype, which was tested in 2023, successfully detected
simulated stealth aircraft at a range of 300km—tripling the performance of traditional radar
systems. The commercial company Origin Quantum provides essential components for this
technology.
• Talent Recruitment Pipeline

Since 2018, through the Thousand Talents Program, China has brought on board over 120
quantum physicists from U.S. and European laboratories, speeding up military quantum
applications by an estimated 5 to 7 years.

3. Aerospace and Hypersonics: Merging Civilian and Military Space


Commercial Launch Vehicles as Missile Testbeds
CAS Space, a spin-off from the Chinese Academy of Sciences, has carried out 11 suborbital
launches in 2023, utilizing technology that is directly relevant to hypersonic glide vehicles.
Their Lijian-1 rocket shares propulsion systems with the DF-17 missile.

Satellite Constellations for Military Reconnaissance

GalaxySpace, often seen as China's response to SpaceX, is rolling out over 1,000 low-earth
orbit satellites—many of which have been repurposed for the PLA's targeting and
communication needs. Their phased array antennas, initially designed for 5G networks, are
now being used to guide hypersonic missiles.

4. Semiconductors: Powering the PLA's AI Engine

SMIC's 7nm Breakthrough

In spite of U.S. sanctions, SMIC managed to produce 7nm chips in 2023 using ASML DUV
equipment—enough to fuel the PLA's AI inference systems. The Yangtze Memory 232-layer
NAND flash is now storing sensitive military information.

National Chip Fund Influence

Since 2019, the Big Fund II has poured $45 billion into the industry, with 60% of that going
to companies that supply contractors for the PLA, like AVIC and CETC.

China’s "Smart War" Plan: How the PLA Aims to Fight Future Battles with AI

China is developing a new approach to warfare called "Intelligentized Warfare." This concept
revolves around using computers and robots to make decisions and engage in battles at a
speed that outpaces human capabilities. It’s like giving their military a super-intelligent brain
that never tires.

What Does This Really Mean?

Picture video games where the computer commands the army—that's essentially what China is trying to create in reality. Their soldiers would collaborate with:

• Self-driving robot tanks.


• Swarms of drones that attack like a swarm of angry bees.
• Computer systems capable of hacking enemy networks in an instant.
• Missiles that can change course mid-flight to dodge incoming fire.
How Far Along Is This Plan?

China isn't just talking the talk; they're already walking the walk:

• In 2023, they successfully flew 1,000 small drones that could communicate with each
other and select their own targets.
• Their new robot tank, dubbed "Sharp Claw," can patrol borders for three days straight
without any human intervention.
• Advanced computers can now devise battle strategies in mere minutes instead of
taking hours.

Why Is China Doing This?

There are three main reasons:

• To outmatch stronger armies - While the U.S. has superior weaponry, China believes
that smart computers could give them the edge they need to win.

• Speed over humans - AI can identify and engage targets in just 0.1 seconds, which is
faster than a blink of an eye.
• To protect soldiers' lives - More robots mean fewer Chinese soldiers are put in harm's
way.

The Problems With This Idea

It's not without its flaws:


• Computers can make errors (for instance, in one test, drones mistakenly attacked the
wrong target).

• The U.S. is working to prevent China from acquiring the best computer chips.
• Many nations are concerned that robotic weapons capable of killing without human
oversight are too dangerous.

What's Next?

China is aiming to have its high-tech military ready by 2035. If they pull it off, the landscape
of warfare could change dramatically:

• Battles might wrap up in mere minutes instead of dragging on for days.


• Cyberattacks could paralyze entire cities before anyone even has a chance to react.
• Nations may find themselves needing robotic armies to fend off other robotic forces.

The Big Question

Should we permit wars where machines make life-or-death choices? As it stands, there are no
regulations governing this, and China is racing ahead of everyone else in developing these
technologies.

Simple Facts to Keep in Mind:

• China is testing more military robots than any other nation.


• Their drone swarms can easily outmatch conventional defenses.
• The U.S. is working hard to keep pace with its own robotic weaponry.
• Many experts are concerned that this could lead to wars starting more readily.

This "smart war" technology is advancing rapidly, whether we’re prepared for it or not. The
next decade will be crucial in determining whether humans maintain control over warfare or
if we hand the reins over to machines.

China's AI Military Revolution: How the Communist Party is Building Robot Soldiers, Smart Jets, and Autonomous Weapons


China is pouring massive resources into developing artificial intelligence (AI) for warfare
through several key government programs. These initiatives aim to give China's military an
unbeatable technological edge by creating weapons that can think, learn, and fight without
human intervention. Let's examine these groundbreaking projects in detail.

1. Next-Generation AI Development Plan: China's Roadmap to AI Dominance


(Launched: 2017 | Updated: 2022 | Budget: $150 Billion+)

Core Objectives:

• Make China the world leader in AI by 2030


• Develop autonomous weapons that outperform human-controlled systems
• Create "intelligentized" battle networks where machines coordinate attacks

Major Achievements So Far:


A. The AI Pilot Program
• In 2022, China's AI defeated human pilots in simulated dogfights 90% of the time.
• The system trained itself through 4 billion computer simulations of air battles.
• Can process battlefield information 100x faster than human pilots.

B. Autonomous Drone Swarms

• Successfully tested swarms of 1,000+ drones working together


• Drones can:
  o Divide attack targets among themselves.
  o Adapt flight patterns in real-time.
  o Continue mission even if 30% of the swarm is destroyed.

C. AI Command Centers

• Systems that analyze data from satellites, drones, and spies within seconds.

• During 2023 Taiwan Strait exercises, AI predicted enemy movements with 85%
accuracy.

2. Military-Civil Fusion: China's Secret Weapon Development Strategy

How It Works:

1. Civilian companies develop technology for commercial use.


2. Military adapts the same technology for weapons.
3. Government funding helps scale up production.
Real-World Examples:
Civilian Company | Original Product   | Military Adaptation
DJI              | Camera Drones      | Suicide Drones (Used in Ukraine)
Huawei           | 5G Networks        | Battlefield Communication Systems
Baidu            | Search Engine      | AI for Autonomous Tanks
SenseTime        | Facial Recognition | Target Identification for Missiles

Why This Strategy is Effective:


• Avoids international weapons bans (products look civilian).
• Uses China's massive tech industry for military gain.
• Develops weapons faster and cheaper than traditional methods.

Funding Sources:

• National Integrated Circuit Industry Investment Fund ($45 Billion).


• Local government military-civil fusion parks (50+ across China).
• Venture capital firms with PLA connections.

3. China's AI Fighter Jet: The Future of Aerial Combat

Technical Specifications:

• Name: Unknown (Western analysts call it "Loyal Wingman-China").

• Speed: Mach 1.8 (1,370 mph).
• Range: 1,200 miles without refueling.
• Weapons: 6 air-to-air missiles OR 2 large bombs.
• Special Feature: Learns from every mission to improve performance.

2023 Test Flight Results:

• Successfully identified and engaged 5 mock enemy targets.


• Flew complex maneuvers that would injure human pilots.
• Landed perfectly during heavy rain (hard for human pilots).

Advantages Over Manned Jets:

1. Cheaper - Costs 1/10th of a J-20 stealth fighter.


2. More Dangerous - Can take risks no human pilot would.
3. Always Ready - Doesn't need sleep or rest.

4. Challenges and Risks in China's AI Weapons Program

Technical Problems:

• Sensor Limitations: AI sometimes mistakes civilian vehicles for military targets.


• Communication Vulnerabilities: Jamming can disrupt drone swarms.
• Energy Demands: AI systems require huge amounts of power.

International Responses:
• U.S. Chip Bans: Blocking China's access to advanced AI processors.
• Allied Defences: Japan and Australia developing anti-drone lasers.

• Diplomatic Pressure: UN discussions about banning killer robots.

Ethical Concerns:

• What happens if AI starts a war by mistake?


• Can we trust machines to follow rules of war?
• Who is responsible when autonomous weapons fail?

5. The Future: China's AI Military in 2030

Planned Developments:

• 2025: First fully autonomous fighter squadron operational.


• 2027: AI generals capable of planning entire campaigns.
• 2030: 60% of PLA weapons systems to have AI capabilities.

Potential Game-Changers:

1. Hypersonic AI Missiles - Impossible to intercept.


2. Robot Soldier Squads - For urban combat.
3. Cyber Warfare AI - That can hack entire networks in minutes.

What Keeps Generals Awake at Night:


• The possibility of AI weapons becoming too smart to control.
• Enemy hackers taking over autonomous systems.
• Accidental wars started by malfunctioning AI.

How AI Helps China's Military Leaders Make War Decisions

China's military is harnessing the power of advanced computer programs to assist its generals
in warfare. These AI systems function like a team of tireless, super-efficient aides. Here’s a
closer look at what they’re capable of:
• Processing information at lightning speed: While a human might spend hours
poring over maps and reports, the AI can digest the same data in mere seconds. It’s
akin to the difference between someone reading a book word by word and someone
snapping a photo of the entire page and grasping it instantly.
• Predicting enemy movements: By analyzing thousands of historical battles, the AI
can anticipate where enemy forces might head next. In recent trials, it accurately
forecasted enemy actions about 80% of the time—far surpassing most human
generals.

• Recommending battle strategies: The AI can swiftly generate various attack plans,
outlining the advantages and disadvantages of each. During training exercises, these
computer-generated strategies often outperform those devised by human officers
alone.

China's Smart War Rooms: The AI Command Centers

China has established cutting-edge military bases where these AI systems play a crucial role in
army operations:

A. The All-Seeing Command Platform

Think of this as a massive video game control center that integrates everything:

• Monitors all friendly and enemy forces across land, sea, and air.
• Receives real-time updates from satellites, drones, and ground troops.
• Can coordinate attacks in minutes rather than hours.

B. The Target-Picking Computer

This AI acts like a lightning-fast military consultant that:

• Evaluates thousands of potential attack sites.
• Prioritizes the most critical targets.
• Determines the most effective methods to neutralize them.

In tests, it selects better targets than human strategists about 70% of the time.
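One simple way such prioritization could work is weighted multi-criteria scoring, sketched below. The targets, criteria, and weights are entirely invented for illustration; operational systems would use far richer models and many more factors.

```python
# A minimal sketch of weighted target scoring. All targets, criteria,
# and weights here are hypothetical.
TARGETS = [
    {"name": "radar site A",   "military_value": 0.9, "reachability": 0.7, "collateral_risk": 0.1},
    {"name": "supply depot B", "military_value": 0.6, "reachability": 0.9, "collateral_risk": 0.2},
    {"name": "bridge C",       "military_value": 0.5, "reachability": 0.8, "collateral_risk": 0.6},
]

WEIGHTS = {"military_value": 0.5, "reachability": 0.3, "collateral_risk": -0.4}

def score(target):
    # Higher value and reachability raise priority; collateral risk lowers it.
    return sum(WEIGHTS[k] * target[k] for k in WEIGHTS)

for t in sorted(TARGETS, key=score, reverse=True):
    print(f"{t['name']}: priority score {score(t):.2f}")
```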

C. The Supply Master


This system makes sure troops always have what they need:

• Knows exactly where every bullet and fuel tank is.
• Sends supplies before soldiers even ask for them.
• Reduced wasted supplies by nearly one-third in recent exercises.

3. Why China Wants Computers Helping to Fight Wars

China believes AI gives them three big advantages:
A. Speed Wins Battles

• AI reacts in 1/10th of a second - faster than you can blink.


• Humans need at least 3-5 seconds just to understand a threat.

B. Handling Information Overload
Today's battlefields produce unbelievable amounts of data:

• One drone mission collects more information than all the books in a large library.
• Humans get overwhelmed, but AI can sort through it all instantly.

C. Learning From Every Fight


Unlike humans who forget details, the AI:

• Remembers every battle ever fought.


• Never makes the same mistake twice.
• Gets smarter after each practice drill.

4. The Problems With Letting AI Help Fight Wars (Expanded View)

While China's military AI systems boast impressive capabilities, they also come with
significant risks and limitations that could lead to serious consequences in actual combat
scenarios. Let’s dive into these challenges a bit more:

A. AI Can Make Dangerous Errors


Current AI systems still struggle with basic judgment calls that humans handle with ease:

Target Confusion: In a 2023 simulation, an AI targeting system mistakenly identified a school playground as a missile launch site because the painted lines looked like military grid
coordinates. Thankfully, human officers caught the mistake, but in a real conflict, such errors
could endanger civilian lives.
Cultural Blind Spots: AI that’s primarily trained on Chinese military data sometimes
misinterprets foreign equipment. During joint exercises with Russian forces, Chinese AI
repeatedly labeled certain Russian tanks as "friendly" because their shapes didn’t match the
Western models it was familiar with.

Weather Challenges: Heavy rain or sandstorms can confuse the AI's sensors. For instance,
one drone swarm test in Xinjiang failed when the machines mistook blowing sand for enemy
jamming signals.

B. Threats from Hackers and Electronic Warfare

China's adversaries are finding ways to deceive or disable these AI systems:

Data Poisoning: An enemy could feed false information to the AI during peacetime, teaching
it incorrect patterns. For example, they could make ordinary civilian ships appear threatening
in the sensor data.

Signal Jamming: During 2024 naval drills, experimental U.S. electronic weapons
successfully:

• Made Chinese AI drones fly in circles by mimicking their control signals.


• Caused an AI radar system to report non-existent ghost ships.
• Tricked a smart missile into detonating prematurely.


Cyber Attacks: The AI systems themselves could be hacked. In 2023, an authorized penetration test by Chinese cybersecurity experts successfully:

o Made an AI command system "forget" about certain enemy units.
o Tricked a logistics AI into sending all supplies to the wrong bases.
o Changed target priority lists without officers noticing.

C. Human Officers Becoming Too Dependent


The PLA is discovering unexpected human problems as they use more AI:
• Skill Erosion: Younger officers who grew up with AI assistance show weaker
traditional planning skills. In a 2024 surprise test where AI systems were taken
offline:
o 60% of junior officers took 3x longer to make decisions.
o 25% made basic errors in reading maps or calculating supply needs.

o Only senior officers who trained before AI remained fully effective.

• Over-Trust: Psychological studies of PLA exercises show:


o Officers approve AI suggestions 85% of the time without changes.
o Even when the AI proposes risky attacks, humans often agree.
o Many simply assume "the computer must know better".

• Responsibility Confusion: After a 2023 training accident where AI-planned maneuvers caused friendly fire, investigators found:

o No one had double-checked the AI's calculations.
o Officers assumed someone else had verified the plan.
o The AI team and field commanders each thought the other was responsible.

D. Physical System Vulnerabilities


The high-tech nature of AI warfare creates new weak points:

• Power Dependence: A single AI command center uses as much electricity as a small town. If power fails:

o Backup generators only last 8 hours.
o Manual systems can't process information as quickly.
o Units might receive outdated or incomplete orders.

• Network Reliance: When communications are jammed (as happened in 2022 drills):

o AI systems lose access to 70% of their data sources.
o Drone swarms revert to simpler, less effective modes.
o Targeting accuracy drops by 50% or more.


• Maintenance Needs: The AI supercomputers require:

o Constant cooling (breaks down if temperature rises 5°C above normal).
o Daily software updates (missed updates cause glitches).
o Specialist technicians (only 1 qualified engineer per 3 systems).

E. Ethical and Legal Dangers


China's AI weapons are creating international concerns:

• Civilian Harm Risk: Autonomous systems might violate laws of war by:
  o Not properly distinguishing soldiers from civilians.
  o Using disproportionate force (like leveling a whole building to kill one enemy).
  o Continuing attacks even when the situation changes.
• Accountability Gaps: If an AI weapon commits a war crime:
  o Is the programmer responsible? The general? The AI itself?
  o Current Chinese law has no clear answers.
  o International courts are struggling with these questions too.
• Escalation Dangers: AI systems could accidentally start wars by:
  o Misreading training exercises as real attacks.
  o Responding too aggressively to minor incidents.
  o Getting caught in rapid "machine vs machine" combat loops.

China's Drone Swarms: The Future of Warfare


China is stepping up its game with a chilling new weapon—drone swarms. We're not talking
about just a few drones here; imagine hundreds or even thousands of tiny, intelligent drones
flying in unison, much like a swarm of bees. They communicate with one another to
outmaneuver and overwhelm their targets. This cutting-edge technology was on full display
at China’s Zhuhai Airshow, where these drone swarms executed coordinated attacks, making
it clear that this isn’t just a concept from a sci-fi movie—it’s a reality that’s set to change the
landscape of future warfare.

What Are Drone Swarms?


Drone swarms consist of small, unmanned aircraft that operate together as a cohesive unit.
Unlike traditional drones that are managed individually, these drones leverage artificial
intelligence (AI) to synchronize their movements, exchange information, and engage targets
autonomously, all without the need for constant human oversight.

How They Work:


• AI Brain: A central computer or one of the drones acts as the "leader," making
decisions for the group.
• Self-Organization: If some drones are destroyed, the rest automatically adjust their formation (see the sketch below).
• Adaptability: They can change tactics mid-mission, like switching from surveillance to attack mode.
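The self-organization described above is often explained through "boids"-style flocking rules, in which each drone follows only local steering rules and the formation re-closes automatically when members are lost. The following is a minimal sketch under those assumptions, with illustrative parameters rather than any real system's values.

```python
# A minimal boids-style flocking sketch: purely local rules, so the
# swarm re-forms when members are lost. Parameters are illustrative.
import math, random

class Drone:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(swarm, cohesion=0.01, separation=0.05, min_dist=2.0):
    for d in swarm:
        others = [o for o in swarm if o is not d]
        if not others:
            continue
        # Cohesion: steer gently toward the centroid of surviving neighbors.
        cx = sum(o.x for o in others) / len(others)
        cy = sum(o.y for o in others) / len(others)
        d.vx += cohesion * (cx - d.x)
        d.vy += cohesion * (cy - d.y)
        # Separation: push away from any neighbor that is too close.
        for o in others:
            dist = math.hypot(o.x - d.x, o.y - d.y)
            if 0 < dist < min_dist:
                d.vx += separation * (d.x - o.x) / dist
                d.vy += separation * (d.y - o.y) / dist
        d.x += d.vx
        d.y += d.vy

swarm = [Drone(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(20)]
for _ in range(100):
    step(swarm)
del swarm[:6]           # "losing" drones requires no replanning step:
for _ in range(100):    # the same local rules re-close the formation
    step(swarm)
```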

Why They’re Dangerous:

• Too Many to Stop: Shooting down a few drones doesn’t matter—hundreds more
keep coming.
• Cheap and Deadly: Each drone costs much less than a missile, making them cost-effective weapons.

• Hard to Detect: Small size and low-altitude flight make them difficult for radar to
track.

China’s Drone Swarms: What We Saw at Zhuhai Airshow


At the 2022 Zhuhai Airshow, China displayed its latest drone swarm technology, proving it
is far ahead of many other countries.
Key Systems Shown:

1. CETC Swarm Dragon


a. Number of Drones: 1,000+ in a single swarm.
b. Range: 100+ miles
c. Capabilities:
i. Can overwhelm air defenses by flying in unpredictable patterns.
ii. Some carry explosives for suicide attacks.
iii. Others act as scouts, guiding missiles to targets.

2. FH-97 "Loyal Wingman" Drones


a. Designed to work with manned fighter jets.
b. Can carry missiles or electronic jammers.
c. AI allows them to protect human-piloted planes by intercepting enemy missiles.
3. Small Suicide Drones
a. Some are as small as a bird, making them nearly invisible.
b. Can hover over a target before diving and exploding.

Real-World Testing:

• In 2023 exercises, a Chinese drone swarm successfully:

o Disabled a mock enemy radar station by overloading its systems.
o Confused air defenses by mimicking the flight patterns of real jets.
o Adapted when 30% of the drones were "shot down" in the simulation.

Why Drone Swarms Are a Game-Changer for China?


China sees drone swarms as a way to defeat stronger enemies (like the U.S.) without needing
expensive jets or missiles.
Military Advantages:

1. Overwhelm Defenses
a. Even advanced systems like the U.S. Patriot missiles can’t stop 1,000 drones
at once.
b. In tests, drone swarms have slipped past 90% of traditional air defenses.
2. Cheaper Than Traditional Weapons
a. A single fighter jet costs $100 million+.
b. A swarm of 1,000 drones might cost $10 million total.
3. Flexible Missions
a. Surveillance: Spy on enemy troops over a huge area.
b. Attack: Kamikaze strikes on radars, tanks, or ships.
c. Electronic Warfare: Jam enemy communications.

The Risks and Problems with Drone Swarms

While powerful, drone swarms aren’t perfect—they have weaknesses.


1. Vulnerable to Electronic Attacks

• If hackers or jammers disrupt their communications, they can become useless.


• U.S. is testing microwave weapons that can fry multiple drones at once.

2. Weather Limitations

• Strong winds or heavy rain can ground small drones.


• Sandstorms confuse their sensors.

3. Ethical Concerns

• Could be used for mass attacks on civilians.


• No international laws yet control their use.

How the U.S. and Other Countries Are Responding


The U.S., Russia, and NATO are all racing to build defenses against drone swarms.

U.S. Countermeasures:

• Lasers & Microwave Guns: To shoot down many drones quickly.


• AI Defenses: Systems that predict swarm movements and intercept them.
• Drone vs. Drone: Using smaller drones to crash into enemy ones.

China’s Next Steps:

• Larger Swarms: Testing groups of 10,000+ drones.


• Smarter AI: Drones that learn from each battle.
• Mixed Swarms: Some drones attack, while others jam signals or spy.

The Future: Are Drone Swarms the End of Traditional Warfare?

China’s drone swarms could change how wars are fought:

• Faster battles – Swarms strike in minutes, not hours.


• Fewer human soldiers at risk – Machines do the dangerous work.
• Harder to defend against – No military is fully prepared yet.

But major questions remain:

• Can they be controlled? What if they malfunction?

• Will they make wars easier to start? Cheap drones mean more conflicts?
• Who is accountable if they attack civilians?

Final Thought:

China’s drone swarms are no longer just experiments—they are real weapons that could
dominate future battlefields. The U.S. and allies must develop better defenses, or risk being
overwhelmed in the next major war.

China’s Drone Swarms in a Taiwan Conflict Scenario: A Looming Threat

If China were to launch an attack on Taiwan, it's likely that swarms of drones would be a key
part of their invasion plan. These AI-driven drone fleets could easily outmatch Taiwan’s
defenses in ways that traditional weapons simply can’t, creating a truly alarming situation for
those trying to defend the island. Let’s explore how such an attack could unfold and why it
poses such a significant threat.

Phase 1: The First Wave – Electronic Warfare and Surveillance Drones

Before any troops land, China would likely deploy hundreds of small, stealthy drones to blind
and confuse Taiwan’s military.

How It Would Work:

• Jamming Drones: These drones would hover around Taiwanese radar stations,
sending out strong radio signals to mess with communications and early warning
systems.

• Decoy Drones: Bigger, missile-shaped drones would imitate fighter jets, tricking
Taiwan into wasting their pricey anti-air missiles.

• Spy Drones: Small drones, some no bigger than insects, would scout out defenses and identify targets for future attacks.

Why It’s Effective:

• Taiwan’s air defenses would be overloaded, unsure which targets are real.
• Radar systems could be disabled before the real attack even begins.

Phase 2: The Swarm Attack – Saturation Strikes on Key Defenses

Once Taiwan’s sensors are jammed, the next wave would strike military bases, missile sites,
and command centers.
Tactics China Might Use:

1. Kamikaze Drone Strikes


a. Hundreds of explosive-laden drones (like the CH-901) would crash into:
i. Radar stations
ii. Anti-ship missile launchers
iii. Communication towers

b. Each drone costs just $15,000—cheaper than Taiwan’s defensive missiles.


2. Swarm Overload Tactics
a. A mix of 1,000+ drones flying in unpredictable patterns would:
i. Exhaust Taiwan’s missile supplies
ii. Force defenders to reveal hidden gun
positions
3. Follow-Up Missile Strikes
a. Once drone swarms weaken defences, ballistic missiles would hit larger targets.

Taiwan’s Biggest Problem:

• The U.S. estimates Taiwan’s air defences could be saturated in under 30 minutes by a
full-scale swarm attack.
• Even if 90% of drones are shot down, the remaining 10% could still destroy critical
infrastructure.

Phase 3: Invasion Support – Protecting Troops and Landing Zones


If China were to launch an amphibious assault, drone swarms would play a crucial role in providing cover.

How Drones Would Help:

• Protecting Landing Ships: Drones equipped with machine guns or grenades could
effectively suppress Taiwanese forces stationed on the beaches.

• Resupply Drones: Autonomous helicopters would be tasked with delivering ammunition to Chinese troops on the ground.

• Suicide Drones: These would actively seek out Taiwanese special forces attempting
to disrupt the invasion.

Why This Is Terrifying for Taiwan:
• While human soldiers need rest, drones can operate around the clock.
• Taiwan’s urban defenses could be overwhelmed by swarms of tiny drones on the hunt
for resistance fighters

Can Taiwan Defend Against This?

Current Weaknesses

Right now, Taiwan—and even the U.S.—is facing a tough battle against large drone swarms.
Biggest Challenges:

• Not Enough Anti-Drone Weapons.


• Taiwan has a few machine guns and jammers, but they just don’t have enough to
handle over 1,000 drones at once.

• The U.S. is hurrying to deploy microwave weapons that can take out multiple drones
in one go.

AI Outsmarts Humans

• Chinese drones are getting smarter with each attack—if one strategy doesn’t work, the
next wave quickly adapts.

• Taiwan’s human operators can’t keep up with that speed.


Swarm Tactics Are Hard to Stop

• Taking down drones one by one is simply too slow.


• While lasers and electronic warfare are helpful, they’re still not quite there yet.

How the U.S. Could Intervene (And How China Plans for It)

If the U.S. steps in to support Taiwan, it’s likely that China’s drone swarms would also set
their sights on American forces.
China’s Anti-U.S. Drone Strategies:

• Drone Submarines: These underwater drones could launch surprise attacks on U.S.
ships.
• Swarm vs. Aircraft Carrier: A massive swarm of drones could potentially breach a
carrier’s defenses, paving the way for missiles to strike.

• Cyber-Drones: Some drones might be equipped with hacking tools designed to disrupt
U.S. communications.
U.S. Countermeasures in the Works:

• Drone-Killing Lasers (Navy tests indicate they can take down dozens of drones
every minute).
• AI-Enhanced Air Defenses that can anticipate swarm movements.
• Drone vs. Drone Combat – The U.S. is currently testing its own swarms to counter China's.

The Worst-Case Scenario: A Fully Automated War

If drone swarms become too advanced, wars could be fought almost entirely by machines,
with humans just giving the first order.

Possible Consequences:

• Faster Escalation: AI-controlled swarms might attack before diplomats can stop a war.
• More Destruction: Cheap drones mean more attacks on cities.
• Who’s to Blame? If a drone swarm kills civilians, will China take responsibility?

The Case of China and the Great Firewall


AI and the Great Firewall
The Great Firewall of China stands as the largest and most advanced internet censorship
system in the world. It effectively blocks foreign websites, filters out specific keywords, and
limits VPN usage to keep citizens from accessing uncensored information. While it used to
rely on manual and static methods, the system has now embraced cutting-edge AI technology
to enhance its speed and flexibility (Creemers, 2017). With AI-driven natural language
processing (NLP), the firewall can decode complex online conversations and identify
subversive content, even when it's cleverly disguised or metaphorical. For instance, machine
learning models sift through massive amounts of social media data to forecast and block posts
that might express dissent (Qin, Strömberg, & Wu, 2017). Additionally, AI can spot VPN
usage by analyzing behavioral traffic patterns, effectively shutting down traditional ways to
bypass restrictions (Roberts, 2020). This integration of AI turns the Great Firewall into a self-evolving censorship machine, one that adjusts in real time to new online tactics and speech trends, making state surveillance more invasive and efficient than ever.
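As a rough illustration of behavioral traffic classification, the sketch below trains a tiny decision tree on invented per-connection features. The features, training data, and labels are assumptions for demonstration; production censorship systems reportedly use far larger feature sets and models.

```python
# A minimal sketch of traffic-pattern classification. Features, data,
# and labels are invented; this only illustrates the technique class.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-connection features:
# [avg packet size (bytes), inter-packet interval (ms), fraction of encrypted payload]
X_train = [
    [1400, 10, 0.99],   # VPN-like: uniform large packets, fully encrypted
    [1350, 12, 0.98],
    [ 600, 80, 0.40],   # ordinary browsing: mixed sizes, partial encryption
    [ 300, 150, 0.30],
]
y_train = [1, 1, 0, 0]  # 1 = flagged as VPN-like, 0 = ordinary traffic

clf = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

new_flow = [[1380, 11, 0.99]]
print("flagged" if clf.predict(new_flow)[0] else "not flagged")
```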

AI in Military Cyber and Electronic Warfare: China's AI strategy reaches far beyond civilian
uses. The PLA’s Strategic Support Force (SSF) is responsible for executing integrated
operations in cyber, space, and electronic warfare. AI plays a crucial role in their vision of
“intelligentized warfare”—a concept where data processing, autonomous systems, and
decision-making algorithms are at the forefront of military operations (Kania, 2019).

Key areas of application include:

• Signal intelligence and interception: AI tools streamline the classification and analysis
of intercepted signals, enabling quicker threat assessments.

• Electronic jamming and spoofing: AI enhances electronic warfare by allowing jammers to adjust in real-time based on enemy reactions (Medeiros, 2019).

• Cyberattack planning: Algorithms simulate potential attack outcomes, assisting military planners in selecting the most effective cyber strategies.

• Information warfare: Generative AI can be used to create propaganda, fake news, and
deepfakes for psychological operations targeting domestic and foreign audiences
(Weedon et al., 2017).

The combination of these capabilities equips China with a multi-domain AI warfare infrastructure, enabling asymmetric operations that can disrupt opponents without conventional military conflict.

Strategic and Ethical Implications

The fusion of AI with cyber and electronic warfare capabilities raises significant concerns
globally:

• Authoritarian AI: China's model of digital governance—marked by surveillance, censorship, and behavior scoring—is increasingly being exported to other authoritarian regimes (Feldstein, 2019).

• Cyber escalation risks: The use of AI in cyber operations introduces unpredictability.
AI-generated attacks or misinformation could spiral into large-scale conflict before
human decision-makers intervene.

• Erosion of global norms: As China advances its AI systems for digital warfare and
internal control, international norms on surveillance, digital rights, and cyber conduct
face increasing pressure.

• Disinformation threats: Generative AI can produce convincing fake audio, video, and text, making it a potent tool for disinformation warfare—both internally and abroad (Tucker, 2019).

In the absence of international regulation, these developments may accelerate a global AI arms
race in cyberspace, where traditional rules of engagement no longer apply.

AI in Hypersonic Missile Guidance

What Are Hypersonic Missiles?


Hypersonic missiles are different from traditional ballistic missiles in several important ways:

1. Speed: Hypersonic weapons fly at incredible speeds—Mach 5 and above—allowing them to reach distant targets in minutes (Sayler, 2021).
2. Maneuverability: Unlike traditional missiles that follow a fixed path, hypersonic
missiles can change direction mid-flight to dodge defenses.
3. Low altitude: These missiles often fly at lower altitudes, making them harder to
detect by radar systems.

Because of these features, hypersonic missiles are viewed as a game-changer in modern warfare.

How AI Supports Hypersonic Missiles

Hypersonic missiles move so fast that humans cannot control them in real time. That’s why
they rely heavily on AI. AI systems can:

1. Make quick decisions using real-time data,
2. Adjust the flight path instantly if needed,
3. Identify and select targets during the mission.

Key AI functions in hypersonic missiles include:

1. Real-Time Navigation: AI uses satellite data, sensors, and environment information to keep the missile on course, even when conditions like wind or enemy jamming change (Lin & Singer, 2019).
2. Autonomous Targeting: AI helps the missile identify and lock onto targets during flight. If the original target moves or is destroyed, the AI can choose another.
3. Threat Avoidance: When the missile detects radar signals or incoming interceptors, AI helps it make fast adjustments to avoid being shot down (Kania, 2019).

AI makes these missiles much more intelligent, adaptable, and dangerous than traditional
weapons.

China’s Use of AI in Hypersonic Missile Programs

China has rapidly advanced its hypersonic weapons development. One of its most well-known
hypersonic systems is the DF-ZF hypersonic glide vehicle, which can be launched from a
regular missile and glide toward a target at high speed. Reports also suggest China is working
on hypersonic cruise missiles using air-breathing engines, which are faster and more
maneuverable (Acton, 2021).
China uses AI in several ways in its hypersonic missile programs:

1. Simulation and Flight Optimization: Chinese engineers are leveraging AI to explore countless flight paths in simulations, allowing the missile to choose the most efficient and safest route (Kania, 2019).
2. AI-based Guidance Systems: The missile is trained through machine learning to
comprehend its environment and adapt to new information while in flight (Sayler,
2021).

3. Sensor Fusion: AI integrates data from various sensors—such as radar, infrared, and
GPS—to form a comprehensive view of the missile’s surroundings. This enables it to
“think” and react during its mission.

In 2021, China reportedly tested a hypersonic glide vehicle that was launched into low-Earth
orbit and then descended to hit a target. Experts suggest that the system utilized AI to
navigate over long distances and avoid detection (Acton, 2021). This development surprised
U.S. military officials, as it indicates that China might be leading the way in hypersonic AI-guided weaponry.

China’s AI-enhanced hypersonic missiles have serious implications:

1. Strategic Advantage: With its advanced and speedy missiles, China can hit U.S.
military bases, aircraft carriers, or even cities in just minutes—this gives them a
significant edge in any potential conflict.
2. Defense Limitations: Current missile defense systems like THAAD and Patriot aren't
equipped to handle the agile, high-speed hypersonic missiles (Sayler, 2021).
3. Rising Tensions: This new capability is ramping up tensions between China and
other global powers, particularly the U.S., which is also in a race to develop its own
hypersonic and AI-driven systems.

Ethical and Security Concerns

The integration of AI in weaponry brings up serious issues:

1. Loss of Human Control: When AI takes charge of targeting and navigation, it diminishes human oversight. A single mistake or system glitch could result in catastrophic consequences (Scharre, 2018).
2. Accidental War: AI-guided weapons might respond to incorrect data or misread signals, potentially igniting a conflict by accident.
3. Lack of Regulation: There are currently no robust international guidelines governing the use of AI in weaponry. This creates a precarious situation as nations hurry to create and implement these systems (Feldstein, 2019).

Lack of Transparency in the PLA’s Use of AI: A Global Security Concern

The People’s Liberation Army (PLA) of China is rapidly emerging as one of the most
technologically advanced military forces in the world. They're pouring significant resources
into Artificial Intelligence (AI) to enhance their warfare capabilities. While these advancements
could bolster national defense, the opacity surrounding the PLA's development and
application of AI has sparked serious concerns globally. In democratic nations, discussions
about military AI advancements are often held publicly, allowing experts and citizens to
voice their questions and worries. On the flip side, China's approach is shrouded in secrecy,
tightly controlled by the Chinese Communist Party (CCP). This lack of transparency poses
risks not just for China, but for the entire globe (Allen, 2019).

How the PLA Is Using AI

China has set its sights on becoming the world leader in AI by 2030, and the PLA plays a
crucial role in that ambition. Some of the military AI initiatives in China include:

1. Autonomous Weapons: Think drones that can operate and engage in combat without
human pilots.

2. Smart Surveillance: Leveraging AI to recognize faces, monitor individuals, and


anticipate “suspicious” activities.

3. Cyber Warfare: Employing AI to uncover vulnerabilities in enemy computer


systems or to safeguard its own networks.

4. AI for Decision-Making: Utilizing big data and machine learning to inform military
strategies at a pace that outstrips human capabilities.

These technologies are being developed through a mix of state-funded research institutions,
universities, and private tech companies, many of which collaborate closely with the
government and military under the "civil-military fusion" policy (Kania, 2019; Feldstein,
2019).

Why China Hides Its Military AI Projects

There are several strategic and political reasons behind China’s secrecy:
1. Strategic Secrecy: Gaining a military edge often hinges on the element of surprise.
If other nations are fully aware of the weapons or technologies China is working on,
they can easily develop countermeasures. This is why secrecy plays a crucial role in
safeguarding military strategies and preserving a psychological advantage (Allen,
2019).
2. Internal Political Control: The Chinese Communist Party (CCP) exercises strict
control over public information. Disclosing too much about autonomous weapons,
surveillance systems, or cyber warfare could ignite ethical discussions or protests—
something the Chinese government is keen to avoid (Feldstein, 2019).
3. Propaganda and Power Projection: The People’s Liberation Army (PLA) might
exaggerate or conceal certain technologies to shape global perceptions. They may
boast about advancements that aren’t fully realized or keep failures under wraps to
uphold the image of a formidable and advanced military (Binnendijk et al., 2020).

The Dangers of Secrecy in Military AI

The lack of transparency regarding the PLA's use of AI poses significant risks to global peace
and stability:

1. Accidental Escalation: If an AI-operated weapon is deployed and no one knows who is behind it or the reasoning, other nations might retaliate, assuming it was a deliberate attack (Scharre, 2018).

2. Arms Race: The U.S. and other countries could ramp up their own AI weapon
initiatives out of fear, triggering a global arms race where no one can afford to hit the
brakes—even if it becomes perilous (Sayler, 2021).
3. Ethical and Legal Concerns: Without transparency, it’s impossible to determine
whether China’s AI weapons comply with international law—like the principle that
only humans should make decisions about using lethal force (Scharre, 2018).
4. No Global Accountability: If the PLA employs AI for surveillance or cyberattacks,
and there’s no way to trace or question it, holding anyone accountable becomes a
challenge, undermining the global rules-based order (Allen, 2019).

How Other Countries Handle AI Transparency

Countries like the United States, the United Kingdom, and members of the European Union
have more defined frameworks for discussing military AI. Their governments release reports,
encourage public discussions, and allow independent researchers to evaluate the ethics
surrounding military AI initiatives. For instance:

• The U.S. Department of Defense has laid out AI principles that mandate human
oversight over lethal systems.

• NATO is currently exploring how to ensure that AI applications align with international humanitarian law (Binnendijk et al., 2020).

While these systems aren't flawless, they do help alleviate concerns and foster international
trust. On the flip side, China’s People’s Liberation Army (PLA) operates without independent
oversight, with AI development entirely under the control of the Chinese Communist Party
(CCP), which stifles public or academic critique (Feldstein, 2019).

International Reactions and Policy Recommendations

Countries and global organizations are increasingly vocal about the dangers posed by
non-transparent military AI programs:

• The United Nations Group of Governmental Experts (GGE) on Lethal Autonomous Weapons has urged for clear regulations and human accountability.
• The U.S., Japan, and Australia have highlighted the necessity of "responsible AI use"
in military applications.

• Think tanks and NGOs are calling on China to engage in global discussions and to
embrace transparency and verification measures (Allen, 2019).

What Should Be Done?

1. China should join global AI treaties: Ongoing discussions are taking place to either
ban or regulate autonomous weapons. It’s crucial for China to be involved in these
talks to help establish common guidelines.

2. Implement international audits: Similar to nuclear weapons inspections, international
teams could evaluate AI systems to ensure safety and control.
3. Set ‘Red Lines’ for AI: China and other major powers could agree to never develop
AI systems that function without human oversight—especially in critical areas like
nuclear systems or life-and-death situations.

International Concerns Over AI-Enabled Surveillance in China

One of the most frequently discussed countries in this debate is China. The Chinese
government has developed one of the globe's most sophisticated and extensive AI
surveillance systems. These systems serve not just for public safety but also for keeping an
eye on political dissent, religious groups, and ethnic minorities. Consequently, numerous
governments, human rights organizations, and international experts have voiced their
concerns about China's approach to AI surveillance.

What Is AI-Enabled Surveillance?

AI-enabled surveillance refers to the use of technology to automatically gather and analyze
information about individuals. This encompasses:

• Facial recognition: Identifying individuals through cameras.
• Voice recognition: Associating voices with specific people.
• Behavior analysis: Anticipating actions based on observed patterns.
• Location tracking: Utilizing GPS, phone data, or street cameras to monitor people's
movements.
In China, these technologies are integrated into vast networks of smart cameras and AI
systems. They are deployed in cities, public transportation, schools, and workplaces, making
it nearly impossible for citizens to escape being observed.
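
To make this concrete, here is a minimal sketch of how watchlist matching on face embeddings works in principle. It assumes a separate neural network has already converted each face image into a numeric vector; the names, threshold, and data structures are illustrative, not details of any deployed Chinese system.

```python
# Illustrative sketch of watchlist matching on face embeddings; the
# embedding model, threshold, and identities are assumptions for clarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Return the best-matching identity above the threshold, else None."""
    best_id, best_score = None, threshold
    for identity, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Example: one camera frame's embedding compared against stored identities.
watchlist = {"person_a": np.random.rand(128), "person_b": np.random.rand(128)}
print(match_watchlist(np.random.rand(128), watchlist))
```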

China’s AI Surveillance in Practice

The Chinese government employs AI surveillance for various reasons:

• Public Security: AI-equipped cameras are utilized to apprehend criminals and maintain order in busy areas.

• Social Control: The system monitors political protests and online comments that
criticize the government.

• Ethnic Monitoring: In regions like Xinjiang, AI tools are used to track the movements
and behaviors of Uyghur Muslims, often without any transparent legal framework
(Feldstein, 2019).

Many cities in China are under the watchful eye of the “Skynet” system, which connects
millions of cameras to facial recognition software. Additionally, the Social Credit System
assigns scores to citizens based on their behavior, with AI keeping tabs on any rule-breaking
or non-compliant actions.

International Concerns and Criticism

Countries and international organizations have voiced numerous concerns regarding China’s
AI surveillance:

1. Privacy Violations

In most democracies, the right to privacy is a fundamental belief. However, in China, there’s
no clear law that safeguards personal data from government exploitation. AI systems gather
vast amounts of data from citizens, often without their knowledge or consent (Feldstein,
2019).

2. Human Rights Abuses

In Xinjiang, AI plays a crucial role in a system designed to detain Uyghur Muslims, subjecting them to constant monitoring and restricting their movements and religious practices. The United Nations and various human rights organizations have labeled this as a form of digital repression (HRW, 2019).
3. Exporting Surveillance Technology

China isn’t just applying this technology domestically; it’s also selling or donating AI
surveillance tools to countries across Asia, Africa, Latin America, and the Middle East. These
tools frequently end up in the hands of authoritarian regimes, which may use them to stifle
political dissent (Feldstein, 2019).

4. Lack of Transparency and Oversight

In China, there’s no room for public discussion, legal challenges, or independent media
scrutiny regarding the use of surveillance systems. This lack of transparency means there’s no
straightforward way to hold the government accountable if individuals are wrongly targeted
or mistreated (Mozur, 2019).

How the World Is Responding

Several governments and organizations are taking action to counter China’s surveillance
practices:

1. U.S. Sanctions: The U.S. has imposed export restrictions on Chinese tech companies
involved in surveillance, such as Hikvision and SenseTime (U.S. Department of
Commerce, 2021).
2. Human Rights Reports: Organizations like Human Rights Watch and Amnesty
International consistently release reports to raise awareness and pressure China to
adhere to global standards.
3. Technology Bans: Some nations are banning or restricting the use of Chinese
surveillance products in public security systems due to security and ethical concerns.
4. International Regulations: The UN and EU are discussing ways to develop global
guidelines for AI use in surveillance, with a focus on human rights and transparency.
Why Are Global Rules Needed?

AI surveillance is spreading fast, and China is leading the way. If no rules are made, other
countries might copy these systems, creating a world where governments watch everyone all
the time. This would be a serious threat to:

• Freedom of speech
• Freedom of movement
• Freedom of religion
International agreements are urgently needed to define how AI can be used for surveillance
— without violating basic rights.

Chapter 5

The Two Giants: Unpacking the USA–China Rivalry

The Future of War: USA and China’s Race for AI Supremacy


1. The United States: Leadership through Innovation and Military Integration

The United States has established itself as a powerhouse in technological innovation, especially when it comes to artificial intelligence (AI) in military contexts. With a staggering
defense budget of $886 billion for 2024 (Congressional Budget Office, 2023), the U.S. has
effectively harnessed both public and private sectors to speed up AI advancements. The
Department of Defense (DoD) has rolled out several significant AI initiatives through the
Joint Artificial Intelligence Center (JAIC), which is now part of the Chief Digital and
Artificial Intelligence Office (CDAO). These organizations are dedicated to weaving AI into
various aspects of military operations, including battlefield strategies, logistics, cybersecurity,
and autonomous weapon systems.

In 2021, General Paul Nakasone, who was then leading U.S. Cyber Command and the NSA,
pointed out that “AI and machine learning are indispensable tools for enhancing national
security and deterring adversaries in cyberspace” (Nakasone, 2021). DARPA (Defense
Advanced Research Projects Agency), renowned for its groundbreaking military
technologies, has poured significant resources into AI-focused projects like the “AI Next”
campaign, which emphasizes contextual reasoning, human-AI collaboration, and explainable
AI. These investments are aimed at ensuring the U.S. maintains its edge on the battlefield and
strategic front.

Moreover, major American tech companies—such as Google, Microsoft, Palantir, and Amazon—play crucial roles as partners in AI research and military applications. Despite
facing some ethical concerns, these partnerships have produced robust data processing tools,
predictive analytics systems, and real-time decision-making platforms (Kania, 2019). A
notable example of the U.S.'s practical use of AI in military intelligence is Project Maven, a
DoD initiative that employs AI for image recognition in drone footage.

2. China: Strategic Ambition and Military-Civil Fusion

China has really stepped up as a major player in AI warfare, thanks to a strategic, long-term
approach led by the government. President Xi Jinping has called AI a “key driving force”
behind China’s modernization, emphasizing that “whoever controls AI will control the future
of global development and security” (Xi, 2018). The “Next Generation Artificial Intelligence
Development Plan,” which came out in 2017, sets an ambitious goal for China to become the
world leader in AI by 2030, with significant implications for military use.

A key part of China’s strategy is the concept of Military-Civil Fusion (MCF), which allows
breakthroughs in civilian AI to be quickly adapted for military purposes. This means the
People’s Liberation Army (PLA) can take advantage of advancements from commercial AI
companies like Baidu, Huawei, SenseTime, and iFlytek. These companies are at the forefront
of developing technologies like facial recognition, drone swarms, and autonomous
surveillance systems that are used both at home and in military operations (Allen, 2019).

China has poured resources into autonomous technologies, including unmanned aerial
vehicles (UAVs), underwater drones, and AI-enhanced command and control systems. In
2020, the PLA even tested autonomous ground vehicles during joint exercises near the Indian
border, demonstrating how quickly they’re integrating AI (Lin & Singer, 2020). Plus, China
is leading the world in AI-related patents, showcasing a robust technical foundation.
According to the World Intellectual Property Organization (2023), China accounted for
nearly 60% of all AI patent filings from 2018 to 2022.

Even with all this progress, China's military AI systems still struggle with real-time decision-making, ethical issues, and operational integration when compared to U.S. systems.
However, the centralized governance in China allows for faster development and deployment,
sidestepping some of the ethical debates that can slow down U.S. programs.

3. Comparative Strengths and Global Implications

When you look at the United States and China side by side, it’s clear that both countries bring
unique strengths to the table when it comes to AI warfare. The U.S. has a notable advantage
in cutting-edge AI research, seamlessly integrating defense strategies, and drawing
innovation from top-tier universities and tech companies. American companies are at the
forefront of foundational AI research—think deep learning frameworks, natural language processing, and computer vision—which are essential for building sophisticated military AI systems (National Security Commission on Artificial Intelligence, 2021).

On the flip side, China shines in its ability to deploy technology quickly and efficiently,
thanks to a government-driven approach. With minimal regulatory hurdles and a cohesive
national strategy, China can swiftly prototype and scale AI-driven systems. As former Google
CEO Eric Schmidt pointed out, “China’s government has a clear plan to overtake the U.S. in
AI by 2030—and they’re executing on it with incredible focus” (Schmidt, 2020).

One area where China is making impressive strides is in the realm of autonomous drone
swarms and electronic warfare. Recent tests have shown that China can effectively coordinate
AI-controlled drones, potentially overwhelming traditional defense systems (Kania &
Costello, 2020). Meanwhile, the U.S. is focusing on the ethical development of AI, as
highlighted by the Pentagon’s adoption of five ethical AI principles in 2020, which include
reliability, governability, and traceability (U.S. Department of Defense, 2020).

Both nations understand that AI is a crucial battleground for future conflicts. The Pentagon’s
2023 report to Congress clearly states that “AI, alongside cyber and space technologies, will
determine the outcome of future warfare” (DoD, 2023). Similarly, China’s 2023 defense
white paper emphasizes that "intelligentization" is a key goal for military modernization,
underscoring the importance of AI in their strategy.

Strategic Differences in AI Warfare: Data-Driven Precision vs. Swarm-Centric Dominance

1. The United States: Data-Driven Precision and Human-Centric AI

When it comes to AI in warfare, the United States takes a strategic route that prioritizes data-driven precision, human-AI collaboration, and accountability. At the heart of this
approach is the seamless integration of AI into decision-support systems, logistics, and
precision targeting, ensuring that AI enhances human command instead of overshadowing it.
The U.S. Department of Defense (DoD) has made it clear that AI should always operate
under “appropriate levels of human judgment,” showcasing a careful and ethical
perspective on its deployment (DoD, 2020).

In the military context, AI is mainly utilized through predictive analytics, battlefield
awareness systems, and cutting-edge reconnaissance tools. One standout initiative, Project
Maven, leverages AI to sift through drone footage, identifying potential targets for human
operators to review (Allen, 2019). This highlights the U.S. commitment to augmented intelligence: AI that empowers human decision-makers rather than replacing them. During a
Senate hearing, former Google CEO Eric Schmidt, who led the National Security
Commission on Artificial Intelligence (NSCAI), emphasized, “The U.S. must lead in the
development of responsible, data-rich AI ecosystems to maintain battlefield advantage”
(Schmidt, 2020).

The Pentagon’s focus on data integrity, interoperability, and real-time feedback loops
underscores its dependence on secure, well-structured data. Initiatives like the Joint All-Domain Command and Control (JADC2) illustrate this, as they integrate various data
streams from air, land, sea, cyber, and space to create a unified decision-making network
(Freedberg, 2021). JADC2 utilizes AI to filter, prioritize, and relay mission-critical
information across different services—boosting precision and cutting down on decision-making delays.
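
As a rough illustration of this filter-and-prioritize idea, the hedged sketch below merges reports from several domains and relays the most urgent ones first. The field names, priority scores, and capacity are invented for the example and are not drawn from JADC2 documentation.

```python
# Hypothetical sketch of "filter, prioritize, relay" across domains; the
# report fields and scoring are assumptions, not the JADC2 design.
import heapq

def fuse_and_relay(reports, capacity=2):
    """Relay up to `capacity` reports, highest priority first."""
    queue = []
    for i, report in enumerate(reports):
        # heapq pops the smallest tuple, so negate priority; `i` breaks ties.
        heapq.heappush(queue, (-report["priority"], i, report))
    return [heapq.heappop(queue)[2] for _ in range(min(capacity, len(queue)))]

reports = [
    {"source": "sea", "priority": 1, "msg": "routine patrol update"},
    {"source": "cyber", "priority": 5, "msg": "intrusion on logistics network"},
    {"source": "air", "priority": 3, "msg": "unknown contact, bearing 040"},
]
print(fuse_and_relay(reports))  # relays the cyber alert, then the air contact
```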

Even with its sophisticated infrastructure, the U.S. AI strategy faces limitations due to ethical
guidelines and bureaucratic oversight, which can sometimes hinder its full potential.
However, these limitations are intentional safeguards to prevent misuse and maintain
alignment with democratic values.

2. China: Swarm Intelligence and Autonomous Systems at Scale

China's approach to AI in warfare is all about swarm intelligence and autonomous systems,
pushing the boundaries of battlefield automation. Drawing inspiration from nature, their idea
of "intelligentized warfare" revolves around unmanned platforms that work together, often
with little to no human oversight. The Chinese People's Liberation Army (PLA) has poured
significant resources into developing swarming drone technology, underwater autonomous
vehicles, and robotic systems that leverage AI to operate seamlessly across various domains
(Kania & Costello, 2020).

A striking example of this strategy was showcased in a 2021 exercise where more than 200
AI-driven drones executed coordinated attack formations in simulated combat scenarios.
These swarms are capable of real-time communication, adapting to threats, and performing
synchronized maneuvers, which can easily outmatch traditional defense systems (Lin &
Singer, 2021). The China Academy of Electronics and Information Technology (CAEIT) has noted that these systems symbolize "the future of warfighting—decentralized,
autonomous, and unpredictable” (CAEIT, 2021).
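
The underlying principle of such decentralized coordination can be illustrated with the classic boids rules (cohesion, alignment, separation), in which each agent steers using only its nearby neighbors. The sketch below is a generic simulation under assumed parameters, not a model of any PLA system.

```python
# Generic boids-style sketch of decentralized swarm coordination; radii and
# weights are illustrative assumptions.
import numpy as np

def step(positions, velocities, radius=5.0, dt=0.1):
    new_v = velocities.copy()
    for i in range(len(positions)):
        d = np.linalg.norm(positions - positions[i], axis=1)
        near = (d > 0) & (d < radius)  # locally visible neighbors only
        if near.any():
            cohesion = positions[near].mean(axis=0) - positions[i]
            alignment = velocities[near].mean(axis=0) - velocities[i]
            separation = (positions[i] - positions[near]).sum(axis=0)
            new_v[i] += 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
    return positions + new_v * dt, new_v

# Twenty agents converge on a common heading with no central controller.
pos, vel = np.random.rand(20, 2) * 10, np.random.randn(20, 2)
for _ in range(100):
    pos, vel = step(pos, vel)
```
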
Additionally, China's strategy is shaped by the Military-Civil Fusion (MCF) policy, which
promotes the swift integration of commercial AI innovations into military applications.
Companies like DJI, Hikvision, and Baidu are key players in developing swarm
technologies, facial recognition systems, and autonomous navigation software. Unlike the
U.S., China tends to prioritize speed over ethical considerations, allowing for quicker
deployment of experimental systems (Allen, 2019).

President Xi Jinping has consistently highlighted the importance of AI and automation in the PLA's modernization efforts. In a 2020 speech to the Central Military Commission, he
declared: “We must seize the commanding heights of future military competition
through unmanned and intelligent technologies” (Xi, 2020). This underscores China's
ambition to surpass traditional military structures by harnessing AI in large, adaptable
formations. While China's swarm strategy offers impressive scale and speed, it also comes
with its own set of risks. The idea of autonomy on such a large scale raises important
questions about how well systems can be controlled, the potential for unintended escalations,
and how reliable swarm logic can be in the face of electronic warfare. Still, it seems the PLA
is ready to embrace these trade-offs in order to create strategic disruptions against more
organized opponents like the United States.

3. Strategic Comparison: Speed vs. Safety, Autonomy vs. Control

At the heart of the U.S.-China AI rivalry lies a fundamental tension between speed and
control. The U.S. approach values data accuracy, ethical oversight, and collaboration between
humans and AI, while China focuses on scale, autonomy, and disruptive tactics on the
battlefield. These differing strategies are influenced by their political systems: the U.S. is
accountable to public and legislative scrutiny, whereas China's centralized governance allows
for swift implementation without public discussion.

The strength of the U.S. military is found in its advanced command structures, which
seamlessly integrate AI into operations across multiple domains. Systems like JADC2
improve cooperation among forces and allies, fostering coalition warfare that prioritizes
precision and situational awareness (NSCAI, 2021). On the flip side, China's asymmetric
strategy seeks to overwhelm these systems through nonlinear tactics, employing AI
swarms, deception, and electronic disruption.

Dr. Elsa Kania, a prominent expert on China's military AI, points out, “The Chinese
military sees AI not merely as a tool, but as a game changer that can redefine the nature of
warfare itself” (Kania, 2020). Meanwhile, American defense officials caution that if the U.S.
fails to keep pace with China's rapid AI integration, it could face “technological surprise” in
future conflicts (DoD, 2023).

In the end, these strategic differences indicate that while the U.S. and China might achieve
similar technological advancements, their applications will likely diverge significantly. The
U.S. is expected to keep focusing on responsible AI in warfare, while China may lean
towards autonomous saturation and quick disruption as a means to achieve military parity or dominance.

Vulnerabilities and Risks in AI Warfare: Hacking and Adversarial Threats in the U.S. and China

1. United States: Advanced Systems, Complex Exposure

The United States stands at the forefront of AI development, but it also grapples with some
serious vulnerabilities tied to the intricate web of AI systems used in military networks. The
use of AI in areas like surveillance, logistics, command-and-control, and autonomous
vehicles opens up new avenues for potential attacks from adversaries. A major concern is
adversarial machine learning, where malicious actors can manipulate AI algorithms by
feeding them deceptive inputs, leading to incorrect decisions (Brundage et al., 2018).

A clear example of this is adversarial image perturbation, which involves making slight
changes to visual inputs to trigger misclassification. In a military setting, this could result in
an AI system mistakenly identifying a threatening tank as a civilian vehicle or the other way
around. The Pentagon’s Defense Innovation Board has recognized these dangers, stressing
the importance of robust testing and red-teaming for AI models (Defense Innovation Board,
2019).
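
The mechanics of such a perturbation can be sketched in a few lines. The example below implements the well-known fast gradient sign method (FGSM); it assumes PyTorch, and the model, image tensor, and epsilon value are placeholders for illustration.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss. `model` and inputs are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range

# A perturbation this small is invisible to humans yet can flip the model's
# output, e.g. from 'tank' to 'civilian vehicle'.
```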

Additionally, the dependence on shared cloud infrastructure and interconnected battlefield platforms, like those under the Joint All-Domain Command and Control (JADC2) initiative, puts
U.S. systems at risk of supply chain attacks and embedded malware. The 2021 SolarWinds
cyberattack, while not specifically targeting AI, showcased how advanced adversaries could
breach defense contractors and software supply chains utilized by the Pentagon (Smith,
2021). AI systems linked to these networks could be compromised, altering their behavior
under certain conditions.
Retired U.S. Air Force General Jack Shanahan, who once led the Joint Artificial
Intelligence Center (JAIC), cautioned that “AI systems are only as secure as their weakest
digital links—and we need to treat AI security as national security" (Shanahan, 2020). In
light of these challenges, the Department of Defense has initiated AI-specific red-teaming and
resilience programs through the Test and Evaluation Directorate under the Chief Digital and
AI Office (CDAO).

2. China: Rapid Deployment, Operational Risks, and Systemic Blind Spots

China's approach to AI focuses on autonomous decision-making, swarm intelligence, and quick integration. However, this bold strategy comes with significant trade-offs in terms of
cybersecurity and reliability. The absence of strict oversight and open criticism within
China's system can heighten vulnerabilities, as AI models might be rolled out in real-world
situations without undergoing comprehensive stress testing (Kania & Costello, 2020).

One major worry regarding Chinese AI systems is their vulnerability to adversarial data
poisoning. With the vast amounts of surveillance data gathered from facial recognition
networks and social monitoring platforms, the threat of data injection attacks, in which small segments of training data are deliberately corrupted, is quite high. These attacks can
undermine entire neural networks, often going unnoticed until a failure occurs (Zhou et al.,
2020).
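
A minimal sketch of one such attack, label flipping, is shown below: a small fraction of training labels is silently corrupted before the model is trained. The dataset, classes, and 5% fraction are invented for illustration.

```python
# Hedged sketch of label-flipping data poisoning; dataset and fraction are
# illustrative assumptions.
import numpy as np

def poison_labels(y, target_class, new_class, fraction=0.05, seed=0):
    """Flip a small fraction of `target_class` labels to `new_class`."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)
    flipped = rng.choice(idx, size=int(len(idx) * fraction), replace=False)
    y_poisoned[flipped] = new_class  # corrupted labels enter training as-is
    return y_poisoned

y = np.array([0, 1] * 500)  # toy binary labels
y_bad = poison_labels(y, target_class=1, new_class=0)
print((y != y_bad).sum(), "labels silently corrupted")  # -> 25
```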

Moreover, China's strategy of military-civil fusion blurs the lines between civilian and
military AI infrastructures. While this can speed up innovation, it also raises the stakes for
espionage, malware insertion, and systemic cross-contamination from less secure civilian
platforms. For instance, iFlytek, a prominent AI company known for its voice recognition
technology, has faced accusations of using insecure software that is susceptible to
surveillance interception (Human Rights Watch, 2019).

Chinese AI systems also struggle with limited testing environments due to censorship and
restricted data sharing. This limitation raises concerns that PLA systems might be exposed to
unknown edge cases or could fail in real combat situations when faced with unconventional
or deceptive inputs. A 2021 report from the Chinese Academy of Sciences noted that “current
AI systems exhibit low tolerance to adversarial deception and perform poorly under
ambiguous operational conditions” (CAS, 2021).
Moreover, China's significant investment in autonomous drone swarms brings about some
distinct vulnerabilities. The AI that powers these swarms depends on decentralized
communication, which can be jammed, spoofed, or hacked. This could lead to drones being

turned against their own operators or becoming ineffective during missions. If these systems
lack robust encryption and backup protocols, they could easily be taken over or disabled by
more advanced technological adversaries.
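
One standard safeguard of this kind can be sketched simply: authenticating inter-drone messages with an HMAC so that spoofed or altered commands are rejected. The pre-shared key and message format below are assumptions, and key distribution and replay protection are left out of scope.

```python
# Minimal sketch of authenticating swarm links with an HMAC; the pre-shared
# key and command format are illustrative assumptions.
import hmac
import hashlib
import os

SHARED_KEY = os.urandom(32)  # hypothetical pre-shared swarm key

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison resists timing attacks on the check itself.
    return hmac.compare_digest(sign(message), tag)

command = b"formation:delta;heading:090"
tag = sign(command)
assert verify(command, tag)               # authentic command accepted
assert not verify(b"abort-mission", tag)  # spoofed command rejected
```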

3. Comparative Landscape: Security Ethics vs. Deployment Urgency

While both countries grapple with significant AI vulnerabilities, their strategies for
addressing these issues are quite different. The United States leans towards a security-first,
ethics-driven approach, often opting to slow down deployment to ensure thorough
verification and safety. This involves comprehensive third-party audits, initiatives for
algorithmic transparency, and adversarial simulations. However, this cautious stance can
sometimes hinder innovation and provide an advantage to competitors who move more
quickly.

In contrast, China focuses on speed and scalability, frequently rolling out systems even before
they are fully secure or reliable. The decentralized nature of its AI swarms and automated
platforms can heighten operational risks, particularly in high-stress electromagnetic or cyber
environments (Lin & Singer, 2022).

Yet, China's centralized control and closer integration of military and civilian AI can facilitate
rapid responses and coordinated enhancements when vulnerabilities are identified. From a
geopolitical perspective, hacking, manipulating, and spoofing AI systems are now viewed as
strategic acts of warfare. The 2023 U.S. National Defense Strategy categorizes “AI system
compromise” as a major cyber threat, comparable to assaults on nuclear commandand-
control systems (DoD, 2023). Similarly, China’s 2022 White Paper on Military-Civil Fusion
recognizes AI infrastructure as a "national strategic asset that requires protection across
multiple domains" (PRC MoD, 2022).

Dr. Paul Scharre, an AI expert at CNAS, captured the dilemma succinctly: “We are entering
an era where an adversary may not need to destroy your system—they only need to corrupt
your data or trick your model, and the result could be catastrophic” (Scharre, 2022).

This shifting threat landscape compels both nations to emphasize secure architectures,
resilience against adversarial attacks, and the establishment of international norms for AI in
warfare. However, their current paths indicate very different levels of risk tolerance and
response strategies.

Chapter 6

Global Implications and Ethical Dilemmas in AI Warfare


As artificial intelligence (AI) continues to evolve, nations are increasingly leveraging it to
create new weapons and military technologies. While AI can enhance the speed and
intelligence of armed forces, it also poses significant global threats. The integration of AI in
warfare brings up tough ethical dilemmas, such as who is accountable when things go awry,
whether machines should have the authority to make life-and-death choices, and how to avert
unintended conflicts triggered by AI. Many countries and international bodies, including the
United Nations, are currently engaged in discussions about how to tackle these challenges.

1. UN Talks on Prohibiting Lethal Autonomous Weapons

The United Nations (UN) has taken a leading role in global efforts to regulate or outright ban
lethal autonomous weapons systems (LAWS). These systems are capable of identifying and
attacking targets without any human intervention. A prime example is drones that utilize AI
to autonomously locate and eliminate targets. Back in 2013, the nonprofit group Human
Rights Watch, along with the Harvard Law School’s International Human Rights Clinic,
kicked off a campaign known as the “Campaign to Stop Killer Robots.” This initiative
sparked international dialogue at the UN, particularly within the framework of the
Convention on Certain Conventional Weapons (CCW).
Since then, the CCW has convened several meetings where diplomats, scientists, and military
professionals have engaged in discussions about the dangers posed by autonomous weapons.
A consensus is emerging among most nations that human oversight is essential when it comes
to weaponry, especially in situations involving lethal force. However, a universal agreement
on banning or rigorously regulating autonomous weapons has yet to be reached.

In 2021, UN Secretary-General António Guterres advocated for a ban on fully autonomous weapons, stating: "Machines with the power and discretion to take lives without human
involvement are politically unacceptable, morally repugnant, and should be prohibited by
international law” (Guterres, 2021). Several nations, like Austria, Brazil, and Mexico, are
pushing for a complete ban. On the flip side, countries such as the United States, Russia, and
China believe that more discussions are necessary and would rather concentrate on creating
guidelines instead of binding treaties (CCW Report, 2022).

Even though there’s no agreement yet, the momentum is definitely building. In 2023, over 70
countries took part in UN discussions in Geneva, with many calling for quicker action. A
report from the Stockholm International Peace Research Institute (SIPRI) warns that “failure
to regulate these systems could lead to uncontrolled arms races and greater risks of conflict”
(SIPRI, 2023).

2. Risk of AI-Driven Accidental Wars

One of the most significant threats posed by AI in warfare is the potential for accidental wars.
AI systems can act swiftly and sometimes in unpredictable ways. They might misinterpret
information, especially in complex and rapidly changing scenarios.

For instance, an AI system could confuse a training exercise for a real attack and respond
aggressively. If another nation reacts to that, it could spiral into a war, even though no one
intended for it to happen.

A 2020 study by the RAND Corporation cautioned that AI could “reduce decision time in
conflict situations to levels where human oversight is impossible,” raising the risk of
“catastrophic escalation due to misunderstanding or malfunction” (RAND, 2020). AI systems
learn from past data, but real-world conflicts can shift quickly. This means AI might make
decisions based on outdated or biased information. Additionally, adversaries could attempt to
deceive AI systems by providing false data—this tactic is known as an adversarial attack.
The accidental drone strike in Afghanistan in 2021, which tragically killed ten civilians,
including children, highlighted how reliance on technology can lead to devastating outcomes,
even when human operators are involved (New York Times, 2021). While this particular
incident didn’t involve AI, experts warn that similar risks will increase as AI systems take on
more decision-making roles.

3. The Need for International AI Warfare Treaties

To avoid dangerous mishaps and the misuse of AI in warfare, many experts argue that we
urgently need international treaties. These agreements could establish guidelines for when
and how AI can be deployed in weaponry, ensuring that humans remain accountable for
crucial decisions.

Unlike nuclear weapons, which are governed by treaties like the Non-Proliferation Treaty
(NPT), there are currently no global agreements in place for AI weapons. This lack of

regulation allows countries to develop autonomous weapons without disclosing their
functionalities or the limitations they adhere to. A 2020 report from the International
Committee of the Red Cross (ICRC) emphasized that “AI should not be allowed to remove
human responsibility from decisions to use lethal force,” advocating for robust international
laws to guarantee accountability and adherence to humanitarian law (ICRC, 2020).

The European Union has also begun to take regulatory steps. In 2021, the European
Parliament passed a resolution calling for a ban on “killer robots” and insisted on full human
oversight of AI military systems (European Parliament, 2021). The resolution highlighted that
“the decision to take a human life must never be delegated to a machine.” Many scholars and
ethicists back the concept of “meaningful human control.” This principle asserts that even if
AI assists in decision-making, a human must always review and approve those decisions.
Without this safeguard, it becomes unclear who would be held accountable if something goes
awry.

There are also worries that nations might exploit AI systems to launch cyberattacks or
manipulate information. For instance, AI could create deepfakes—realistic-looking fake
videos—to disseminate propaganda or incite panic. If such actions occur during a politically
charged moment, they could easily escalate into conflict. Since AI warfare impacts everyone,
including civilians and neutral nations, global collaboration is essential. Treaties could also
help mitigate the risk of an AI arms race among the United States, China, and other major
powers.

4. Ethical Questions and Responsibility

The use of AI in warfare brings up a host of ethical dilemmas. A key concern is about
accountability. If an AI system makes a grave error and causes the death of innocent people,
who should be held responsible? Is it the developer of the system? The military leader who
authorized its use? Or the nation that deployed it? In conventional warfare, we can hold
human soldiers and commanders accountable for their actions. However, with AI, tracing
responsibility becomes much more complicated. This ambiguity could allow nations to
sidestep accountability or evade consequences. Many ethicists contend that machines should
never be entrusted with the power to decide who lives or dies. Philosopher Peter Asaro refers
to this as the “moral hazard of delegating killing to machines” (Asaro, 2012). Others express
concern that employing AI in lethal situations dehumanizes warfare and could lower the
threshold for engaging in conflict. Religious organizations, peace advocates, and civil
society groups are
joining forces with scientists to advocate for strict regulations on AI weaponry. They
emphasize that humanity must not permit machines to take control of such critical decisions.

5. Moving Forward: Building Global Norms

While it may take time to establish a legally binding treaty, experts believe it’s still feasible to
cultivate “norms” regarding AI usage. These norms are shared expectations or guidelines that
countries adhere to, even in the absence of formal agreements. For instance, many nations
already observe informal protocols against using chemical weapons or targeting hospitals
during conflicts. Similar norms could be established for AI, such as refraining from using AI
to target civilians or steering clear of fully autonomous weapons in complex scenarios.

International gatherings, academic symposiums, and think tanks are playing a crucial role in
shaping these norms. For example, the Global Partnership on AI (GPAI) and the Future of
Life Institute have put forth recommendations for the safe development of AI. The GPAI
comprises over 20 members, including the US, India, and the EU. Public awareness
and pressure can also make a difference. As with climate change and nuclear weapons, global
citizens and advocacy groups can push their governments to act responsibly.

Chapter 7

Conclusion & Future Outlook

1. Summary of Key Findings

This study has delved into the swift evolution of artificial intelligence (AI) within military
settings, shedding light on the ethical, strategic, and political hurdles that come with its use.
Here are some key takeaways:

First off, the United States and China are at the forefront of the global race to develop AI
warfare technologies, each taking a unique path. The U.S. prioritizes precision, the ethical
integration of AI, and collaboration between humans and machines, while China leans
towards speed, autonomous drone swarms, and a centralized approach through its
Military-Civil Fusion strategy.

Secondly, AI systems are not without their vulnerabilities; they can be hacked, manipulated,
or fail altogether. Both the U.S. and China are grappling with cybersecurity threats, especially
as they increasingly rely on autonomous systems that operate at machine speed, often
outpacing human oversight. These weaknesses raise the stakes for accidental conflicts,
particularly in high-pressure or unclear situations.

Thirdly, the ethical questions surrounding lethal autonomous weapons systems (LAWS) remain a
contentious issue. There’s no global consensus on this matter. While over 70 countries and
the United Nations have pushed for binding agreements to limit or ban these weapons, major
military powers like the U.S., China, and Russia tend to favor voluntary guidelines instead of
strict legal regulations.

Lastly, the absence of international governance frameworks for AI creates a regulatory gap.
Without global treaties in place, countries might continue to develop AI weaponry
unchecked, raising the risk of an arms race, escalation of AI capabilities, and potential misuse
in cyber conflicts.

These insights underscore the pressing need for ethical leadership, coordinated diplomatic
efforts, and strong global policies.

2. Predictions for AI Warfare (2030–2050)
The next twenty years are poised to bring significant changes to warfare, as AI transitions
from a supportive role to a central player in defense strategies. Several trends are anticipated
to shape the future of AI warfare.

A. The Rise of Fully Autonomous Combat Systems

By 2030, we can expect many countries to roll out lethal autonomous drones, land robots, and
underwater vehicles that operate without real-time human supervision. Research from the
Stockholm International Peace Research Institute (SIPRI, 2023) shows that over 50 nations
are either developing or testing these systems.

China is already experimenting with AI-driven drones that can launch coordinated attacks in
swarms. Meanwhile, the U.S. is pushing forward with autonomous platforms through
initiatives like DARPA’s OFFSET (Offensive Swarm-Enabled Tactics), which aims to
deploy 250 or more drones in urban settings (DARPA, 2021).

Looking ahead to 2040–2050, AI systems could take the lead in certain battlefield scenarios,
making machine-on-machine combat a reality. This evolution brings up important questions
about how quickly decisions can be made, how reliable they are, and the lack of human moral
judgment in high-pressure situations.

B. Real-Time AI Command and Strategic Decision Support

Military strategies will increasingly depend on AI-enhanced platforms to handle logistics, battlefield awareness, threat detection, and strategy development. The U.S. military's Joint All-Domain Command and Control (JADC2) initiative is working towards creating a
connected “internet of military things” for real-time operational decisions (Freedberg, 2021).
In a similar vein, China’s vision of “intelligentized warfare” includes AI-assisted war rooms
and algorithm-driven decision-making at the highest levels (Kania, 2021).

By 2050, military planning might be shaped more by predictive modeling and data
simulations than by generals in strategy rooms.

C. Algorithmic Arms Races and Cyber-Conflict

AI will also be weaponized in the realm of cyberwarfare, used for disinformation campaigns,
manipulating satellite networks, or spoofing enemy systems. The danger of “black box”
decision-making—where the outcomes are opaque and unexplainable—could become a
significant risk, particularly in nuclear command and control scenarios. A report from the
RAND Corporation raises a serious concern: algorithmic escalation—where autonomous
systems might accidentally spark a war due to their preprogrammed responses—could pose
one of the biggest threats by 2040 (RAND, 2020).

3. Policy Recommendations for Global AI Governance

To steer clear of the disastrous potential of AI warfare, the global community needs to take
action right away. Here are some solid policy recommendations aimed at fostering safety,
accountability, and peace.

A. Create a Legally Binding Treaty on Lethal Autonomous Weapons

Just like chemical and biological weapons, lethal autonomous weapons should be regulated or
even banned under international law. The United Nations and its member states should build
on the CCW framework to establish a binding agreement that ensures humans maintain
meaningful control over the use of lethal force. With over 30 countries already backing a ban,
it’s time to turn that momentum into real treaties.

B. Clarify “Meaningful Human Control” in Warfare


Policymakers need to guarantee that humans are always legally and ethically responsible for
life-and-death decisions. We must develop clear standards that define what adequate human
oversight looks like in automated weapon systems. This should include the ability for
real-time intervention and tracking responsibility throughout the development, deployment,
and operational phases.

C. Encourage Transparency and International Testing Protocols

Countries should implement confidence-building measures, such as transparent testing of AI systems, shared safety metrics, and mandatory reporting for new AI-enabled military
systems. This approach would help reduce suspicion and prevent miscalculations during
high-stakes geopolitical situations.

Bibliography
• Congressional Budget Office. (2023). The U.S. Military Budget in Historical Context. https://www.cbo.gov.
• Kania, E. B. (2019). Battlefield Singularity: Artificial Intelligence, Military
Revolution, and China’s Future Military Power. Center for a New American Security.

• Nakasone, P. M. (2021). Remarks at the National Security Commission on Artificial Intelligence.
• Allen, G. (2019). Understanding China’s AI Strategy: Clues to Chinese
Strategic Thinking on Artificial Intelligence and National Security. Center for a
New American Security.

• Lin, H., & Singer, P. W. (2020). China’s AI-Powered Warfare: Are We Ready
for an AI Arms Race? War on the Rocks.

• Xi, J. (2018). Speech at the National Science and Technology Innovation Conference.
• World Intellectual Property Organization. (2023). AI Technology
Patent Statistics. https://www.wipo.int
• Kania, E. B., & Costello, J. (2020). China’s Strategic Ambiguity and the
Risks of AI Weapons. Brookings Institution.

• National Security Commission on Artificial Intelligence. (2021). Final Report.
• Schmidt, E. (2020). Testimony before the Senate Armed Services Committee on Emerging Technologies and Their Impact on National Security.
• U.S. Department of Defense. (2020). Ethical Principles for AI.
• Department of Defense. (2023). Annual Report on Military and Security Developments Involving the People's Republic of China.
