AI IN MILITARY STRATEGY OF USA AND CHINA IN 21ST CENTURY

A DISSERTATION
IN
POLITICAL SCIENCE

BY
RAJ GURJAR
A61157422025

UNDER THE SUPERVISION OF
PROF. JAVEED AHMED BHATT
AISS

INDIA 2025
DECLARATION
I declare that all the work in this dissertation is entirely my own, except for the words that have been placed in inverted commas and referenced to their original sources. Furthermore, cited text is acknowledged as such and listed in the references section; a full references section is included at the end of the dissertation. No part of this work has been previously submitted for assessment, in any form, either at AMITY UNIVERSITY MADHYA PRADESH or at any other institute.
Gwalior
Date: 28/05/25
CERTIFICATE
This is to certify that the dissertation entitled "AI in Military Strategy of USA and China in 21st Century" has been submitted by Raj Gurjar to AMITY UNIVERSITY MADHYA PRADESH, GWALIOR in partial fulfilment of the requirements for the award of the degree of Bachelor of Arts (H) in Political Science. To the best of my knowledge, this work has not been submitted in part or in full for any degree to this University or elsewhere.
Professor
AISS
ACKNOWLEDGEMENTS
The completion of this study would not have been possible without the supervision of my respected guide, Prof. Javeed Ahmed Bhatt, whose valuable guidance and advice carried me through all the stages of writing this study. I would also like to thank Prof. Dr. Iti Roychowdhury (HOI, AISS Dept.) for her guidance with the dissertation, and my batch mates for their brilliant comments and suggestions.
A warm thanks to all those who have supported me behind the scenes, whether through their
kind words, moral support, or assistance, I am deeply grateful. Your presence in my life has
made a significant difference, and I appreciate every gesture of support, no matter how small.
Finally, I would like to sincerely thank each and every one of the participants for voluntarily
lending their time, knowledge, and thoughts to this study. Their willingness to impart their
wisdom and experiences has been crucial in determining how my dissertation turned out. I
truly appreciate their assistance and participation.
RAJ GURJAR
Semester VI
Table of Contents
Chapter 1
   Methodology
Chapter 2
Chapter 3
   Context
   Where do we draw the line? Should we allow autonomy for defensive systems like Iron Dome but not for offensive ones?
Chapter 4
   Force Multiplier
   Lack of Transparency in the PLA’s Use of AI: A Global Security Concern
Chapter 5
   Swarm-Centric Dominance
   Vulnerabilities and Risks in AI Warfare: Hacking and Adversarial Threats in the U.S. and China
Chapter 6
Chapter 7
Bibliography
Chapter 1
3. Natural language processing (NLP) – AI that mines enemy communications and
online content for strategic insights.
a. An example of hybrid warfare is the manipulation of online narratives by
Russia's AI-powered disinformation bots (RAND, 2023).
4. Swarm intelligence refers to coordinated groups of drones or robots that use
decentralized decision-making to overwhelm defenses.
a. For instance, China's Wing Loong drones overwhelm enemy air defenses by
operating in swarms (DoD, 2023).
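To make "decentralized decision-making" concrete, here is a minimal sketch of the core swarm logic, with invented parameters and no relation to any fielded system: each drone steers itself using only the shared objective and its neighbors' positions, so there is no central controller for a defender to destroy.

```python
# Minimal sketch of decentralized swarm coordination (illustrative only).
# Each drone picks its own heading from local information: the shared
# target plus the positions of nearby neighbors. No central controller.
import math

class Drone:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, neighbors, target, sep_radius=5.0, speed=1.0):
        # Attraction: unit vector toward the shared target.
        dx, dy = target[0] - self.x, target[1] - self.y
        dist = math.hypot(dx, dy) or 1.0
        vx, vy = dx / dist, dy / dist
        # Separation: push away from neighbors closer than sep_radius,
        # spreading the swarm so no single intercept can stop it.
        for n in neighbors:
            ndx, ndy = self.x - n.x, self.y - n.y
            d = math.hypot(ndx, ndy)
            if 0 < d < sep_radius:
                vx += ndx / d
                vy += ndy / d
        norm = math.hypot(vx, vy) or 1.0
        self.x += speed * vx / norm
        self.y += speed * vy / norm

swarm = [Drone(i % 10, i // 10) for i in range(50)]
for _ in range(200):  # each time step, every drone decides independently
    for d in swarm:
        d.step([n for n in swarm if n is not d], target=(100.0, 100.0))
```

Because every drone runs the same local rule, the group keeps converging on the objective even if individual drones are destroyed mid-flight.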
Military vs. Civilian AI: Key Differences
Controversies in Definition
• The UN warns of "killer robots" functioning on their own, while the U.S.
Department of Defense defines AWS as requiring "human judgment" for lethal
strikes (2023 Directive) (GGE, 2023).
• Ambiguity in Autonomy: Some systems, like Iron Dome, are capable of
autonomously intercepting threats while still requiring human supervision, making it
difficult to distinguish between automated and autonomous systems.
Why Definitions Matter: For the sake of ethical governance, military AI must be defined
precisely. Without agreed standards, uncontrolled escalation (such as unintentional
conflict caused by AI) becomes more likely. Future frameworks must address:
• Automatically recognize Russian vehicles with computer vision trained on
thousands of tank and artillery photos
• Sort high-value targets according to real-time troop movement analysis
• Adapt flight paths to evade jamming systems (Forbes, 2023)
Ukraine's Aerorozvidka unit developed the "Saker Scout" drone, which uses artificial
intelligence (AI) to:
✓ Separate combatants from civilians with 92% accuracy (according to Ukrainian military claims).
Controversy: Some Ukrainian drones have reportedly carried out autonomous attacks
when communications are cut off, raising the question of whether they follow the
meaningful human control principle. By contrast, NATO's AWACS surveillance aircraft
use AI to track aircraft, but human approval is required for any engagement (HRW, 2023).
• 19 of 32 NATO members still lack specific AWS regulations (SIPRI, 2023)
• Supply chain vulnerabilities are caused by commercial AI components, such as the
NVIDIA chips found in Turkish drones.
[Comparison table lost in extraction; only one row is recoverable: Logistics – "Gotham" listed as the counterpart system.]
b. China's "Sharp Eyes" program tracks insurgents in urban warfare by
combining big data and facial recognition.
2. Precision Strike Capabilities
a. Autonomous drones (e.g., Turkish Kargu-2) can identify and engage targets
without direct human input in contested environments.
This research examines the evolving role of AI in modern warfare, focusing on the United
States, China, and Russia as leading military AI powers. Key objectives include:
2. Evaluation of Ethical and Legal Frameworks
a. Investigate compliance with International Humanitarian Law (IHL) in AI-driven targeting.
b. Case Study: Israel’s "Lavender" AI in Gaza and allegations of indiscriminate
targeting.
3. Impact on Global Security
a. Analyze risks of AI-fueled escalation (e.g., accidental conflict due to
autonomous systems).
b. Study NATO’s efforts to establish AI ethics guidelines versus the absence of
binding UN treaties.
Methodology
1. Data Collection
• Primary Sources:
o Government defense whitepapers (e.g., U.S. DoD’s 2023 AI Strategy, China’s
Military-Civil Fusion policy docs).
o Military procurement records (e.g., budgets for autonomous drone programs).
• Secondary Sources:
o Think tank analyses (CSIS, RAND, SIPRI).
o Peer-reviewed journals on AI.
2. Comparative Analysis Framework
Key Systems: MQ-9 Reaper and JAIC (U.S.); Wing Loong drones (China); Marker robot and Krasukha-4 (Russia).
• Qualitative:
o U.S.: Use of AI in Ukraine conflict (satellite intel sharing).
• Quantitative:
o Drone deployment statistics (e.g., 14,000 U.S. strikes in Afghanistan).
4. Limitations:
• Classified Programs: Some AI military projects (e.g., China’s hypersonic missile AI)
lack public data.
• Rapid Technological Change: Findings may require updates as AI evolves.
Chapter 2
Consider the U.S. Air Force's Skyborg program, for instance, which is working
on AI wingmen for fighter jets, or China's Sharp Sword drone, which is showcasing greater
autonomy. The core ethical issue here is the idea of letting algorithms make life-and-death
choices, especially since mistakes can happen due to facial recognition errors or biased data,
potentially leading to disastrous outcomes. Right now, international law doesn’t offer a clear
way to hold anyone responsible when these autonomous systems make fatal errors, leaving us
in a precarious legal situation.
4. Non-State Actors and the New Arms Race
The rise of AI-powered weaponry has dramatically shifted the power dynamics between
nations and non-state groups. For instance, Mexican drug cartels are now running advanced
drone factories that churn out AI-guided explosive devices, while groups like ISIS have even
shared guides on how to turn regular drones into weapons. What’s particularly alarming is
how authoritarian governments are taking advantage of these technologies. Myanmar's
military junta, for example, has utilized 3D printing to bypass arms embargoes, and
Iranian-backed militias are using AI-enhanced manufacturing to create rockets. The threshold
for acquiring deadly capabilities has never been lower, with AI tutorials available on dark
web forums that allow even amateur groups to craft disturbingly sophisticated weapons
systems.
Tackling these challenges calls for some fresh and innovative strategies for arms control in
our AI-driven world. Here are a few promising ideas:
The journey toward autonomy in warfare kicked off with some pretty basic mechanical and
electronic automation.
Key Developments:
1940s (WWII):
• Germany’s V-1 "Buzz Bomb" – one of the first cruise missiles, flying along a preset path.
• Proximity fuzes – artillery shells designed to explode automatically when they got close to their targets.
• Semi-automatic anti-aircraft guns (like the Phalanx CIWS) could track and shoot
down aircraft without needing a human to aim.
• Early missile guidance systems (such as homing torpedoes) relied on simple sensors
to follow their targets.
1970s:
• Fire-and-forget missiles (like the AGM-65 Maverick) enabled pilots to launch the weapon and let it find its way on its own.
Impact:
While these advancements lightened the load for human operators, they still needed a fair
amount of supervision. True autonomy was held back by the limited computing power of the
time.
Key Milestones:
1980s:
During this decade, we saw the emergence of autonomous land mines, like HALAM, which
had the capability to differentiate between soldiers and civilians. The first AI-assisted radar
systems were developed, significantly enhancing threat detection.
1990s–2000s:
• Israel introduced the Harpy drone, a "fire-and-forget" system that autonomously sought out radar emissions.
• The U.S. LOCAAS loitering munition, launched in 2003, could autonomously search
urban areas for targets without any human intervention.
Impact: This era marked the beginning of AI making tactical decisions, although humans still
retained the final say on lethal actions.
Machine learning and real-time data processing have enabled true battlefield
autonomy. Let’s take a look at some key advancements in AI and robotics over the
years:
2010s:
• Russia introduced the Uran-9 combat robot in 2015, which was tested in Syria and featured autonomous targeting capabilities.
• The emergence of AI in cyber warfare, highlighted by Stuxnet, showcased malware that could independently disrupt Iran’s nuclear program.
2020s:
• In 2021, Turkey deployed the Kargu-2 drone swarm, marking the first documented instance of AI drones autonomously attacking humans in Libya.
• Since 2022, Ukraine has been utilizing AI for artillery targeting, employing machine learning to anticipate Russian movements.
The impact of these advancements is significant: AI is now capable of making decisions at a
speed that surpasses human reaction times, which raises important concerns about the
potential for accidental escalation and the question of accountability.
Emerging Threats:
• AI generals – we could soon see algorithms that plan and carry out entire battles.
• Hypersonic AI missiles (like China’s DF-ZF) – They’re so fast that humans won’t be
able to react in time.
• Robot soldiers (such as Russia’s Marker) – These are fully autonomous units
designed for ground combat.
Key Concerns:
• A glaring absence of international laws regulating the use of AI in warfare.
• A potential AI arms race among the world’s superpowers.
• Cyber vulnerabilities – a real risk that hackers could take control of these autonomous weapons.
Chapter 3
The Pentagon is diving into AI with a focus on three key areas that are reshaping the
landscape of modern warfare. The Joint Artificial Intelligence Center (JAIC) is using
predictive maintenance algorithms to sift through massive amounts of data from military
equipment, allowing them to predict mechanical failures before they happen. This proactive
approach is a game-changer for operational readiness. Meanwhile, DARPA is working on the
Mosaic Warfare concept, which is all about creating AI networks that can seamlessly
coordinate between drones, naval ships, and satellites to carry out intricate, decentralized
attacks. Project Maven has evolved too, moving beyond just surveillance to include AI-driven
missile targeting systems, with defense contractors like Palantir and Anduril stepping in to
lead development after some pushback from the tech community. Together, these initiatives
are geared towards achieving what military leaders call "decision dominance" on the
battlefields of the future.
Ethical Frameworks and Policy Challenges
The ethical aspects of military AI bring up some pretty tricky policy issues that the Pentagon
is still trying to figure out. Even though the DoD's 2020 AI Ethical Principles clearly state
that human oversight is necessary for making lethal decisions, we see systems like the Sea
Hunter autonomous ship and Valkyrie drones showing a growing level of machine
independence in combat situations. This clash between what’s strategically needed and
what’s ethically right has led to some heated discussions, especially about where we should
set the limits on using autonomous weapons. Things get even more complicated with China’s
swift progress in "intelligentized warfare," which includes AI-powered drone swarms and
cyber capabilities. This has pushed the U.S. to come up with countermeasures, like the JAIC's
Global Information Dominance Experiments (GIDE) program.
graduates annually—a figure that falls short of the Department of Defense's increasing AI
demands. Recent statistics reveal that the U.S. tech workforce is facing a shortfall of over
500,000 positions in AI-related fields, with defense contractors experiencing hiring cycles for
AI specialists that are 30% longer than for other tech roles.
• Formation flying capabilities that were successfully showcased during exercises in
Nevada back in 2022.
Recent data from Ukraine indicates that modified Reapers have racked up over 1,500 flight
hours since the start of 2023, with AI-assisted targeting proving to be 30% more efficient at
identifying armor compared to traditional manual systems. Furthermore, the Pentagon's
budget documents for 2024 outline plans to retire the Reapers, paving the way for the fully
autonomous "MQ-Next" drones by 2028.
According to the U.S. Air Force's "Skyborg" program year-end review for 2023, AI wingmen
have successfully:
3. Arms Race Dynamics: The Center for Strategic and International Studies (CSIS)
predicts that China's "Dark Sword" drone program could have similar capabilities
ready by 2026 (CSIS, 2024).
1. The DoD's 2023 Directive on Autonomy in Weapon Systems emphasizes the need
for human judgment (U.S. DoD, 2023).
2. NATO's 2024 Brussels Protocol sets out autonomy thresholds but falls short on
enforcement (NATO, 2024).
3. Field tests are showing a rise in AI's role in tactical decision-making (Center for
Naval Analyses, 2024).
5. Future Trajectory
• 2025: DARPA's ACE initiative is set to kick off its first trials for autonomous dogfights.
• 2026: The OFFSET program is gearing up for operations involving swarms of over
100 drones.
• 2027: The USAF roadmap suggests that AI could be authorized to pilot non-combat missions.
• 2028: We might see the rollout of fully autonomous strike drones (Defense News,
2024).
AI-Powered Cyber Attacks Targeting the U.S.
American companies and government agencies are now grappling with AI-driven threats that
can learn and adapt on the fly. Unlike the traditional malware we’re used to, these new
attacks leverage machine learning to slip past security systems. A particularly concerning
example is TrickBot, a banking trojan linked to Russia that upgraded itself with AI in 2022 to
enhance its ability to avoid detection. According to Microsoft's Digital Defense Report
(2023), AI-enhanced phishing attacks have become 35% more effective at deceiving
employees into giving up their passwords.
The biggest threat comes from state-sponsored hackers. Cybersecurity firm Mandiant has
been tracking a Chinese group known as APT41, which is using AI to probe U.S. energy
company networks for weaknesses. In a 2023 attack on a Texas power grid, they employed
AI to mimic normal network traffic, managing to stay under the radar for 72 hours before
triggering outages. The FBI's reports (2024) indicate that such AI-driven intrusions targeting
critical infrastructure have tripled since 2021.
According to U.S. intelligence assessments:
1. China is estimated to invest around $2.7 billion each year in military AI programs,
which include cyber weapons.
2. Russia's GRU hackers are now leveraging AI to create fake social media profiles that
are 80% more convincing than before.
3. Iran's cyber militia can execute AI-driven attacks 15 times faster than they could back
in 2020.
This escalating arms race brings about new risks. A 2024 study by the RAND Corporation
cautioned that future AI malware could:
As AI cyber threats continue to rise, the U.S. still holds some significant advantages:
1. Talent: Silicon Valley is a magnet for top AI researchers, with about 60% of global
AI PhDs working for American companies.
2. Technology: Leading the charge in AI security tools are American firms like Palo
Alto Networks.
3. Alliances: The AUKUS pact now includes collaboration on AI cyber defense with
Australia and Britain.
Yet, there are still hurdles to overcome. A Congressional report from 2024 revealed that:
The next few years will be crucial in determining whether America can effectively leverage
AI's defensive capabilities while also curbing its potential for destruction in cyber warfare.
With the right investments and smart policies, the U.S. can safeguard its digital infrastructure
— but time is running out.
Predictive Maintenance & Logistics: How AI Powers the Pentagon's JADC2 Strategy
The U.S. military is embracing artificial intelligence (AI) to transform the way it maintains
equipment and manages supplies, thanks to predictive maintenance and smart logistics. These
innovations are at the heart of the Pentagon's Joint All-Domain Command and Control
(JADC2) initiative, which links sensors, weapons, and decision-makers across land, sea, air,
space, and cyber operations. By harnessing AI to foresee failures before they occur and
streamline supply chains, the military is not only saving time and money but also protecting
lives—ensuring that troops are well-equipped and ready for action.
• Cost Savings: The Air Force has found that AI-driven maintenance for F-35 jets has
cut unexpected breakdowns by 40%, leading to an impressive savings of $100
million annually in repair costs.
Without AI, maintenance typically depends on rigid schedules or waiting for something to
break—both of which are inefficient and can leave troops exposed.
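As a rough illustration of what "foreseeing failures before they occur" means computationally, the toy check below flags a component when its recent sensor readings drift well above a baseline learned from known-healthy operation; the readings, names, and threshold are hypothetical, not drawn from any Air Force system.

```python
# Toy predictive-maintenance check (illustrative; not any DoD system).
# Flag a component when recent sensor readings drift well above a
# baseline learned from known-healthy operation.
from statistics import mean, stdev

def needs_maintenance(healthy_baseline, recent_readings, k=3.0):
    """True if recent readings exceed baseline mean + k standard deviations."""
    threshold = mean(healthy_baseline) + k * stdev(healthy_baseline)
    return mean(recent_readings) > threshold

baseline = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]  # healthy vibration (g)
recent   = [0.61, 0.66, 0.70, 0.73]                          # trending upward
if needs_maintenance(baseline, recent):
    print("Schedule inspection before the next sortie")
```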
Moving troops, weapons, and supplies around the world is no small feat. Thankfully, AI is
stepping in to help the Pentagon make logistics a whole lot smoother by:
1. Optimizing Fuel & Ammo Deliveries
• With machine learning, planners can figure out the quickest and safest routes for convoys, steering clear of enemy threats (a toy version of this routing logic appears after this list).
• During the NATO exercises in 2023, AI managed to cut fuel waste by 25% by
tweaking delivery schedules based on real-time weather and battlefield conditions.
2. Warehouse Automation
• In Army depots, robots equipped with AI are sorting and packing supplies three times
faster than humans ever could.
• Thanks to AI inventory systems, the Defense Logistics Agency (DLA) slashed order
processing time from days down to just hours.
3. Printing Spare Parts on Demand
• Instead of twiddling our thumbs for months waiting on replacement tank parts, AI can predict which components are likely to fail next and 3D print them right in the field.
• The Marine Corps put this to the test in 2024, successfully printing drone propellers
and vehicle brackets in just 48 hours, instead of waiting for shipments from overseas.
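The convoy-routing idea from point 1 above can be sketched as a shortest-path search where each road segment's cost is its length scaled by an assumed threat multiplier; the road network and numbers below are invented for illustration.

```python
# Toy convoy-routing sketch (illustrative): Dijkstra shortest path where
# each segment's cost is its length (km) scaled by an assumed threat level.
import heapq

def safest_route(graph, start, goal):
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, km, threat in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + km * threat, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical road network: threat 1.0 = safe, 3.0 = contested.
roads = {
    "depot":    [("junction", 40, 1.0), ("bridge", 25, 3.0)],
    "junction": [("forward_base", 35, 1.2)],
    "bridge":   [("forward_base", 20, 3.0)],
}
print(safest_route(roads, "depot", "forward_base"))
# -> (82.0, ['depot', 'junction', 'forward_base']): the longer road wins
#    because the short bridge route runs through contested territory.
```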
JADC2 is the Pentagon’s ambitious plan to connect all branches of the military through a
single, AI-driven network. A big part of this system revolves around predictive maintenance
and logistics, which are crucial for keeping equipment ready for action.
1. AI sifts through data from thousands of sensors on aircraft, ships, and satellites,
allowing it to predict which units might need repairs before they actually break down.
2. Generals receive real-time updates on supplies, giving them a clear picture of how
much fuel, ammunition, and food is on hand during combat.
3. Thanks to automatic rerouting, if a supply truck gets taken out, there are already
backup trucks en route.
For instance, during a Pacific exercise in 2024, an AI system spotted a failing engine in a B-2
bomber, arranged for a replacement part to be delivered by drone, and had it installed before
the next mission—all without any human help.
Challenges and Risks
While AI certainly boosts efficiency, there are some valid concerns to consider:
• Hacking Risks: If adversaries manage to breach the system, they could disrupt
maintenance orders or reroute supplies.
• Over-Reliance on AI: We still need human oversight to catch mistakes, like when AI
misjudges a critical failure.
• Cost: Developing these AI systems demands billions in investment, and not every
military unit has access to them just yet.
2. AI-driven cargo planes that can deliver supplies to troops without needing a pilot.
3. Smart warehouses that can anticipate shortages before they even occur. As AI technology
advances, maintenance and logistics are set to become quicker, more cost-effective, and
more dependable, keeping U.S. forces a step ahead of rivals like China and Russia, who
are also in a race to develop similar capabilities.
standards are crucial for maintaining human oversight, preventing unintended harm, and
fostering trustworthy uses of new technologies. The DoD's strategy acknowledges both the
strategic benefits of AI and the moral obligations that come with its application in life-and-death scenarios.
Core Principles Governing Military AI Development and Use
Responsible and Lawful Applications
The Department of Defense (DoD) requires that all AI systems adhere to current laws,
including the Geneva Conventions and U.S. regulations related to armed conflict. This
guideline ensures that AI tools are used to support lawful military goals and are never
employed for indiscriminate attacks. For example, targeting systems that utilize computer
vision algorithms go through thorough testing to ensure they can reliably differentiate
between combatants and civilians before they are put into action. Additionally, this principle
emphasizes the importance of human judgment in making lethal decisions. A great
illustration of this is the Air Force's Loyal Wingman program, where AI-driven drones can
suggest targets but must receive human approval before engaging.
Minimizing Bias and Ensuring Fairness
Military AI systems need to steer clear of unfair discrimination in their operations, especially
in areas like facial recognition, threat assessment, or resource distribution. The DoD has
rolled out comprehensive bias testing protocols after early trials uncovered some weaknesses
—one study found that certain image recognition algorithms were less effective on darker
skin tones, leading to redesigns of surveillance systems. Today, verification processes assess
AI performance across a range of demographics and scenarios to avoid skewed results that
could unfairly impact specific groups.
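The DoD's actual verification protocols are not public, but the shape of such a disparity check is easy to sketch: measure performance separately for each demographic group and flag the system when the gap exceeds a tolerance. Everything below, including the threshold, is hypothetical.

```python
# Sketch of a per-group performance audit (hypothetical data and threshold).
def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    totals, correct = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

results = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
           ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1)]
rates = accuracy_by_group(results)
if max(rates.values()) - min(rates.values()) > 0.10:  # disparity tolerance
    print("Flag for redesign:", rates)
```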
Implementation Mechanisms for Ethical AI Training and Education Programs
The Department of Defense has rolled out thorough training requirements for everyone
involved with military AI systems, from developers to users. All personnel working with AI
tools go through ethics modules that cover appropriate use cases and limitations. Specialized
courses at places like the Naval Postgraduate School teach engineers how to create systems
that adhere to ethical standards. Perhaps most importantly, training for commanders now
includes scenarios that challenge them to make judgment calls about when to override AI
recommendations.
Persistent Challenges and Ongoing Debates
Even with these solid guidelines in place, there are still several unresolved issues that keep
the conversation alive within defense circles. The rapid pace of AI development often
outstrips the speed at which policies can be created, leaving gaps in governance. There's an
ongoing debate about whether the current principles are enough to handle emerging
technologies, especially when it comes to generative AI being used for disinformation. Some
experts believe that the guidelines don’t adequately limit certain autonomous functions, such
as electronic warfare systems that can disrupt enemy communications without any human
intervention.
International competition adds another layer of complexity. While the U.S. upholds strict
ethical standards, its adversaries do not face similar constraints. For instance, China's
military-civil fusion strategy has led to the creation of AI systems that reportedly lack
significant human oversight. This imbalance raises concerns about how to maintain ethical
standards while also keeping up with potential threats.
U.S. Context
The rise of lethal autonomous weapons systems (LAWS) – machines that can choose and
engage targets without any human input – has ignited a heated debate in the United States.
This discussion brings together military benefits and ethical dilemmas, leaving policymakers,
defense experts, tech innovators, and human rights advocates sharply divided on how the U.S.
should navigate this new technology.
Strategic Necessity
The Pentagon believes that autonomous weapons could offer significant benefits in future
conflicts:
1. Speed of Decision-Making: AI can analyze sensor data and react to threats in mere
milliseconds, which is much quicker than any human could manage. In wargames,
these autonomous systems have consistently outperformed human-operated ones in air
combat situations.
2. Operating in Denied Environments: LAWS can operate in areas where
communications are disrupted, such as deep behind enemy lines or underwater, a
capability recently showcased in tests of the Navy's Sea Hunter unmanned vessel.
3. Force Multiplication: Autonomous drones, like those in the Replicator initiative,
could empower smaller forces to take on larger opponents, such as China's People's
Liberation Army.
• Loyal Wingman drones can carry out defensive maneuvers and target acquisition all
on their own.
Technological Leadership Argument Supporters argue that if the U.S. doesn't advance in
developing lethal autonomous weapons systems (LAWS), rivals like China and Russia could
pull ahead significantly. China's Dark Sword drone and Russia's Marker robot program
indicate that these competitors aren't held back by the same ethical dilemmas.
Accountability Gaps
Critics point out serious issues with autonomous weapons:
• There's no clear legal framework for assigning blame when these systems make
mistakes (for instance, if an AI drone mistakenly attacks civilians, who takes
responsibility – the programmer, the commander, or the manufacturer?).
• The 2023 Project Convergence exercise showed that AI systems can misinterpret
camouflage as threats, resulting in a 12% false positive rate in identifying targets.
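For readers unfamiliar with the metric, a false positive rate is the share of non-threats wrongly flagged as threats, FP / (FP + TN). The short sketch below shows the arithmetic and why even 12% is consequential at scale; the counts are hypothetical, not taken from the exercise.

```python
# How a 12% false-positive rate is computed, and why it matters at scale.
# All counts below are hypothetical, chosen only to illustrate the math.
def false_positive_rate(fp, tn):
    """FP / (FP + TN): share of non-threats wrongly flagged as threats."""
    return fp / (fp + tn)

# Suppose target recognition reviews 1,000 non-threat objects (camouflage,
# civilian vehicles, decoys) and wrongly flags 120 of them:
fpr = false_positive_rate(fp=120, tn=880)
print(f"FPR = {fpr:.0%}")  # -> 12%

# At 10,000 candidate objects per day, that rate implies ~1,200 spurious
# "threats" daily for humans, or autonomous systems, to act on.
print(f"Spurious flags per day at 10k objects: {10_000 * fpr:.0f}")
```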
Escalation Risks
Research suggests that LAWS could dangerously speed up conflicts:
• The Campaign to Stop Killer Robots highlights instances where facial recognition
errors could lead to wrongful autonomous attacks.
• It outright bans fully autonomous nuclear weapons, no exceptions.
• It mandates that all autonomous systems must have a "human disengagement" feature.
Legislative Actions
Congress is still at odds:
• The ROBOT Act (2023) aimed to put a pause on certain autonomous weapons but got
stuck in committee.
• The 2024 NDAA included some provisions for additional testing but didn’t impose
any outright bans.
• A few cities, like Boston and San Francisco, have enacted symbolic bans on
autonomous weapons.
International Stance
The U.S. is against comprehensive bans on lethal autonomous weapons systems (LAWS) at
the UN but does support "non-binding norms," which has created some friction with allies
like Austria, who are pushing for stricter regulations.
• When it comes to hypersonic missile defense, human reaction times just can’t keep
up, making autonomy necessary.
• The Brookings Institution has raised concerns that LAWS could facilitate
assassinations and coups.
The Path Ahead: Key Questions Facing the U.S.
• Where do we draw the line? Should we allow autonomy for defensive systems like Iron Dome but not for offensive ones?
• Verification Challenges: How can we ensure that adversaries stick to any limits when
software can be easily concealed?
• Tech vs. Ethics Tradeoffs: Is it possible for the U.S. to compete with China while
upholding stricter ethical standards?
Recent developments suggest a middle ground—advancing LAWS capabilities while keeping
humans "in the loop" for life-and-death decisions. However, as AI technology progresses and
the pace of battle accelerates, this compromise might not hold up for long.
Chapter 4
China's Military-Civil Fusion (MCF) strategy is putting artificial intelligence (AI) and
autonomous systems right at the heart of its military modernization. By seamlessly weaving
commercial AI advancements into defense applications, the People's Liberation Army (PLA)
is quickly narrowing the tech gap with the United States while also honing its asymmetric
warfare capabilities.
The China Internet Investment Fund, supported by investors linked to the PLA, has
poured $3 billion into AI startups with dual-use potential since 2020.
Strategic Implications
China's strides in AI under the MCF are presenting three major hurdles for U.S. defense
strategists:
1. Mass Overmatch – Swarm tactics have the potential to overwhelm conventional
defenses.
2. Decision Speed – AI capabilities allow for quicker OODA (Observe–Orient–Decide–Act) cycles.
3. Attribution Difficulty – Systems originating from civilian sources make arms control
verification more complex.
Recent military simulations indicate that PLA drone swarms could neutralize a U.S.
carrier group's defenses in just 15 minutes after engagement, highlighting the urgent need
for U.S. investments in counter-drone technology.
The Micius satellite employs quantum key distribution (QKD) technology, crafted by the
Chinese Academy of Sciences (CAS), to secure PLA strategic communications with
encryption that is theoretically unbreakable. Ground stations link Beijing to nuclear missile
silos in
Xinjiang through a 4,600km quantum-secure network.
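QKD's security rests on basis matching. The sketch below is a compressed software mock-up of BB84, the family of protocols that QKD systems such as Micius implement physically with photon polarizations; it captures only the key-sifting logic, not the physics, and the PLA network's actual implementation details are not public.

```python
# Compressed software mock-up of BB84-style quantum key distribution.
# Real QKD encodes bits on photon polarizations; this mirrors only the
# basis-matching ("sifting") logic that yields the shared key.
import random

n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # two encoding bases
bob_bases   = [random.choice("+x") for _ in range(n)]

# Bob's measurement: a matching basis reads the bit faithfully; a
# mismatched basis yields a random result (the behavior that defeats taps).
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Publicly compare bases (never bits) and keep only matching positions.
alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key   = [b for b, ab, bb in zip(bob_bits,   alice_bases, bob_bases) if ab == bb]
assert alice_key == bob_key  # identical shared secret, never transmitted
print("shared key bits:", alice_key)
# An eavesdropper measuring in the wrong basis disturbs the photons,
# introducing errors Alice and Bob can detect by sampling the key.
```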
• Quantum Sensing for Anti-Stealth Warfare
The Jiuzhang quantum radar prototype, which was tested in 2023, successfully detected
simulated stealth aircraft at a range of 300km—tripling the performance of traditional radar
systems. The commercial company Origin Quantum provides essential components for this
technology.
• Talent Recruitment Pipeline
Since 2018, through the Thousand Talents Program, China has brought on board over 120
quantum physicists from U.S. and European laboratories, speeding up military quantum
applications by an estimated 5 to 7 years.
GalaxySpace, often seen as China's response to SpaceX, is rolling out over 1,000 low-earth
orbit satellites—many of which have been repurposed for the PLA's targeting and
communication needs. Their phased array antennas, initially designed for 5G networks, are
now being used to guide hypersonic missiles.
In spite of U.S. sanctions, SMIC managed to produce 7nm chips in 2023 using ASML DUV
equipment—enough to fuel the PLA's AI inference systems. The Yangtze Memory 232-layer
NAND flash is now storing sensitive military information.
Since 2019, the Big Fund II has poured $45 billion into the industry, with 60% of that going
to companies that supply contractors for the PLA, like AVIC and CETC.
China’s "Smart War" Plan: How the PLA Aims to Fight Future Battles with AI
China is developing a new approach to warfare called "Intelligentized Warfare." This concept
revolves around using computers and robots to make decisions and engage in battles at a
speed that outpaces human capabilities. It’s like giving their military a super-intelligent brain
that never tires.
What Does This Really Mean? Picture video games where the computer commands the
army—that's essentially what China is trying to create in reality. Their soldiers would
collaborate with:
China isn't just talking the talk; they're already walking the walk:
• In 2023, they successfully flew 1,000 small drones that could communicate with each
other and select their own targets.
• Their new robot tank, dubbed "Sharp Claw," can patrol borders for three days straight
without any human intervention.
• Advanced computers can now devise battle strategies in mere minutes instead of
taking hours.
• To outmatch stronger armies - While the U.S. has superior weaponry, China believes
that smart computers could give them the edge they need to win.
• Speed over humans - AI can identify and engage targets in just 0.1 seconds, which is
faster than a blink of an eye.
• To protect soldiers' lives - More robots mean fewer Chinese soldiers are put in harm's
way.
• The U.S. is working to prevent China from acquiring the best computer chips.
• Many nations are concerned that robotic weapons capable of killing without human
oversight are too dangerous.
What's Next?
China is aiming to have its high-tech military ready by 2035. If they pull it off, the landscape
of warfare could change dramatically:
Should we permit wars where machines make life-or-death choices? As it stands, there are no
regulations governing this, and China is racing ahead of everyone else in developing these
technologies.
This "smart war" technology is advancing rapidly, whether we’re prepared for it or not. The
next decade will be crucial in determining whether humans maintain control over warfare or
if we hand the reins over to machines.
swarm is destroyed.
C. AI Command Centers
• Systems that analyze data from satellites, drones, and spies within seconds.
• During 2023 Taiwan Strait exercises, AI predicted enemy movements with 85%
accuracy.
Funding Sources:
• Speed: Mach 1.8 (1,370 mph).
• Range: 1,200 miles without refueling.
• Weapons: 6 air-to-air missiles OR 2 large bombs.
• Special Feature: Learns from every mission to improve performance.
International Responses:
• U.S. Chip Bans: Blocking China's access to advanced AI processors.
• Allied Defences: Japan and Australia developing anti-drone lasers.
Ethical Concerns:
5. The Future: China's AI Military in 2030
Planned Developments:
Potential Game-Changers:
China's military is harnessing the power of advanced computer programs to assist its generals
in warfare. These AI systems function like a team of tireless, super-efficient aides. Here’s a
closer look at what they’re capable of:
• Processing information at lightning speed: While a human might spend hours
poring over maps and reports, the AI can digest the same data in mere seconds. It’s
akin to the difference between someone reading a book word by word and someone
snapping a photo of the entire page and grasping it instantly.
• Predicting enemy movements: By analyzing thousands of historical battles, the AI
can anticipate where enemy forces might head next. In recent trials, it accurately
forecasted enemy actions about 80% of the time—far surpassing most human
generals.
• Recommending battle strategies: The AI can swiftly generate various attack plans,
outlining the advantages and disadvantages of each. During training exercises, these
computer-generated strategies often outperform those devised by human officers
alone.
China's Smart War Rooms: The AI Command Centers
China has established cutting-edge military bases where these AI systems play a crucial role in
army operations:
Think of this as a massive video game control center that integrates everything:
• Monitors all friendly and enemy forces across land, sea, and air.
• Receives real-time updates from satellites, drones, and ground troops.
• Can coordinate attacks in minutes rather than hours.
B. Handling Information Overload
Today's battlefields produce unbelievable amounts of data:
• One drone mission collects more information than all the books in a large library.
• Humans get overwhelmed, but AI can sort through it all instantly.
While China's military AI systems boast impressive capabilities, they also come with
significant risks and limitations that could lead to serious consequences in actual combat
scenarios. Let’s dive into these challenges a bit more:
Weather Challenges: Heavy rain or sandstorms can confuse the AI's sensors. For instance,
one drone swarm test in Xinjiang failed when the machines mistook blowing sand for enemy
jamming signals.
B. Threats from Hackers and Electronic Warfare
Data Poisoning: An enemy could feed false information to the AI during peacetime, teaching
it incorrect patterns. For example, they could make ordinary civilian ships appear threatening
in the sensor data.
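A toy demonstration of how such label poisoning works, using a nearest-neighbor classifier and entirely synthetic data: one corrupted training record injected during "peacetime" flips how a civilian-profile vessel is classified later.

```python
# Toy label-flip poisoning demo (synthetic data, illustrative only).
def classify(x, labeled):
    """1-nearest-neighbor: return the label of the closest training sample."""
    return min(labeled,
               key=lambda s: (x[0] - s[0][0])**2 + (x[1] - s[0][1])**2)[1]

# Features: (speed, radar cross-section), arbitrary units.
train = [((30, 9), "threat"), ((28, 8), "threat"), ((32, 10), "threat"),
         ((12, 2), "civilian"), ((10, 3), "civilian"), ((14, 2), "civilian")]
probe = (11, 2)                   # a civilian-profile vessel
print(classify(probe, train))     # -> civilian

# Poisoning: one corrupted record, a civilian profile labeled "threat",
# slipped into the training data before the conflict.
poisoned = train + [((11, 2), "threat")]
print(classify(probe, poisoned))  # -> threat
```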
Signal Jamming: During 2024 naval drills, experimental U.S. electronic weapons
successfully:
This AI acts like a super-fast military advisor that:
• Analyzes thousands of potential attack locations.
• Prioritizes the most critical targets.
• Calculates the most effective ways to destroy them.
In tests, it selects better targets than humans about 70% of the time.
o Only senior officers who trained before AI remained fully effective.
• Network Reliance: When communications are jammed (as happened in 2022 drills):
o AI systems lose access to 70% of their data sources.
o Drone swarms revert
• Civilian Harm Risk: Autonomous systems might violate laws of war by:
o Not properly distinguishing soldiers from civilians.
o Using disproportionate force (like leveling a whole building to kill one enemy).
o Continuing attacks even when the situation changes.
• Accountability Gaps: If an AI weapon commits a war crime:
o Is the programmer responsible? The general? The AI itself?
o Current Chinese law has no clear answers.
o International courts are struggling with these questions too.
• Escalation Dangers: AI systems could accidentally start wars by:
o Misreading training exercises as real attacks.
o Responding too aggressively to minor incidents.
o Getting caught in rapid "machine vs machine" combat loops.
• Too Many to Stop: Shooting down a few drones doesn’t matter—hundreds more
keep coming.
• Cheap and Deadly: Each drone costs much less than a missile, making them
cost-effective weapons.
• Hard to Detect: Small size and low-altitude flight make them difficult for radar to
track.
Real-World Testing:
• Adapted when 30% of the drones were "shot down" in the simulation.
1. Overwhelm Defenses
a. Even advanced systems like the U.S. Patriot missiles can’t stop 1,000 drones
at once.
b. In tests, drone swarms have slipped past 90% of traditional air defenses.
2. Cheaper Than Traditional Weapons
a. A single fighter jet costs $100 million+.
b. A swarm of 1,000 drones might cost $10 million total.
3. Flexible Missions
a. Surveillance: Spy on enemy troops over a huge area.
b. Attack: Kamikaze strikes on radars, tanks, or ships.
c. Electronic Warfare: Jam enemy communications.
2. Weather Limitations
3. Ethical Concerns
• Will they make wars easier to start? Cheap drones could mean more conflicts.
• Who is accountable if they attack civilians?
Final Thought:
China’s drone swarms are no longer just experiments—they are real weapons that could
dominate future battlefields. The U.S. and allies must develop better defenses, or risk being
overwhelmed in the next major war.
If China were to launch an attack on Taiwan, it's likely that swarms of drones would be a key
part of their invasion plan. These AI-driven drone fleets could easily outmatch Taiwan’s
defenses in ways that traditional weapons simply can’t, creating a truly alarming situation for
those trying to defend the island. Let’s explore how such an attack could unfold and why it
poses such a significant threat.
Before any troops land, China would likely deploy hundreds of small, stealthy drones to blind
and confuse Taiwan’s military.
• Jamming Drones: These drones would hover around Taiwanese radar stations,
sending out strong radio signals to mess with communications and early warning
systems.
• Decoy Drones: Bigger, missile-shaped drones would imitate fighter jets, tricking
Taiwan into wasting their pricey anti-air missiles.
• Spy Drones: Small drones, some no bigger than insects, would scout out defenses and identify targets for future attacks.
• Taiwan’s air defenses would be overloaded, unsure which targets are real.
• Radar systems could be disabled before the real attack even begins.
Once Taiwan’s sensors are jammed, the next wave would strike military bases, missile sites,
and command centers.
Tactics China Might Use:
• The U.S. estimates Taiwan’s air defences could be saturated in under 30 minutes by a
full-scale swarm attack.
• Even if 90% of drones are shot down, the remaining 10% could still destroy critical
infrastructure.
• Protecting Landing Ships: Drones equipped with machine guns or grenades could
effectively suppress Taiwanese forces stationed on the beaches.
• Suicide Drones: These would actively seek out Taiwanese special forces attempting
to disrupt the invasion.
Why This Is Terrifying for Taiwan:
• While human soldiers need rest, drones can operate around the clock.
• Taiwan’s urban defenses could be overwhelmed by swarms of tiny drones on the hunt for resistance fighters.
Right now, Taiwan—and even the U.S.—is facing a tough battle against large drone swarms.
Biggest Challenges:
• The U.S. is hurrying to deploy microwave weapons that can take out multiple drones
in one go.
AI Outsmarts Humans
• Chinese drones are getting smarter with each attack—if one strategy doesn’t work, the
next wave quickly adapts.
How the U.S. Could Intervene (And How China Plans for It)
If the U.S. steps in to support Taiwan, it’s likely that China’s drone swarms would also set
their sights on American forces.
China’s Anti-U.S. Drone Strategies:
• Drone Submarines: These underwater drones could launch surprise attacks on U.S.
ships.
• Swarm vs. Aircraft Carrier: A massive swarm of drones could potentially breach a
carrier’s defenses, paving the way for missiles to strike.
• Cyber-Drones: Some drones might be equipped with hacking tools designed to disrupt
U.S. communications.
U.S. Countermeasures in the Works:
• Drone-Killing Lasers (Navy tests indicate they can take down dozens of drones
every minute).
• AI-Enhanced Air Defenses that can anticipate swarm movements.
• Drone vs. Drone Combat – The U.S. is currently testing its own swarms to counter China’s.
If drone swarms become too advanced, wars could be fought almost entirely by machines,
with humans just giving the first order.
Possible Consequences:
• Faster Escalation: AI-controlled swarms might attack before diplomats can stop a war.
• More Destruction: Cheap drones mean more attacks on cities.
• Who’s to Blame? If a drone swarm kills civilians, will China take responsibility?
AI in Military Cyber and Electronic Warfare: China's AI strategy reaches far beyond civilian
uses. The PLA’s Strategic Support Force (SSF) is responsible for executing integrated
operations in cyber, space, and electronic warfare. AI plays a crucial role in their vision of
“intelligentized warfare”—a concept where data processing, autonomous systems, and
decision-making algorithms are at the forefront of military operations (Kania, 2019).
• Signal intelligence and interception: AI tools streamline the classification and analysis
of intercepted signals, enabling quicker threat assessments.
• Information warfare: Generative AI can be used to create propaganda, fake news, and
deepfakes for psychological operations targeting domestic and foreign audiences
(Weedon et al., 2017).
The fusion of AI with cyber and electronic warfare capabilities raises significant concerns
globally:
• Cyber escalation risks: The use of AI in cyber operations introduces unpredictability.
AI-generated attacks or misinformation could spiral into large-scale conflict before
human decision-makers intervene.
• Erosion of global norms: As China advances its AI systems for digital warfare and
internal control, international norms on surveillance, digital rights, and cyber conduct
face increasing pressure.
In the absence of international regulation, these developments may accelerate a global AI arms
race in cyberspace, where traditional rules of engagement no longer apply.
Hypersonic missiles move so fast that humans cannot control them in real time. That’s why
they rely heavily on AI. AI systems can:
1. Make quick decisions using real-time data,
2. Adjust the flight path instantly if needed,
3. Identify and select targets during the mission.
1. Real-Time Navigation
e.g., AI uses satellite data, sensors, and environment information to keep the missile on course, even when conditions like wind or enemy jamming change (Lin & Singer, 2019).
2. Autonomous Targeting
e.g., AI helps the missile identify and lock onto targets during flight. If the original target moves or is destroyed, the AI can choose another.
3. Threat Avoidance
e.g., When the missile detects radar signals or incoming interceptors, AI helps it make fast adjustments to avoid being shot down (Kania, 2019).
AI makes these missiles much more intelligent, adaptable, and dangerous than traditional
weapons.
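Actual guidance laws are classified, but the closed-loop idea behind points 1–3 can be shown in miniature: on each tick, estimate the heading error from the sensed position and steer against it while a disturbance pushes the vehicle off course. Everything below is an illustrative simplification with made-up numbers.

```python
# Bare-bones closed-loop course correction (illustrative simplification).
# Each tick: sense position, compute heading error, steer against it.
import math

def heading_to(src, dst):
    return math.atan2(dst[1] - src[1], dst[0] - src[0])

pos, heading, speed = [0.0, 0.0], 0.0, 5.0
target = (100.0, 60.0)
for tick in range(60):
    disturbance = 0.05 * math.sin(tick / 5.0)   # assumed wind/jamming drift
    error = heading_to(pos, target) - heading
    heading += 0.5 * error + disturbance        # proportional correction
    pos[0] += speed * math.cos(heading)
    pos[1] += speed * math.sin(heading)
    if math.dist(pos, target) < speed:          # close enough to terminal
        print(f"reached target vicinity on tick {tick}")
        break
```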
China has rapidly advanced its hypersonic weapons development. One of its most well-known
hypersonic systems is the DF-ZF hypersonic glide vehicle, which can be launched from a
regular missile and glide toward a target at high speed. Reports also suggest China is working
on hypersonic cruise missiles using air-breathing engines, which are faster and more
maneuverable (Acton, 2021).
China uses AI in several ways in its hypersonic missile programs:
3. Sensor Fusion: AI integrates data from various sensors—such as radar, infrared, and
GPS—to form a comprehensive view of the missile’s surroundings. This enables it to
“think” and react during its mission.
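The essence of sensor fusion can be sketched in a few lines: weight each sensor's position estimate by the inverse of its noise variance, the core of a Kalman-style update, so the less noisy sensor counts for more. The sensors and numbers below are made up.

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of two noisy
# position estimates (one step of a Kalman-style update; numbers invented).
def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)   # fused estimate and its variance

radar_pos, radar_var = 1520.0, 100.0  # meters; noisy at long range
ir_pos, ir_var       = 1480.0, 25.0   # infrared: tighter but drift-prone
pos, var = fuse(radar_pos, radar_var, ir_pos, ir_var)
print(f"fused position: {pos:.1f} m, variance: {var:.1f}")
# -> 1488.0 m, variance 20.0: the estimate leans toward the lower-noise
#    sensor and is more certain than either input alone.
```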
In 2021, China reportedly tested a hypersonic glide vehicle that was launched into low-Earth
orbit and then descended to hit a target. Experts suggest that the system utilized AI to
navigate over long distances and avoid detection (Acton, 2021). This development surprised
U.S. military officials, as it indicates that China might be leading the way in hypersonic
AI-guided weaponry.
1. Strategic Advantage: With its advanced and speedy missiles, China can hit U.S.
military bases, aircraft carriers, or even cities in just minutes—this gives them a
significant edge in any potential conflict.
2. Defense Limitations: Current missile defense systems like THAAD and Patriot aren't
equipped to handle the agile, high-speed hypersonic missiles (Sayler, 2021).
3. Rising Tensions: This new capability is ramping up tensions between China and
other global powers, particularly the U.S., which is also in a race to develop its own
hypersonic and AI-driven systems.
Lack of Transparency in the PLA’s Use of AI: A Global Security Concern
The People’s Liberation Army (PLA) of China is rapidly emerging as one of the most
technologically advanced military forces in the world. They're pouring significant resources
into Artificial Intelligence (AI) to enhance their warfare capabilities. While these advancements
could bolster national defense, the opacity surrounding the PLA's development and
application of AI has sparked serious concerns globally. In democratic nations, discussions
about military AI advancements are often held publicly, allowing experts and citizens to
voice their questions and worries. On the flip side, China's approach is shrouded in secrecy,
tightly controlled by the Chinese Communist Party (CCP). This lack of transparency poses
risks not just for China, but for the entire globe (Allen, 2019).
China has set its sights on becoming the world leader in AI by 2030, and the PLA plays a
crucial role in that ambition. Some of the military AI initiatives in China include:
1. Autonomous Weapons: Think drones that can operate and engage in combat without
human pilots.
4. AI for Decision-Making: Utilizing big data and machine learning to inform military
strategies at a pace that outstrips human capabilities.
These technologies are being developed through a mix of state-funded research institutions,
universities, and private tech companies, many of which collaborate closely with the
government and military under the "civil-military fusion" policy (Kania, 2019; Feldstein,
2019).
There are several strategic and political reasons behind China’s secrecy:
1. Strategic Secrecy: Gaining a military edge often hinges on the element of surprise.
If other nations are fully aware of the weapons or technologies China is working on,
they can easily develop countermeasures. This is why secrecy plays a crucial role in
safeguarding military strategies and preserving a psychological advantage (Allen,
2019).
2. Internal Political Control: The Chinese Communist Party (CCP) exercises strict
control over public information. Disclosing too much about autonomous weapons,
surveillance systems, or cyber warfare could ignite ethical discussions or protests—
something the Chinese government is keen to avoid (Feldstein, 2019).
3. Propaganda and Power Projection: The People’s Liberation Army (PLA) might
exaggerate or conceal certain technologies to shape global perceptions. They may
boast about advancements that aren’t fully realized or keep failures under wraps to
uphold the image of a formidable and advanced military (Binnendijk et al., 2020).
The lack of transparency regarding the PLA's use of AI poses significant risks to global peace
and stability:
2. Arms Race: The U.S. and other countries could ramp up their own AI weapon
initiatives out of fear, triggering a global arms race where no one can afford to hit the
brakes—even if it becomes perilous (Sayler, 2021).
3. Ethical and Legal Concerns: Without transparency, it’s impossible to determine
whether China’s AI weapons comply with international law—like the principle that
only humans should make decisions about using lethal force (Scharre, 2018).
4. No Global Accountability: If the PLA employs AI for surveillance or cyberattacks,
and there’s no way to trace or question it, holding anyone accountable becomes a
challenge, undermining the global rules-based order (Allen, 2019).
How Other Countries Handle AI Transparency
Countries like the United States, the United Kingdom, and members of the European Union
have more defined frameworks for discussing military AI. Their governments release reports,
encourage public discussions, and allow independent researchers to evaluate the ethics
surrounding military AI initiatives. For instance:
• The U.S. Department of Defense has laid out AI principles that mandate human
oversight over lethal systems.
While these systems aren't flawless, they do help alleviate concerns and foster international
trust. On the flip side, China’s People’s Liberation Army (PLA) operates without independent
oversight, with AI development entirely under the control of the Chinese Communist Party
(CCP), which stifles public or academic critique (Feldstein, 2019).
Countries and global organizations are increasingly vocal about the dangers posed by
non-transparent military AI programs:
• Think tanks and NGOs are calling on China to engage in global discussions and to
embrace transparency and verification measures (Allen, 2019).
1. China should join global AI treaties: Ongoing discussions are taking place to either
ban or regulate autonomous weapons. It’s crucial for China to be involved in these
talks to help establish common guidelines.
2. Implement international audits: Similar to nuclear weapons inspections, international
teams could evaluate AI systems to ensure safety and control.
3. Set ‘Red Lines’ for AI: China and other major powers could agree to never develop
AI systems that function without human oversight—especially in critical areas like
nuclear systems or life-and-death situations.
One of the most frequently discussed countries in this region is China. The Chinese
government has developed one of the globe's most sophisticated and extensive AI
surveillance systems. These systems serve not just for public safety but also for keeping an
eye on political dissent, religious groups, and ethnic minorities. Consequently, numerous
governments, human rights organizations, and international experts have voiced their
concerns about China's approach to AI surveillance.
AI-enabled surveillance refers to the use of technology to automatically gather and analyze
information about individuals. This encompasses:
• Social Control: The system monitors political protests and online comments that
criticize the government.
• Ethnic Monitoring: In regions like Xinjiang, AI tools are used to track the movements
and behaviors of Uyghur Muslims, often without any transparent legal framework
(Feldstein, 2019).
Many cities in China are under the watchful eye of the “Skynet” system, which connects
millions of cameras to facial recognition software. Additionally, the Social Credit System
assigns scores to citizens based on their behavior, with AI keeping tabs on any rule-breaking
or non-compliant actions.
Countries and international organizations have voiced numerous concerns regarding China’s
AI surveillance:
1. Privacy Violations
In most democracies, the right to privacy is a fundamental belief. However, in China, there’s
no clear law that safeguards personal data from government exploitation. AI systems gather
vast amounts of data from citizens, often without their knowledge or consent (Feldstein,
2019).
China isn’t just applying this technology domestically; it’s also selling or donating AI
surveillance tools to countries across Asia, Africa, Latin America, and the Middle East. These
tools frequently end up in the hands of authoritarian regimes, which may use them to stifle
political dissent (Feldstein, 2019).
In China, there’s no room for public discussion, legal challenges, or independent media
scrutiny regarding the use of surveillance systems. This lack of transparency means there’s no
straightforward way to hold the government accountable if individuals are wrongly targeted
or mistreated (Mozur, 2019).
How the World Is Responding
Several governments and organizations are taking action to counter China’s surveillance
practices:
1. U.S. Sanctions: The U.S. has imposed export restrictions on Chinese tech companies
involved in surveillance, such as Hikvision and SenseTime (U.S. Department of
Commerce, 2021).
2. Human Rights Reports: Organizations like Human Rights Watch and Amnesty
International consistently release reports to raise awareness and pressure China to
adhere to global standards.
3. Technology Bans: Some nations are banning or restricting the use of Chinese
surveillance products in public security systems due to security and ethical concerns.
4. International Regulations: The UN and EU are discussing ways to develop global
guidelines for AI use in surveillance, with a focus on human rights and transparency.
Why Global Rules Are Needed
AI surveillance is spreading fast, and China is leading the way. Without binding international
rules, other countries may copy these systems, creating a world in which governments watch
everyone all the time. This would be a serious threat to:
• Freedom of speech
• Freedom of movement
• Freedom of religion
International agreements are urgently needed to define how AI can be used for surveillance
without violating basic rights.
Chapter 5
1. The United States: Strategic Investment and Innovation
In 2021, General Paul Nakasone, who was then leading U.S. Cyber Command and the NSA,
pointed out that “AI and machine learning are indispensable tools for enhancing national
security and deterring adversaries in cyberspace” (Nakasone, 2021). DARPA (Defense
Advanced Research Projects Agency), renowned for its groundbreaking military
technologies, has poured significant resources into AI-focused projects like the “AI Next”
campaign, which emphasizes contextual reasoning, human-AI collaboration, and explainable
AI. These investments are aimed at ensuring the U.S. maintains its edge on the battlefield and
strategic front.
2. China: Strategic Ambition and Military-Civil Fusion
China has emerged as a major player in AI warfare through a strategic, long-term,
government-led approach. President Xi Jinping has called AI a “key driving force”
behind China’s modernization, emphasizing that “whoever controls AI will control the future
of global development and security” (Xi, 2018). The “Next Generation Artificial Intelligence
Development Plan,” which came out in 2017, sets an ambitious goal for China to become the
world leader in AI by 2030, with significant implications for military use.
A key part of China’s strategy is the concept of Military-Civil Fusion (MCF), which allows
breakthroughs in civilian AI to be quickly adapted for military purposes. This means the
People’s Liberation Army (PLA) can take advantage of advancements from commercial AI
companies like Baidu, Huawei, SenseTime, and iFlytek. These companies are at the forefront
of developing technologies like facial recognition, drone swarms, and autonomous
surveillance systems that are used both at home and in military operations (Allen, 2019).
China has poured resources into autonomous technologies, including unmanned aerial
vehicles (UAVs), underwater drones, and AI-enhanced command and control systems. In
2020, the PLA even tested autonomous ground vehicles during joint exercises near the Indian
border, demonstrating how quickly it is integrating AI (Lin & Singer, 2020). China also leads
the world in AI-related patents, indicating a robust technical foundation.
According to the World Intellectual Property Organization (2023), China accounted for
nearly 60% of all AI patent filings from 2018 to 2022.
Even with all this progress, China's military AI systems still struggle with real-time
decision-making, ethical issues, and operational integration when compared to U.S. systems.
However, the centralized governance in China allows for faster development and deployment,
sidestepping some of the ethical debates that can slow down U.S. programs.
When you look at the United States and China side by side, it’s clear that both countries bring
unique strengths to the table when it comes to AI warfare. The U.S. has a notable advantage
in cutting-edge AI research, seamlessly integrating defense strategies, and drawing
innovation from top-tier universities and tech companies. American companies are at the
forefront of foundational AI research, including deep learning frameworks, natural language
processing, and computer vision, which are essential for building sophisticated military AI systems
(National Security Commission on Artificial Intelligence, 2021).
On the flip side, China shines in its ability to deploy technology quickly and efficiently,
thanks to a government-driven approach. With minimal regulatory hurdles and a cohesive
national strategy, China can swiftly prototype and scale AI-driven systems. As former Google
CEO Eric Schmidt pointed out, “China’s government has a clear plan to overtake the U.S. in
AI by 2030—and they’re executing on it with incredible focus” (Schmidt, 2020).
One area where China is making impressive strides is in the realm of autonomous drone
swarms and electronic warfare. Recent tests have shown that China can effectively coordinate
AI-controlled drones, potentially overwhelming traditional defense systems (Kania &
Costello, 2020). Meanwhile, the U.S. is focusing on the ethical development of AI, as
highlighted by the Pentagon’s adoption of five ethical AI principles in 2020, which include
reliability, governability, and traceability (U.S. Department of Defense, 2020).
Both nations understand that AI is a crucial battleground for future conflicts. The Pentagon’s
2023 report to Congress clearly states that “AI, alongside cyber and space technologies, will
determine the outcome of future warfare” (DoD, 2023). Similarly, China’s 2023 defense
white paper emphasizes that "intelligentization" is a key goal for military modernization,
underscoring the importance of AI in their strategy.
Swarm-Centric Dominance
When it comes to AI in warfare, the United States takes a strategic route that prioritizes
data-driven precision, human-AI collaboration, and accountability. At the heart of this
approach is the seamless integration of AI into decision-support systems, logistics, and
precision targeting, ensuring that AI enhances human command instead of overshadowing it.
The U.S. Department of Defense (DoD) has made it clear that AI should always operate
under “appropriate levels of human judgment,” showcasing a careful and ethical
perspective on its deployment (DoD, 2020).
In the military context, AI is mainly utilized through predictive analytics, battlefield
awareness systems, and cutting-edge reconnaissance tools. One standout initiative, Project
Maven, leverages AI to sift through drone footage, identifying potential targets for human
operators to review (Allen, 2019). This highlights the U.S. commitment to augmented
intelligence: AI that empowers human decision-makers rather than replacing them. During a
Senate hearing, former Google CEO Eric Schmidt, who led the National Security
Commission on Artificial Intelligence (NSCAI), emphasized, “The U.S. must lead in the
development of responsible, data-rich AI ecosystems to maintain battlefield advantage”
(Schmidt, 2020).
The Pentagon’s focus on data integrity, interoperability, and real-time feedback loops
underscores its dependence on secure, well-structured data. Initiatives like the Joint
All-Domain Command and Control (JADC2) illustrate this, as they integrate various data
streams from air, land, sea, cyber, and space to create a unified decision-making network
(Freedberg, 2021). JADC2 utilizes AI to filter, prioritize, and relay mission-critical
information across different services—boosting precision and cutting down on decision-
making delays.
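As a purely illustrative sketch of the kind of filtering and prioritising described above, the snippet below fuses messages from several domains into a single queue and releases the most urgent first. The field names and priority scores are assumptions; this is not the actual JADC2 architecture.

```python
# Hypothetical sketch: fuse multi-domain messages and surface the most
# urgent first, the role attributed to JADC2-style decision networks.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    priority: int                        # lower number = more urgent
    domain: str = field(compare=False)   # e.g. "air", "sea", "cyber"
    payload: str = field(compare=False)

queue: list[Message] = []
heapq.heappush(queue, Message(2, "sea", "surface contact report"))
heapq.heappush(queue, Message(0, "air", "inbound missile track"))
heapq.heappush(queue, Message(1, "cyber", "network intrusion alert"))

while queue:
    msg = heapq.heappop(queue)           # most urgent message first
    print(f"[{msg.domain}] {msg.payload}")
```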
Even with its sophisticated infrastructure, the U.S. AI strategy faces limitations due to ethical
guidelines and bureaucratic oversight, which can sometimes hinder its full potential.
However, these limitations are intentional safeguards to prevent misuse and maintain
alignment with democratic values.
China's approach to AI in warfare is all about swarm intelligence and autonomous systems,
pushing the boundaries of battlefield automation. Drawing inspiration from nature, their idea
of "intelligentized warfare" revolves around unmanned platforms that work together, often
with little to no human oversight. The Chinese People's Liberation Army (PLA) has poured
significant resources into developing swarming drone technology, underwater autonomous
vehicles, and robotic systems that leverage AI to operate seamlessly across various domains
(Kania & Costello, 2020).
A striking example of this strategy was showcased in a 2021 exercise where more than 200
AI-driven drones executed coordinated attack formations in simulated combat scenarios.
These swarms are capable of real-time communication, adapting to threats, and performing
synchronized maneuvers, which can easily outmatch traditional defense systems (Lin &
Singer, 2021). The China Academy of Electronics and Information Technology (CAEIT)
has noted that these systems symbolize “the future of warfighting—decentralized,
autonomous, and unpredictable” (CAEIT, 2021).
Additionally, China's strategy is shaped by the Military-Civil Fusion (MCF) policy, which
promotes the swift integration of commercial AI innovations into military applications.
Companies like DJI, Hikvision, and Baidu are key players in developing swarm
technologies, facial recognition systems, and autonomous navigation software. Unlike the
U.S., China tends to prioritize speed over ethical considerations, allowing for quicker
deployment of experimental systems (Allen, 2019).
At the heart of the U.S.-China AI rivalry lies a fundamental tension between speed and
control. The U.S. approach values data accuracy, ethical oversight, and collaboration between
humans and AI, while China focuses on scale, autonomy, and disruptive tactics on the
battlefield. These differing strategies are influenced by their political systems: the U.S. is
accountable to public and legislative scrutiny, whereas China's centralized governance allows
for swift implementation without public discussion.
The strength of the U.S. military is found in its advanced command structures, which
seamlessly integrate AI into operations across multiple domains. Systems like JADC2
improve cooperation among forces and allies, fostering coalition warfare that prioritizes
precision and situational awareness (NSCAI, 2021). On the flip side, China's asymmetric
strategy seeks to overwhelm these systems through nonlinear tactics, employing AI
swarms, deception, and electronic disruption.
Dr. Elsa Kania, a prominent expert on China's military AI, points out, “The Chinese
military sees AI not merely as a tool, but as a game changer that can redefine the nature of
warfare itself” (Kania, 2020). Meanwhile, American defense officials caution that if the U.S.
fails to keep pace with China's rapid AI integration, it could face “technological surprise” in
future conflicts (DoD, 2023).
In the end, these strategic differences indicate that while the U.S. and China might achieve
similar technological advancements, their applications will likely diverge significantly. The
U.S. is expected to keep focusing on responsible AI in warfare, while China may lean
towards autonomous saturation and rapid disruption as a means to achieve military parity or
dominance.
The United States stands at the forefront of AI development, but it also grapples with some
serious vulnerabilities tied to the intricate web of AI systems used in military networks. The
use of AI in areas like surveillance, logistics, command-and-control, and autonomous
vehicles opens up new avenues for potential attacks from adversaries. A major concern is
adversarial machine learning, where malicious actors can manipulate AI algorithms by
feeding them deceptive inputs, leading to incorrect decisions (Brundage et al., 2018).
A clear example of this is adversarial image perturbation, which involves making slight
changes to visual inputs to trigger misclassification. In a military setting, this could result in
an AI system mistakenly identifying a threatening tank as a civilian vehicle or the other way
around. The Pentagon’s Defense Innovation Board has recognized these dangers, stressing
the importance of robust testing and red-teaming for AI models (Defense Innovation Board,
2019).
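To illustrate the mechanism, the sketch below implements the widely cited fast gradient sign method (FGSM), a textbook adversarial perturbation technique: each pixel is nudged slightly in the direction that increases the model's loss. The `model`, `image`, and `true_label` objects are assumed placeholders; this is a generic research example, not any military system.

```python
# Minimal FGSM sketch (PyTorch), assuming `model` is an image classifier,
# `image` a normalised input tensor, and `true_label` its correct class.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an image perturbed just enough to push the classifier toward
    a wrong answer, while remaining visually almost identical."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # epsilon bounds the per-pixel change, keeping it imperceptible
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

An input altered this way can be misclassified even though a human observer would notice no difference, which is precisely the risk the red-teaming recommendations above are meant to catch.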
Retired U.S. Air Force General Jack Shanahan, who once led the Joint Artificial
Intelligence Center (JAIC), cautioned that “AI systems are only as secure as their weakest
digital links—and we need to treat AI security as national security” (Shanahan, 2020). In
light of these challenges, the Department of Defense has initiated AI-specific red-teaming and
resilience programs through the Test and Evaluation Directorate under the Chief Digital and
AI Office (CDAO).
One major worry regarding Chinese AI systems is their vulnerability to adversarial data
poisoning. With the vast amounts of surveillance data gathered from facial recognition
networks and social monitoring platforms, the threat of data injection attacks, in which small
segments of training data are deliberately corrupted, is quite high. These attacks can
undermine entire neural networks, often going unnoticed until a failure occurs (Zhou et al.,
2020).
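A minimal sketch of the simplest form of such an attack, label flipping, is shown below: a small, randomly chosen fraction of training labels is silently corrupted before training. The function and its parameters are hypothetical and purely illustrative.

```python
# Hypothetical label-flipping sketch: corrupt a small fraction of training
# labels so a model trained on them degrades while accuracy still looks normal.
import random

def poison_labels(labels: list[int], flip_fraction: float = 0.02,
                  num_classes: int = 10, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    poisoned = list(labels)
    for i in rng.sample(range(len(poisoned)),
                        int(flip_fraction * len(poisoned))):
        poisoned[i] = rng.randrange(num_classes)  # random (possibly wrong) class
    return poisoned
```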
Moreover, China's strategy of military-civil fusion blurs the lines between civilian and
military AI infrastructures. While this can speed up innovation, it also raises the stakes for
espionage, malware insertion, and systemic cross-contamination from less secure civilian
platforms. For instance, iFlytek, a prominent AI company known for its voice recognition
technology, has faced accusations of using insecure software that is susceptible to
surveillance interception (Human Rights Watch, 2019).
Chinese AI systems also struggle with limited testing environments due to censorship and
restricted data sharing. This limitation raises concerns that PLA systems might be exposed to
unknown edge cases or could fail in real combat situations when faced with unconventional
or deceptive inputs. A 2021 report from the Chinese Academy of Sciences noted that “current
AI systems exhibit low tolerance to adversarial deception and perform poorly under
ambiguous operational conditions” (CAS, 2021).
Moreover, China's significant investment in autonomous drone swarms brings about some
distinct vulnerabilities. The AI that powers these swarms depends on decentralized
communication, which can be jammed, spoofed, or hacked. This could lead to drones being
turned against their own operators or becoming ineffective during missions. If these systems
lack robust encryption and backup protocols, they could easily be taken over or disabled by
more advanced technological adversaries.
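One standard safeguard against the spoofing described above is message authentication: each command carries a cryptographic tag that a receiving drone verifies before acting, so an attacker without the key cannot forge instructions. Below is a minimal sketch using Python's standard hmac library, with a hypothetical pre-shared key; real systems would also need key management and replay protection.

```python
# Hypothetical sketch of authenticating swarm commands with an HMAC tag.
import hashlib
import hmac

SHARED_KEY = b"pre-shared-swarm-key"  # assumed key for illustration only

def sign(command: bytes) -> bytes:
    return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def verify(command: bytes, tag: bytes) -> bool:
    # constant-time comparison; a spoofed command without the key fails here
    return hmac.compare_digest(sign(command), tag)

# A drone would discard any command for which verify(command, tag) is False.
```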
While both countries grapple with significant AI vulnerabilities, their strategies for
addressing these issues are quite different. The United States leans towards a security-first,
ethics-driven approach, often opting to slow down deployment to ensure thorough
verification and safety. This involves comprehensive third-party audits, initiatives for
algorithmic transparency, and adversarial simulations. However, this cautious stance can
sometimes hinder innovation and provide an advantage to competitors who move more
quickly.
In contrast, China focuses on speed and scalability, frequently rolling out systems even before
they are fully secure or reliable. The decentralized nature of its AI swarms and automated
platforms can heighten operational risks, particularly in high-stress electromagnetic or cyber
environments (Lin & Singer, 2022).
Yet, China's centralized control and closer integration of military and civilian AI can facilitate
rapid responses and coordinated enhancements when vulnerabilities are identified. From a
geopolitical perspective, hacking, manipulating, and spoofing AI systems are now viewed as
strategic acts of warfare. The 2023 U.S. National Defense Strategy categorizes “AI system
compromise” as a major cyber threat, comparable to assaults on nuclear command-and-control
systems (DoD, 2023). Similarly, China’s 2022 White Paper on Military-Civil Fusion
recognizes AI infrastructure as a "national strategic asset that requires protection across
multiple domains" (PRC MoD, 2022).
Dr. Paul Scharre, an AI expert at CNAS, captured the dilemma succinctly: “We are entering
an era where an adversary may not need to destroy your system—they only need to corrupt
your data or trick your model, and the result could be catastrophic” (Scharre, 2022).
This shifting threat landscape compels both nations to emphasize secure architectures,
resilience against adversarial attacks, and the establishment of international norms for AI in
warfare. However, their current paths indicate very different levels of risk tolerance and
response strategies.
Chapter 6
The United Nations (UN) has taken a leading role in global efforts to regulate or outright ban
lethal autonomous weapons systems (LAWS). These systems are capable of identifying and
attacking targets without any human intervention. A prime example is drones that utilize AI
to autonomously locate and eliminate targets. Back in 2013, the nonprofit group Human
Rights Watch, along with the Harvard Law School’s International Human Rights Clinic,
kicked off a campaign known as the “Campaign to Stop Killer Robots.” This initiative
sparked international dialogue at the UN, particularly within the framework of the
Convention on Certain Conventional Weapons (CCW).
Since then, the CCW has convened several meetings where diplomats, scientists, and military
professionals have engaged in discussions about the dangers posed by autonomous weapons.
A consensus is emerging among most nations that human oversight is essential when it comes
to weaponry, especially in situations involving lethal force. However, a universal agreement
on banning or rigorously regulating autonomous weapons has yet to be reached.
Even though there’s no agreement yet, the momentum is definitely building. In 2023, over 70
countries took part in UN discussions in Geneva, with many calling for quicker action. A
report from the Stockholm International Peace Research Institute (SIPRI) warns that “failure
to regulate these systems could lead to uncontrolled arms races and greater risks of conflict”
(SIPRI, 2023).
One of the most significant threats posed by AI in warfare is the potential for accidental wars.
AI systems can act swiftly and sometimes in unpredictable ways. They might misinterpret
information, especially in complex and rapidly changing scenarios.
For instance, an AI system could mistake a training exercise for a real attack and respond
aggressively. If another nation reacts in kind, the situation could spiral into war, even though
no one intended it.
A 2020 study by the RAND Corporation cautioned that AI could “reduce decision time in
conflict situations to levels where human oversight is impossible,” raising the risk of
“catastrophic escalation due to misunderstanding or malfunction” (RAND, 2020). AI systems
learn from past data, but real-world conflicts can shift quickly. This means AI might make
decisions based on outdated or biased information. Additionally, adversaries could attempt to
deceive AI systems by providing false data—this tactic is known as an adversarial attack.
The accidental drone strike in Afghanistan in 2021, which tragically killed ten civilians,
including children, highlighted how reliance on technology can lead to devastating outcomes,
even when human operators are involved (New York Times, 2021). While this particular
incident didn’t involve AI, experts warn that similar risks will increase as AI systems take on
more decision-making roles.
To avoid dangerous mishaps and the misuse of AI in warfare, many experts argue that we
urgently need international treaties. These agreements could establish guidelines for when
and how AI can be deployed in weaponry, ensuring that humans remain accountable for
crucial decisions.
Unlike nuclear weapons, which are governed by treaties like the Non-Proliferation Treaty
(NPT), there are currently no global agreements in place for AI weapons. This lack of
regulation allows countries to develop autonomous weapons without disclosing their
functionalities or the limitations they adhere to. A 2020 report from the International
Committee of the Red Cross (ICRC) emphasized that “AI should not be allowed to remove
human responsibility from decisions to use lethal force,” advocating for robust international
laws to guarantee accountability and adherence to humanitarian law (ICRC, 2020).
The European Union has also begun to take regulatory steps. In 2021, the European
Parliament passed a resolution calling for a ban on “killer robots” and insisted on full human
oversight of AI military systems (European Parliament, 2021). The resolution highlighted that
“the decision to take a human life must never be delegated to a machine.” Many scholars and
ethicists back the concept of “meaningful human control.” This principle asserts that even if
AI assists in decision-making, a human must always review and approve those decisions.
Without this safeguard, it becomes unclear who would be held accountable if something goes
awry.
There are also worries that nations might exploit AI systems to launch cyberattacks or
manipulate information. For instance, AI could create deepfakes—realistic-looking fake
videos—to disseminate propaganda or incite panic. If such actions occur during a politically
charged moment, they could easily escalate into conflict. Since AI warfare impacts everyone,
including civilians and neutral nations, global collaboration is essential. Treaties could also
help mitigate the risk of an AI arms race among the United States, China, and other major
powers.
The use of AI in warfare brings up a host of ethical dilemmas. A key concern is about
accountability. If an AI system makes a grave error and causes the death of innocent people,
who should be held responsible? Is it the developer of the system? The military leader who
authorized its use? Or the nation that deployed it? In conventional warfare, we can hold
human soldiers and commanders accountable for their actions. However, with AI, tracing
responsibility becomes much more complicated. This ambiguity could allow nations to
sidestep accountability or evade consequences. Many ethicists contend that machines should
never be entrusted with the power to decide who lives or dies. Philosopher Peter Asaro refers
to this as the “moral hazard of delegating killing to machines” (Asaro, 2012). Others express
concern that employing AI in lethal situations dehumanizes warfare and could lower the
threshold for engaging in conflict. Religious organizations, peace advocates, and civil
society groups are joining forces with scientists to advocate for strict regulations on AI
weaponry. They
emphasize that humanity must not permit machines to take control of such critical decisions.
While it may take time to establish a legally binding treaty, experts believe it’s still feasible to
cultivate “norms” regarding AI usage. These norms are shared expectations or guidelines that
countries adhere to, even in the absence of formal agreements. For instance, many nations
already observe informal protocols against using chemical weapons or targeting hospitals
during conflicts. Similar norms could be established for AI, such as refraining from using AI
to target civilians or steering clear of fully autonomous weapons in complex scenarios.
International gatherings, academic symposiums, and think tanks are playing a crucial role in
shaping these norms. For example, the Global Partnership on AI (GPAI) and the Future of
Life Institute have put forth recommendations for the safe development of AI. The GPAI
comprises over 20 member countries, including the US, India, and the EU. Public awareness
and pressure can also make a difference. As with climate change and nuclear weapons, global
citizens and advocacy groups can push their governments to act responsibly.
Chapter 7
This study has delved into the swift evolution of artificial intelligence (AI) within military
settings, shedding light on the ethical, strategic, and political hurdles that come with its use.
Here are some key takeaways:
First, the United States and China are at the forefront of the global race to develop AI
warfare technologies, each taking a unique path. The U.S. prioritizes precision, the ethical
integration of AI, and collaboration between humans and machines, while China leans
towards speed, autonomous drone swarms, and a centralized approach through its
Military-Civil Fusion strategy.
Secondly, AI systems are not without their vulnerabilities; they can be hacked, manipulated,
or fail altogether. Both the U.S. and China are grappling with cybersecurity threats, especially
as they increasingly rely on autonomous systems that operate at machine speed, often
outpacing human oversight. These weaknesses raise the stakes for accidental conflicts,
particularly in high-pressure or unclear situations.
Thirdly, the ethical questions surrounding lethal autonomous weapons (LAWS) remain a
contentious issue: there is still no global consensus on the matter. While over 70 countries and
the United Nations have pushed for binding agreements to limit or ban these weapons, major
military powers like the U.S., China, and Russia tend to favor voluntary guidelines instead of
strict legal regulations.
Lastly, the absence of international governance frameworks for AI creates a regulatory gap.
Without global treaties in place, countries might continue to develop AI weaponry
unchecked, raising the risk of an arms race, escalation of AI capabilities, and potential misuse
in cyber conflicts.
These insights underscore the pressing need for ethical leadership, coordinated diplomatic
efforts, and strong global policies.
2. Predictions for AI Warfare (2030–2050)
The next twenty years are poised to bring significant changes to warfare, as AI transitions
from a supportive role to a central player in defense strategies. Several trends are anticipated
to shape the future of AI warfare.
By 2030, we can expect many countries to roll out lethal autonomous drones, land robots, and
underwater vehicles that operate without real-time human supervision. Research from the
Stockholm International Peace Research Institute (SIPRI, 2023) shows that over 50 nations
are either developing or testing these systems.
China is already experimenting with AI-driven drones that can launch coordinated attacks in
swarms. Meanwhile, the U.S. is pushing forward with autonomous platforms through
initiatives like DARPA’s OFFSET (Offensive Swarm-Enabled Tactics), which aims to
deploy 250 or more drones in urban settings (DARPA, 2021).
Looking ahead to 2040–2050, AI systems could take the lead in certain battlefield scenarios,
making machine-on-machine combat a reality. This evolution brings up important questions
about how quickly decisions can be made, how reliable they are, and the lack of human moral
judgment in high-pressure situations.
By 2050, military planning might be shaped more by predictive modeling and data
simulations than by generals in strategy rooms.
C. Algorithmic Arms Races and Cyber-Conflict
AI will also be weaponized in the realm of cyberwarfare, used for disinformation campaigns,
manipulating satellite networks, or spoofing enemy systems. The danger of “black box”
decision-making—where the outcomes are opaque and unexplainable—could become a
significant risk, particularly in nuclear command and control scenarios. A report from the
RAND Corporation raises a serious concern: algorithmic escalation—where autonomous
systems might accidentally spark a war due to their preprogrammed responses—could pose
one of the biggest threats by 2040 (RAND, 2020).
To steer clear of the disastrous potential of AI warfare, the global community needs to act
now. The following policy recommendations aim to foster safety, accountability, and peace.
Just like chemical and biological weapons, lethal autonomous weapons should be regulated or
even banned under international law. The United Nations and its member states should build
on the CCW framework to establish a binding agreement that ensures humans maintain
meaningful control over the use of lethal force. With over 30 countries already backing a ban,
it’s time to turn that momentum into real treaties.
Bibliography
• Allen, G. (2019). Understanding China’s AI Strategy: Clues to Chinese Strategic
Thinking on Artificial Intelligence and National Security. Center for a New American
Security.
• Congressional Budget Office. (2023). The U.S. Military Budget in Historical Context.
https://www.cbo.gov
• Kania, E. B. (2019). Battlefield Singularity: Artificial Intelligence, Military
Revolution, and China’s Future Military Power. Center for a New American Security.
• Lin, H., & Singer, P. W. (2020). China’s AI-Powered Warfare: Are We Ready for an
AI Arms Race? War on the Rocks.
• Nakasone, P. M. (2021). Remarks at the National Security Commission on Artificial
Intelligence.