Thanks to visit codestin.com
Credit goes to www.scribd.com

0% found this document useful (0 votes)
12 views28 pages

Gen AI

The document discusses the dual-use nature of Generative AI, highlighting its potential to both empower cybersecurity defenses and facilitate cyberattacks. It covers foundational concepts of machine learning and deep learning, the ethical dilemmas surrounding AI use, and practical applications in threat intelligence and incident response. The document emphasizes the need for responsible AI use, regulation, and the importance of balancing innovation with risk management.

Uploaded by

Om Prakash Singh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
12 views28 pages

Gen AI

The document discusses the dual-use nature of Generative AI, highlighting its potential to both empower cybersecurity defenses and facilitate cyberattacks. It covers foundational concepts of machine learning and deep learning, the ethical dilemmas surrounding AI use, and practical applications in threat intelligence and incident response. The document emphasizes the need for responsible AI use, regulation, and the importance of balancing innovation with risk management.

Uploaded by

Om Prakash Singh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 28

Generative AI

Empowering Defenders or Equipping Attackers


(2025-07)

Dr. Abhishek Singh

AI Engineering
1
Overview
Topics We’ll Be Covering

T1: Foundations of Generative AI T3: Hands-on


Machine Learning Basics How to use Google AI Studio?
Deep Learning Basics PDF Summarization
Generative AI Prompt Injection
T2: Dual-use Nature
GenAI in Cybersecurity as a friend
GenAI: Cybersecurity’s worst nightmare

2
Foundation: Machine Learning
What is Machine Learning?

● Machine Learning (ML) is a field of artificial intelligence that


empowers computers to learn from data without explicit
programming. Instead of being explicitly programmed with
rules, ML algorithms learn patterns from data and make
predictions or decisions based on those patterns.

Core of Machine Learning?

● Data - Any unprocessed fact, value, text, sound or picture.


● Model - Learning general trend from data.

Features of Machine Learning?

● ML techniques are able to find and highlight various patterns


in the data very quickly in comparison to human beings, who
may miss these patterns due to the size of the data.
● ML techniques make informed inferences on a wide range of
problems, from helping diagnose diseases to coming up with
solutions for critical business deals.
3
Foundation: k- Nearest Neighbors
Activity:
● Form groups of close friends among
yourselves except one student.
● Ask everybody to rate themself on a scale
of 1-10 in studies and sports including the
student left in previous step.
● With the help of unknown student ratings,
choose the appropriate group he/she
should belong to.
Solution: “You tell me about your interests
and I will guess the group you are likely to
belong”

4
Foundation: k- Nearest Neighbors
Solution:
● Select the unlabeled data point.
● Measure the distance from unlabeled data
point to all other data points.
● Sort the distance and take top “k” labeled
data points.
● Count the number of members from each
group across top “k” points.
● Assign the label whose members have
higher count.
● In case of tie, choose randomly.

5
Foundation: k- Nearest Neighbors
Advantages:
● Natural method.
● Simplest among all.
● Requires no explicit training or model.
● Can be extended to other models (graph based methods).
Limitations:
● Computationally intensive.
● Difficult to decide value of “k”.
● Susceptible to noise and curse of dimensionality.

6
Foundation: Deep Learning Basics
Deep Learning:
● It is a subset of Machine Learning, involves training
artificial neural networks with multiple layers to learn
complex patterns in data. The very complexity that makes
deep learning so powerful.
● At its core, deep learning mimics the structure and
function of the human brain. Neural networks, the building
blocks of deep learning, consist of interconnected nodes
called neurons. These neurons process information in
layers, with each layer learning increasingly complex
features from the input data.
Artificial Neural Networks (ANNs):
● ANNs are composed of layers of interconnected nodes
(neurons) that process and transform inputs into
meaningful outputs. Each node applies a non-linear
transformation to the input data, allowing the network to
learn complex patterns and relationships.
7
Foundation: Generative AI
● Generative AI models are trained on massive
datasets and use statistical patterns to generate
realistic new data.
● Examples include ChatGPT (text), DALL·E (images),
and Codex (code).
● These models work by predicting the next word,
pixel, or code line based on context and probability.
● The output can often seem creative or human-like,
even though it is generated statistically.
● Generative AI is a form of artificial intelligence that is
now widely accessible through web apps, APIs, and
open-source tools.

8
GenAI: The Promise and Perils
● Generative AI refers to systems that create new
content (text, images, audio, code) based on data
they have been trained on.
● These systems can assist humans in writing,
designing, coding, and simulating ideas rapidly.
● However, the same capabilities can be exploited to
generate malicious content like fake news or phishing
emails.
● Let’s explore how generative AI is being used in both
security defense and in cybercrime.
● Our goal is to develop a nuanced understanding of its
dual-use nature: both beneficial and harmful.

9
GenAI: Dual-Use Nature
● The term "dual-use" refers to technology that can be
used for both helpful and harmful purposes.
● Generative AI can empower defenders to detect
threats or help attackers create them.
● The ethical impact depends on the user’s intent, not
the technology itself.
● For example, AI can write a cybersecurity guide - or it
can write malware.
● Recognizing this dual nature is critical to managing
risk and guiding responsible AI use.

10
GenAI: Cybersecurity Defense
● Generative AI can analyze large volumes of log data
to identify patterns that suggest cyberattacks.
● It can generate readable summaries of vulnerabilities
and suggest mitigation strategies to IT teams.
● AI can simulate realistic attack scenarios to test and
improve an organization’s defense posture.
● These tools reduce analyst workload by automating
routine tasks like alert triage and documentation.
● Generative AI enables faster and more accurate
decision-making during incidents.

11
GenAI: AI-Powered Threat Intelligence
● AI can crawl websites, forums, and social media to
gather early warning signs of cyber threats.
● It can analyze hacker discussions on the dark web to
identify new attack techniques or tools.
● Summarization features allow teams to process
thousands of alerts and threat reports quickly.
● AI can map behaviors and tactics used by specific
threat actors using natural language processing.
● This makes threat intelligence more proactive and
less reliant on manual research.

12
GenAI: Incident Response and Recovery
● During a security incident, AI can help generate
containment and recovery steps based on prior
cases.
● It can assist in identifying the root cause by
correlating data from multiple sources.
● Generative AI can automate reporting, translating
complex events into understandable formats for all
stakeholders.
● This reduces recovery time and minimizes business
impact.
● By augmenting human analysts, AI makes response
more efficient and less error-prone.

13
GenAI: Security Education and Training
● AI can generate custom phishing emails for
employee awareness training.
● It can create dynamic, role-specific cyberattack
simulations to teach defense tactics.
● Learners can interact with AI tutors to understand
how real-world threats work.
● These tools make cybersecurity education more
engaging and tailored to individual skill levels.
● AI helps scale training programs across large
organizations with consistent quality.

14
GenAI: Recent News
● Google's AI Big Sleep detected and stopped a
cyber attack before it started
● Big Sleep combines threat intelligence with AI
for proactive security.
● Actively searches and finds unknown security
vulnerabilities in software.

15
GenAI: Attackers Leverage
● Attackers use AI to generate malware code, often
designed to avoid detection by antivirus tools.
● They create highly convincing phishing emails with
flawless language and formatting.
● AI tools can automate the creation of fake identities
and social engineering scripts.
● Even inexperienced individuals can now launch
sophisticated attacks using generative AI.
● Cybercrime becomes cheaper, faster, and easier due
to automation.

16
GenAI: Deepfakes and Misinformation
● Generative AI can synthesize human voices and
faces, making deepfakes look very realistic.
● These deepfakes can be used in scams (e.g.,
impersonating a CEO to authorize a bank transfer).
● They can also spread false information during
elections or crises.
● The technology evolves faster than detection
methods, making response difficult.
● This undermines trust in photos, videos, and even
voice recordings.

17
GenAI: AI-Enhanced Crimeware as a Service
● Some dark web marketplaces offer AI-generated
malware as a service.
● These tools often come with user-friendly interfaces
and automation features.
● Attackers can pay monthly fees to access
AI-enhanced attack tools, no expertise required.
● This trend is making cybercrime more scalable and
accessible than ever before.
● Security professionals must now defend against
smarter, cheaper, and automated threats.

18
GenAI: Prompt Injection and Jailbreak Attacks
● Prompt injection involves manipulating the AI’s
behavior by inserting hidden commands in user input.
● Attackers can bypass model safeguards and make AI
generate harmful content.
● This technique is especially dangerous in
applications like chatbots or virtual assistants.
● The AI may not recognize malicious prompts
embedded in harmless-looking text.
● Developers must secure prompts and monitor context
to prevent exploitation.

19
GenAI: Real-World Examples of Defensive Use
● Microsoft Security Copilot helps analysts understand
threats and respond faster.
● DARPA uses generative AI to simulate attacker
behavior and stress-test systems.
● Financial institutions use AI to detect fraudulent
transactions in real time.
● AI chatbots assist governments in public cyber
awareness campaigns.
● These examples show how generative AI can
strengthen defense if used responsibly.

20
GenAI: Misuse in the Real World
● WormGPT was created as an unrestricted AI for
cybercriminal use.
● Scammers use AI to create fake romantic partners for
long-term fraud.
● A deepfake of a company executive led to a major
wire fraud case.
● LLMs have been jailbroken to generate malware,
phishing kits, and even terrorist manifestos.
● These examples illustrate the real-world risks of
unregulated generative AI tools.

21
GenAI: Recent News

22
GenAI: Ethical Dilemmas and Responsibility
● Should model creators be responsible for how their
AI is used?
● What role do governments play in setting rules for
AI?
● Is open-source AI too dangerous in the wrong hands?
● Should we ban dual-use models or just monitor them
more closely?
● These are complex questions with no easy answers -
policy, ethics, and technology must align.

23
GenAI: Building Safer Systems
● Developers can use techniques like watermarking to
mark AI-generated content.
● Safety training helps models refuse dangerous or
unethical requests.
● Red-teaming involves testing AI models for
vulnerabilities before release.
● Adversarial training teaches models to resist
manipulation by bad actors.
● Secure input management and logging helps trace
and prevent prompt injection attacks.

24
GenAI: Regulations and Governance
● Governments could require licenses or audits for
high-risk AI models.
● Companies may be asked to publish safety reports
and testing results.
● Global collaboration is essential to prevent AI misuse
across borders.
● Shared databases of malicious prompts and use
cases can improve defense.
● Standards for responsible AI development need to be
adopted by the industry.

25
GenAI: Innovation vs. Risk
● Generative AI offers incredible benefits in science,
education, and defense.
● But unchecked use can lead to widespread harm and
abuse.
● Innovation must be paired with responsible design,
oversight, and education.
● Transparency, testing, and public dialogue are
essential to move forward safely.
● The goal is not to ban AI - but to use it wisely and
ethically.

26
Let us move to
Hands-on
session

You might also like