Thanks to visit codestin.com
Credit goes to www.scribd.com

0% found this document useful (0 votes)
35 views3 pages

Owasp Top 10 LLM

The OWASP Top 10 for LLM outlines critical vulnerabilities associated with large language models, including prompt injection, insecure output handling, and training data poisoning. It emphasizes the risks of sensitive information disclosure, overreliance on LLMs, and model theft, which can lead to severe security breaches and economic losses. The document provides prevention strategies to mitigate these vulnerabilities, such as enforcing privilege controls and implementing human oversight.

Uploaded by

sriram b
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
35 views3 pages

Owasp Top 10 LLM

The OWASP Top 10 for LLM outlines critical vulnerabilities associated with large language models, including prompt injection, insecure output handling, and training data poisoning. It emphasizes the risks of sensitive information disclosure, overreliance on LLMs, and model theft, which can lead to severe security breaches and economic losses. The document provides prevention strategies to mitigate these vulnerabilities, such as enforcing privilege controls and implementing human oversight.

Uploaded by

sriram b
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 3

OWASP Top 10 for LLM

VERSION 1.0
Published: August 1, 2023
| OWASP Top 10 for LLM v1.0

OWASP Top 10 for LLM


LLM01 LLM02 LLM03 LLM04 LLM05

Prompt Injection Insecure Output


Training Data
Model Denial of
Supply Chain
This manipulates a large language
Handling Poisoning Service Vulnerabilities
model (LLM) through crafty inputs, This vulnerability occurs when an LLM Training data poisoning refers to Attackers cause resource-heavy LLM application lifecycle can be
causing unintended actions by the LLM. output is accepted without scrutiny, manipulating the data or fine-tuning operations on LLMs, leading to service compromised by vulnerable
Direct injections overwrite system exposing backend systems. Misuse process to introduce vulnerabilities, degradation or high costs. The components or services, leading to
prompts, while indirect ones manipulate may lead to severe consequences like backdoors or biases that could vulnerability is magnified due to the security attacks. Using third-party
inputs from external sources. XSS, CSRF, SSRF, privilege escalation, or compromise the model’s security, resource-intensive nature of LLMs and datasets, pre- trained models, and
remote code execution. effectiveness or ethical behavior. unpredictability of user inputs. plugins add vulnerabilities.

LLM06 LLM07 LLM08 LLM09 LLM10

Sensitive Information Insecure Plugin


Excessive Agency Overreliance Model Theft
Disclosure Design LLM-based systems may undertake Systems or people overly depending on This involves unauthorized access,
LLM’s may inadvertently reveal LLM plugins can have insecure inputs actions leading to unintended LLMs without oversight may face copying, or exfiltration of proprietary
confidential data in its responses, and insufficient access control due to consequences. The issue arises from misinformation, miscommunication, LLM models. The impact includes
leading to unauthorized data access, lack of application control. Attackers excessive functionality, permissions, or legal issues, and security vulnerabilities economic losses, compromised
privacy violations, and security can exploit these vulnerabilities, autonomy granted to the LLM-based due to incorrect or inappropriate content competitive advantage, and potential
breaches. Implement data sanitization resulting in severe consequences like systems. generated by LLMs. access to sensitive information.
and strict user policies to mitigate this. remote code execution.
| OWASP Top 10 for LLM v1.0

EXAMPLES
LLM01
Direct prompt injections overwrite system prompts

Indirect prompt injections hijack the conversation context

A user employs an LLM to summarize a webpage containing an indirect

Prompt Injection prompt injection.

Attackers can manipulate LLM’s through PREVENTION

crafted inputs, causing it to execute the Enforce privilege control on LLM access to backend systems

Implement human in the loop for extensible functionality


attacker's intentions. This can be done Segregate external content from user prompts

directly by adversarially prompting the Establish trust boundaries between the LLM, external sources, and

extensible functionality.
system prompt or indirectly through

manipulated external inputs, potentially AT TACK SCEN ARIOS

leading to data exfiltration, social An attacker provides a direct prompt injection to an LLM-based support

chatbot
engineering, and other issues. An attacker embeds an indirect prompt injection in a webpage

A user employs an LLM to summarize a webpage containing an indirect

prompt injection.

You might also like