Bullet-proof your custom GPT's system prompt with KEVLAR, a prompt protector against rule extraction, prompt injection, and leaks of an AI agent's secret instructions.
Updated Apr 12, 2024
Exploring MCP security: prompt injections, leaks, auth bypasses, OPA policies, and quirky edge cases. A playground for safe, educational, and cutting-edge model-security experiments.