A security scanner for your LLM agentic workflows
-
Updated
Oct 22, 2025 - Python
A security scanner for your LLM agentic workflows
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
Code scanner to check for issues in prompts and LLM calls
Open-source LLM Prompt-Injection and Jailbreaking Playground
Hackaprompt v1.0 AIRT Agents
Projet issu du codelab Devfest Nantes 2025 “La guerre des prompts” : atelier de 2h pour apprendre à pirater des IA et comment les protéger via des frameworks open source
Run Repello Artemis security scans on your AI assets.
Add a description, image, and links to the ai-red-teaming topic page so that developers can more easily learn about it.
To associate your repository with the ai-red-teaming topic, visit your repo's landing page and select "manage topics."