Thanks to visit codestin.com
Credit goes to github.com

Skip to content
#

ai-security

Here are 130 public repositories matching this topic...

A deliberately vulnerable banking application designed for practicing Security Testing of Web App, APIs, AI integrated App and secure code reviews. Features common vulnerabilities found in real-world applications, making it an ideal platform for security professionals, developers, and enthusiasts to learn pentesting and secure coding practices.

  • Updated Oct 13, 2025
  • Python

Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily

  • Updated Oct 17, 2025
  • Python

PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.

  • Updated Jun 29, 2025
  • Python

Improve this page

Add a description, image, and links to the ai-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the ai-security topic, visit your repo's landing page and select "manage topics."

Learn more