Agent Indoctrination: AI Safety, Bias, Fairness, Ethics & Compliance Testing Framework
Updated Nov 25, 2025 - Python
Recon-Level Audit of Claude 4: Obfuscated, Ethical & Technically Precise
Ethical red-team audit of Claude 4 with clear introspection and policy visibility. Includes JSON data and Python tooling; Mermaid diagrams map model behavior.