Guardrails for LLMs: detect and block hallucinated tool calls to improve safety and reliability.
Updated Jul 18, 2025 - Go
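The listing does not include the project's actual API, but the core idea of blocking hallucinated tool calls can be sketched in a few lines of Go: keep a registry of the tools the application really exposes, and reject any model-proposed call whose name or required arguments do not match. All names below (`ToolCall`, `ToolSpec`, `Guard`) are hypothetical, a minimal sketch rather than the project's real interface.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolCall is an LLM-proposed tool invocation (hypothetical shape).
type ToolCall struct {
	Name string          `json:"name"`
	Args json.RawMessage `json:"arguments"`
}

// ToolSpec declares a tool the application actually exposes.
type ToolSpec struct {
	Name         string
	RequiredArgs []string
}

// Guard blocks tool calls that reference unknown tools or omit required arguments.
type Guard struct {
	registry map[string]ToolSpec
}

func NewGuard(specs []ToolSpec) *Guard {
	g := &Guard{registry: make(map[string]ToolSpec)}
	for _, s := range specs {
		g.registry[s.Name] = s
	}
	return g
}

// Check returns an error if the proposed call looks hallucinated.
func (g *Guard) Check(call ToolCall) error {
	spec, ok := g.registry[call.Name]
	if !ok {
		return fmt.Errorf("blocked: tool %q is not registered", call.Name)
	}
	var args map[string]any
	if err := json.Unmarshal(call.Args, &args); err != nil {
		return fmt.Errorf("blocked: arguments for %q are not valid JSON: %w", call.Name, err)
	}
	for _, req := range spec.RequiredArgs {
		if _, present := args[req]; !present {
			return fmt.Errorf("blocked: %q is missing required argument %q", call.Name, req)
		}
	}
	return nil
}

func main() {
	guard := NewGuard([]ToolSpec{{Name: "search_orders", RequiredArgs: []string{"customer_id"}}})

	// A call to a tool the model invented is rejected before execution.
	bad := ToolCall{Name: "delete_all_orders", Args: json.RawMessage(`{}`)}
	fmt.Println(guard.Check(bad))

	// A well-formed call to a registered tool passes.
	good := ToolCall{Name: "search_orders", Args: json.RawMessage(`{"customer_id":"c-42"}`)}
	fmt.Println(guard.Check(good))
}
```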
AxonFlow — Source-available AI control plane for production LLM systems
A safer rm command that moves files to trash instead of permanently deleting them. Protects against accidental deletion by humans and AI agents.
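The move-to-trash behavior can be sketched roughly as follows in Go: instead of unlinking the file, rename it into a trash directory. The trash path, the timestamp suffix, and the `saferm` name are assumptions for illustration; the actual tool likely follows the FreeDesktop trash layout more carefully and handles cross-filesystem moves.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// trashDir returns a hypothetical per-user trash location, creating it if needed.
func trashDir() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	dir := filepath.Join(home, ".local", "share", "Trash", "files")
	return dir, os.MkdirAll(dir, 0o755)
}

// moveToTrash renames the file into the trash directory instead of deleting it,
// adding a timestamp so repeated deletions of the same name do not collide.
func moveToTrash(path string) error {
	dir, err := trashDir()
	if err != nil {
		return err
	}
	name := fmt.Sprintf("%s.%d", filepath.Base(path), time.Now().UnixNano())
	// Note: os.Rename fails across filesystems; a real tool would fall back to copy+remove.
	return os.Rename(path, filepath.Join(dir, name))
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: saferm FILE...")
		os.Exit(1)
	}
	for _, p := range os.Args[1:] {
		if err := moveToTrash(p); err != nil {
			fmt.Fprintln(os.Stderr, "saferm:", err)
			os.Exit(1)
		}
	}
}
```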