A rationalist ruleset for "debugging" LLMs, auditing their internal reasoning and uncovering biases; also a jailbreak.
Updated Nov 1, 2025
Symbolic containment system for recursive logic traps — ARCHON firewall + audit