Complete AI governance and LLM evals platform with support for the EU AI Act, ISO 42001, NIST AI RMF, and 20+ more AI frameworks and regulations. Join our Discord channel: https://discord.com/invite/d3k3E4uEpR
NIST AI RMF applied to a real research cluster. We govern our own AI tools, document decisions, and share templates. Practical governance for small teams.
nod is a platform-agnostic, rule-based linter that ensures AI/LLM specifications contain critical security and compliance elements before any agentic or automated development begins.
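A rule-based spec linter of this kind can be sketched as a set of required-element checks run against the spec text before development starts. The rule IDs and phrases below are illustrative assumptions, not nod's actual rule set:

```python
# Hypothetical sketch of a rule-based spec check. Each rule pairs an
# illustrative ID with a phrase the spec must mention; the real nod
# rules and matching logic are not shown here.
RULES = [
    ("SEC-001", "threat model"),
    ("SEC-002", "data handling"),
    ("COMP-001", "regulatory mapping"),
]

def lint_spec(spec_text: str) -> list[str]:
    """Return the IDs of rules whose required phrase is absent from the spec."""
    lower = spec_text.lower()
    return [rule_id for rule_id, phrase in RULES if phrase not in lower]

# Example: a spec covering two of the three required elements
violations = lint_spec("Our threat model and data handling plan are defined.")
```

An empty result would mean the spec passes; a non-empty list blocks agentic or automated development until the missing elements are added.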
Operationalizes PM insights through working agents grounded in GRC best practices. Provides prompt libraries and tools to identify governance and compliance risks before scaling programs, analytics initiatives, or AI systems.
Hardened public release of the KAIROS invocation governance framework. Includes invocation terms, ethical compliance clauses, regulatory mapping, and sample outputs. Licensed under CC BY-NC-ND 4.0; do not auto-generate the license via GitHub — use the hardened License.txt.
White paper and talk covering benefits, risks, and mitigation frameworks for AI and LLMs in cybersecurity (NIST AI RMF, OWASP Top 10 for LLMs, MITRE ATLAS, real-world case studies).
Lightweight AI governance risk assessment tool: a Streamlit app that scores AI models across key risk factors (data quality, bias, privacy, explainability, robustness) and generates a PDF report.
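The scoring step in a tool like this can be sketched as an aggregation over per-factor ratings. The equal weighting, 1–5 scale, and risk-level thresholds below are assumptions for illustration, not the tool's actual methodology:

```python
# Minimal sketch of factor-based AI risk scoring, assuming each factor
# is rated 1 (low risk) to 5 (high risk) and all factors weigh equally.
RISK_FACTORS = ["data_quality", "bias", "privacy", "explainability", "robustness"]

def assess(scores: dict) -> dict:
    """Aggregate per-factor ratings into an overall score and risk level."""
    missing = [f for f in RISK_FACTORS if f not in scores]
    if missing:
        raise ValueError(f"missing factor scores: {missing}")
    overall = sum(scores[f] for f in RISK_FACTORS) / len(RISK_FACTORS)
    # Illustrative thresholds for the summary risk level
    level = "low" if overall < 2 else "medium" if overall < 3.5 else "high"
    return {"overall": overall, "level": level}

# Example: a model with moderate bias and privacy concerns
result = assess({"data_quality": 2, "bias": 4, "privacy": 3,
                 "explainability": 2, "robustness": 2})
```

In the actual app, a dictionary like `result` would feed the Streamlit UI and the generated PDF report.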