When the stakes are high, intelligence is only half the equation - reliability is the other
Updated Dec 16, 2025 - JavaScript
Desktop app with a built-in LLM for removing personally identifiable information (PII) from documents.
🤝 A sacred covenant between machine and man. Accountability gates for AI agents.
The Open Safety Signal Protocol (OSSP) specification - vendor-neutral standard for AI safety telemetry built on CloudEvents
A comprehensive web application with 60+ tools spanning developer utilities, AI engineering, misinformation research, e-portfolio management, cyber resilience monitoring, and AI safety testing.
Hackathon project (Dubai, 2025, Women in AI Safety) exploring how inclusive storytelling can raise awareness of AI safety and ethical concerns for the general public while contributing to cultural preservation. Collaborative work — my role: concept framing, ethical design, and governance perspective.
Mathematical Conscience Framework, a collaboration among 10 AI models.
APEX (Action Policy EXecution) is a minimal, external execution boundary for AI systems. It evaluates declared agent intent against explicit, operator-defined policy before execution, enabling deterministic, inspectable control without relying on in-model guardrails.
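The kind of gate APEX describes can be illustrated with a minimal sketch: an agent declares its intent, and an external evaluator checks it against operator-defined policy before anything runs. All names and the policy shape below are hypothetical, not APEX's actual API.

```python
# Hypothetical sketch of an external policy-execution boundary:
# the agent declares an intent; a deterministic evaluator allows or
# denies it against operator-defined policy before execution.
# Names and policy format are illustrative, not APEX's real interface.

from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    action: str   # e.g. "file.write"
    target: str   # e.g. "/tmp/report.txt"

# Operator-defined policy: permitted actions mapped to allowed target prefixes.
POLICY = {
    "file.read":  ["/data/"],
    "file.write": ["/tmp/"],
}

def evaluate(intent: Intent) -> bool:
    """Deterministically allow or deny a declared intent; default-deny."""
    prefixes = POLICY.get(intent.action)
    if prefixes is None:
        return False  # action not declared in policy at all
    return any(intent.target.startswith(p) for p in prefixes)

print(evaluate(Intent("file.write", "/tmp/out.txt")))  # allowed by policy
print(evaluate(Intent("file.write", "/etc/passwd")))   # denied: prefix not allowed
```

Because the check is a pure function of the declared intent and the policy, the decision is inspectable and reproducible, which is the point of keeping the boundary outside the model rather than in-model guardrails.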
⏰ AI conference deadline countdowns
A strict guideline for controlling LLMs (like ChatGPT) to avoid AI-sounding outputs. Enforces verification, labeling, and natural human style while reducing AI-detection traces.