Status: Draft. This project is under heavy development and may change without notice. We welcome input, issues, and contributions.
⚠️ WARNING This codebase is currently being developed on the `crml-dev-1.3` branch. For the latest work-in-progress and source of truth, see: https://github.com/Faux16/crml/tree/crml-dev-1.3
Version: 1.2
Maintained by: Zeron Research Labs and CyberSec Consulting LLC
Supported by:
- Community contributors and early adopters
CRML is an open, declarative Cyber Risk Modeling Language that is agnostic to both the simulation engine and the control/attack framework. It provides a YAML/JSON format for describing cyber risk models, telemetry mappings, simulation pipelines, dependencies, and output requirements — without forcing you into a specific quantification method, simulation engine, or security-control/threat catalog.
CRML enables RaC (Risk as Code): risk and compliance assumptions become versioned, reviewable artifacts that can be validated and executed consistently across teams and tools.
Cyber security, compliance, and risk management professionals often face the same practical problems:
- Risk models are locked in spreadsheets, slide decks, or proprietary tools, making them hard to review, audit, reproduce, and automate.
- Control effectiveness and “defense in depth” assumptions are documented inconsistently, so results vary by analyst and by quarter.
- Threat and control frameworks (e.g., ATT&CK, CIS, NIST, ISO, SCF, internal catalogs) change over time and lack a consistent machine-readable format, so mappings are brittle and rarely versioned.
- Quantification engines differ (FAIR-style Monte Carlo, Bayesian/QBER, actuarial models, internal platforms), causing costly rewrites and re-interpretation.
- Audit-ready evidence is fragmented: “what was modeled, with which parameters, using which data, and producing which outputs” is hard to prove.
CRML addresses this by standardizing the description of cyber risk models and their inputs/outputs, so different engines and organizations can exchange and execute the same model with clear validation and traceability.
Qualitative methods (red/amber/green, “high/medium/low”, maturity scores) are useful for communication and prioritization, but they tend to break down when you need to:
- Justify security spend (or a new security product) by comparing expected risk with vs. without the investment
- Compare risk consistently across business units, vendors, or time periods
- Show measured risk reduction from controls (not just “improved posture”)
- Connect cyber risk to enterprise risk, insurance, and financial planning
- Produce repeatable, audit-ready evidence of “how we calculated this number”
The next evolution is quantified risk management: treating cyber risk as an estimable distribution of outcomes, grounded in explicit assumptions and data, and computed by repeatable methods. But quantified approaches only scale when models are standardized — so they can be validated, reviewed, reused, and executed across tools and teams.
CRML’s goal is to be this standard: it makes the model portable, the assumptions explicit, and the results reproducible.
- Control effectiveness modeling — quantify how controls reduce risk (including defense-in-depth)
- Median-based parameterization — specify medians directly for lognormal distributions (see the sketch after this list)
- Multi-currency support — model across currencies with automatic conversion
- Auto-calibration — calibrate distributions from loss data
- Strict validation — JSON Schema validation catches errors before simulation
- Implementation-agnostic — works with any compliant simulation engine
- Human-readable YAML — easy to read, review, and audit
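
To see why median-based parameterization is convenient: for a lognormal distribution with log-scale parameters mu and sigma, the median is exp(mu), so a specified median maps directly to mu = ln(median). The NumPy sketch below illustrates just that arithmetic; it is not CRML code, and the example values are borrowed from the scenario snippet further down.

```python
import numpy as np

# Median-based lognormal parameterization: median = exp(mu)  =>  mu = ln(median)
median, sigma = 250_000, 1.2          # example values, matching the snippet below
mu = np.log(median)

samples = np.random.default_rng(0).lognormal(mean=mu, sigma=sigma, size=100_000)
print("empirical median:", np.median(samples))   # ~250,000
print("empirical mean:  ", samples.mean())       # ~ median * exp(sigma**2 / 2)
```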
Imagine a near future where CRML is as normal to risk work as IaC is to infrastructure:
- A security architect proposes a new control program by updating CRML documents; the change is peer-reviewed in Git with clear diffs.
- GRC and audit teams can trace every metric back to a validated, versioned model (inputs, assumptions, mappings, outputs).
- Different quant engines (vendor platforms, internal FAIR Monte Carlo, Bayesian QBER, insurance actuarial models) all consume the same CRML documents.
- Framework changes are handled by updating catalogs/mappings (also versioned), rather than rewriting the model logic.
- Organizations can exchange models with partners, insurers, and regulators without sending spreadsheets or screenshots.
- A cyber security authority can publish its yearly threat landscape report in CRML — encoding richer nuance than narrative PDFs (assumptions, distributions, dependencies, control baselines, and mappings) — and in turn benefit from more standardized, machine-readable data submissions from industry.
In that world, cyber risk becomes reproducible, comparable, and automatable across teams — while still allowing methodological diversity.
See General Architecture: wiki/Concepts/Architecture.md
A typical organization might keep CRML alongside detection and infrastructure code:
- `risk/models/` — scenarios and portfolios in CRML
- `risk/catalogs/` — versioned control + attack catalogs (internal or external)
- `risk/mappings/` — telemetry/control/threat mappings with ownership and change history
- CI runs `crml-lang validate` on every PR; a nightly job runs `crml simulate` and publishes dashboards (a sketch of such a check follows this list)
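
One way to wire this into CI is sketched below: a small Python script that walks the hypothetical risk/models/ tree from the list above and validates each document with the crml_lang API shown later in this README. The layout, glob pattern, and exit-code convention are assumptions, not something the project mandates.

```python
from pathlib import Path
import sys

from crml_lang import validate  # validation API shown in the Python examples below

# Hypothetical layout from the list above; adjust to your repository.
MODEL_DIR = Path("risk/models")

failed = False
for model_path in sorted(MODEL_DIR.glob("**/*.yaml")):
    report = validate(str(model_path), source_kind="path")
    status = "OK" if report.ok else "FAILED"
    print(f"{status}: {model_path}")
    failed = failed or not report.ok

# A non-zero exit code fails the CI job when any model is invalid.
sys.exit(1 if failed else 0)
```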
Example snippet (illustrative):
crml_scenario: "1.0"
meta:
name: "ransomware-baseline"
description: "A simple ransomware risk model"
scenario:
frequency:
basis: per_organization_per_year
model: poisson
parameters:
lambda: 0.15
severity:
model: lognormal
parameters:
median: "250 000"
currency: USD
sigma: 1.2
# Optional, threat-centric controls (org posture typically belongs in portfolios/assessments)
controls:
- id: "org:iam.mfa"
effectiveness_against_threat: 0.35This repository ships two Python packages and a web UI:
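
For intuition about what a compliant engine computes from such a scenario, here is a minimal hand-rolled Monte Carlo in NumPy. It is only a sketch: it ignores the controls block and currency handling, and the actual semantics are defined by the spec and implemented by crml-engine (use `crml simulate` for real runs).

```python
import numpy as np

# Illustrative only: Poisson frequency x lognormal severity, as in the snippet above.
rng = np.random.default_rng(42)
n_runs = 100_000
lam, median, sigma = 0.15, 250_000, 1.2
mu = np.log(median)  # for a lognormal, median = exp(mu)

counts = rng.poisson(lam, size=n_runs)  # number of events per simulated year
annual_losses = np.array([
    rng.lognormal(mean=mu, sigma=sigma, size=c).sum() for c in counts
])

print("EAL (mean annual loss):", annual_losses.mean())
print("95th percentile annual loss:", np.quantile(annual_losses, 0.95))
```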
This repository ships two Python packages and a web UI:

- `crml-lang`: language/spec models + schema validation + YAML I/O
- `crml-engine`: reference runtime + `crml` CLI (depends on `crml-lang`)
- `web/`: CRML Studio — browser UI for validation and simulation (Next.js)
If you want the CLI:

```bash
pip install crml-engine
```

If you only want the language library:
```bash
pip install crml-lang
# or with SCF support:
pip install "crml-lang[scf]"
```

```bash
crml-lang validate examples/scenarios/qber-enterprise.yaml
crml simulate examples/scenarios/data-breach-simple.yaml --runs 10000

# Import SCF Catalog from Excel
crml-lang scf-import-catalog path/to/SCF_2025.xlsx scf-catalog.yaml
```

Load and validate:
```python
from crml_lang import CRScenario, validate

scenario = CRScenario.load_from_yaml("examples/scenarios/data-breach-simple.yaml")
report = validate("examples/scenarios/data-breach-simple.yaml", source_kind="path")
print(report.ok)
```

Run a simulation:
```python
from crml_engine.runtime import run_simulation

result = run_simulation("examples/scenarios/data-breach-simple.yaml", n_runs=10000)
print(result.metrics.eal)
```

Repository layout:

- `crml_lang/` — language/spec package
- `crml_engine/` — reference engine package
- `web/` — web UI (Next.js)
- `examples/` — example CRML YAML models and FX config
- `wiki/` — documentation source (MkDocs)
CRML Studio lives in web/.

Run it locally:

```bash
pip install crml-engine
cd web
npm install
npm run dev
```

See the docs under wiki/ (start at wiki/Home.md).
OSCAL interoperability and mapping rules: wiki/Guides/OSCAL.md.
SCF integration and mapping guide: wiki/Guides/SCF.md.
Current document types (a minimal loading sketch follows this list):

- Scenario documents: `crml_scenario: "1.0"` with top-level `scenario:`
- Portfolio documents: `crml_portfolio: "1.0"` with top-level `portfolio:`
MIT License — see LICENSE.