An open-source framework for automated compliance analysis across any technology stack and regulatory framework (GDPR, ePrivacy, EU AI Act, NIS2, DORA, etc.). Integrates directly into CI/CD pipelines to generate compliance documentation without manual questionnaires or spreadsheets.
The Waivern Compliance Framework provides:
- Framework Libraries - Core abstractions, multi-provider LLM support, and built-in components
- WCT (Waivern Compliance Tool) - CLI application for orchestrating compliance analysis
- Schema-Driven Architecture - Type-safe component communication through JSON Schema
- Extensible Design - Open standards for connectors, analysers, and rulesets
- Connectors - Extract data from sources (MySQL, SQLite, files, source code)
- Analysers - Detect compliance issues (personal data, processing purposes, data subjects)
- Rulesets - YAML-based pattern definitions for static analysis
- Runbooks - YAML configurations defining artifacts and their dependencies
- Orchestration - Planner validates and flattens runbooks; DAGExecutor runs artifacts in parallel
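To illustrate what the Planner/DAGExecutor split buys (a sketch only, using Python's stdlib `graphlib`, not the actual WCT implementation), artifact dependencies can be resolved into batches that are safe to run in parallel:

```python
# Illustrative sketch: resolving artifact dependencies into parallel batches.
# The artifact graph below is hypothetical; keys depend on the artifacts in
# their value set.
from graphlib import TopologicalSorter

artifacts = {
    "file_content": set(),                       # source artifact, no inputs
    "personal_data_findings": {"file_content"},  # derived from file_content
    "report": {"personal_data_findings"},
}

ts = TopologicalSorter(artifacts)
ts.prepare()
batches = []
while ts.is_active():
    ready = list(ts.get_ready())   # artifacts whose inputs are all done
    batches.append(sorted(ready))  # each batch could run in parallel
    ts.done(*ready)

print(batches)  # [['file_content'], ['personal_data_findings'], ['report']]
```

In a real run, each batch would be dispatched concurrently, since none of its artifacts depend on one another.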
This project uses uv for dependency management. Install uv via its standalone installer or Homebrew.
# Install all dependencies
uv sync
# Install with development tools
uv sync --group dev
# Install pre-commit hooks (recommended)
uv run pre-commit install
# Copy environment template
cp apps/wct/.env.example apps/wct/.env
# Edit apps/wct/.env with your API key
# ANTHROPIC_API_KEY=your_api_key_here
# Test configuration
uv run wct test-llm
# Simple file analysis
uv run wct run apps/wct/runbooks/samples/file_content_analysis.yaml
# Comprehensive LAMP stack analysis
uv run wct run apps/wct/runbooks/samples/LAMP_stack_lite.yaml
# Run with verbose logging
uv run wct run apps/wct/runbooks/samples/file_content_analysis.yaml -v
Output: JSON results are written to the ./outputs directory.
# Run analysis
uv run wct run apps/wct/runbooks/samples/file_content_analysis.yaml
uv run wct run apps/wct/runbooks/samples/file_content_analysis.yaml -v # Verbose
# Resume an interrupted or failed run
uv run wct run analysis.yaml --resume <run-id>
# List recorded runs
uv run wct runs
uv run wct runs --status failed
# Poll batch job status (when using LLM batch mode)
uv run wct poll <run-id>
# List components
uv run wct connectors
uv run wct processors # Lists analysers
uv run wct rulesets
uv run wct exporters
# Validate runbook
uv run wct validate-runbook apps/wct/runbooks/samples/file_content_analysis.yaml
# Generate JSON Schema for IDE support
uv run wct generate-schema
name: "Personal Data Analysis"
description: "Detect personal data in files and databases"
artifacts:
  # Source artifact - extracts data from filesystem
  file_content:
    name: "File Content Extraction"
    description: "Read files from the filesystem"
    source:
      type: "filesystem"
      properties:
        path: "./sample_file.txt"
  # Derived artifact - processes file content for personal data
  personal_data_findings:
    name: "Personal Data Detection"
    description: "Detect personal data patterns in content"
    inputs: file_content
    process:
      type: "personal_data"
      properties:
        pattern_matching:
          ruleset: "local/personal_data/1.0.0"
          evidence_context_size: "medium"
        llm_validation:
          enable_llm_validation: true
    output: true  # Include in final output
Sample Runbooks:
- apps/wct/runbooks/samples/file_content_analysis.yaml - Basic file analysis
- apps/wct/runbooks/samples/LAMP_stack_lite.yaml - File, database, and source code analysis
- apps/wct/runbooks/samples/LAMP_stack.yaml - Advanced MySQL-based analysis
- See apps/wct/runbooks/README.md for detailed documentation
waivern-compliance/
├── libs/                      # Framework libraries (13 standalone packages)
│   ├── waivern-core/          # Core abstractions
│   ├── waivern-llm/           # Multi-provider LLM support
│   ├── waivern-connectors-*/  # Connectors (mysql, sqlite, filesystem, source-code)
│   ├── waivern-*-analyser/    # Analysers (personal-data, data-subject, processing-purpose)
│   └── waivern-*-shared/      # Shared utilities (rulesets, analyser utils, database utils)
└── apps/
    └── wct/                   # CLI application (plugin host)
        ├── runbooks/          # YAML runbook configurations
        └── src/wct/           # Component discovery via entry points
Framework Independence:
- Libraries have no WCT dependencies
- Can be used by other applications
- Independent versioning and releases
- Clear separation of concerns
Runbook (YAML) → Planner → DAGExecutor → Connector/Analyser → Findings (JSON)
- Runbook defines artifacts (sources and transformations) and their dependencies
- Planner parses runbook, flattens child runbooks, builds DAG, validates schemas
- DAGExecutor runs artifacts in dependency order (parallel where possible)
- Connectors extract data; Analysers transform and analyse data
- Message objects provide automatic schema validation
- Results output as structured JSON
- Components declare input/output schemas (JSON Schema format)
- Executor automatically matches schemas between connectors and analysers
- Runtime validation through Message objects
- Type-safe interfaces throughout
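As an illustration of the kind of check Message objects perform at runtime, here is a hand-rolled stand-in (the framework's actual validator uses full JSON Schema; the `standard_input` schema below is a simplified assumption):

```python
# Illustrative only: a minimal stand-in for the runtime validation that
# Message objects perform against a component's declared schema.
STANDARD_INPUT_SCHEMA = {
    "type": "object",
    "required": ["text"],
    "properties": {"text": {"type": "string"}},
}

def validate(payload: dict, schema: dict) -> list[str]:
    """Return a list of validation errors (empty means valid)."""
    errors = []
    for key in schema.get("required", []):
        if key not in payload:
            errors.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in payload and spec.get("type") == "string":
            if not isinstance(payload[key], str):
                errors.append(f"field {key!r} must be a string")
    return errors

print(validate({"text": "hello"}, STANDARD_INPUT_SCHEMA))  # []
print(validate({}, STANDARD_INPUT_SCHEMA))  # ['missing required field: text']
```

In the framework, a schema mismatch like the second call would be rejected before the payload ever reaches an analyser.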
See: WCF Core Concepts for detailed framework documentation.
uv run pytest # Run all tests
uv run pytest -v # Verbose output
uv run pytest -m integration # Integration tests (requires API keys)
uv run pytest -m batch # Batch API tests (requires API keys, may take minutes)
Package-centric architecture where each package owns its configuration:
# Workspace-level (all packages)
./scripts/lint.sh # Lint all packages
./scripts/format.sh # Format all packages
./scripts/type-check.sh # Type check all packages
./scripts/dev-checks.sh # Run all checks + tests
# Package-level
cd libs/waivern-core && ./scripts/lint.sh
cd apps/wct && ./scripts/type-check.sh
# Pre-commit hooks
uv run pre-commit install # Install (once)
uv run pre-commit run --all-files # Run manually
Components use the ComponentFactory pattern with dependency injection:
from typing import override

from pydantic import BaseModel

from waivern_core import Connector, Message, Schema, ComponentFactory
from waivern_core.schemas import StandardInputSchema


class MyConnectorConfig(BaseModel):
    """Configuration for MyConnector."""

    path: str
    encoding: str = "utf-8"


class MyConnector(Connector):
    def __init__(self, config: MyConnectorConfig):
        self.config = config

    @classmethod
    @override
    def get_name(cls) -> str:
        return "my_connector"

    @classmethod
    @override
    def get_supported_output_schemas(cls) -> list[Schema]:
        return [StandardInputSchema()]

    @override
    def extract(self, output_schema: Schema) -> Message:
        data = {"text": "extracted_content"}
        return Message(
            id="connector_output",
            content=data,
            schema=StandardInputSchema(),
        )


class MyConnectorFactory(ComponentFactory[MyConnector]):
    @override
    def create(self, properties: dict) -> MyConnector:
        config = MyConnectorConfig.from_properties(properties)
        return MyConnector(config)

from typing import override
from pydantic import BaseModel

from waivern_core import Analyser, Message, Schema, ComponentFactory, InputRequirement


class MyAnalyserConfig(BaseModel):
    """Configuration for MyAnalyser."""

    threshold: float = 0.8


class MyAnalyser(Analyser):
    def __init__(self, config: MyAnalyserConfig):
        self.config = config

    @classmethod
    @override
    def get_name(cls) -> str:
        return "my_analyser"

    @classmethod
    @override
    def get_input_requirements(cls) -> list[list[InputRequirement]]:
        # Declare supported input schema combinations
        return [[InputRequirement("standard_input", "1.0.0")]]

    @classmethod
    @override
    def get_supported_output_schemas(cls) -> list[Schema]:
        # MyFindingSchema is a custom output schema defined elsewhere
        return [MyFindingSchema()]

    @override
    def process(self, inputs: list[Message], output_schema: Schema) -> Message:
        # Process all input messages (supports fan-in)
        findings = []
        for message in inputs:
            findings.extend(self._analyse(message.content))
        return Message(
            id="results",
            content={"findings": findings},
            schema=output_schema,
        )


class MyAnalyserFactory(ComponentFactory[MyAnalyser]):
    @override
    def create(self, properties: dict) -> MyAnalyser:
        config = MyAnalyserConfig.from_properties(properties)
        return MyAnalyser(config)

Key Features:
- ComponentFactory pattern for instantiation
- Configuration via Pydantic models
- Dependency injection support
- Automatic schema validation
- Type-safe interfaces
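To see how the pieces fit together at runtime, here is a self-contained sketch of the factory pattern using stand-in classes (names like `EchoConnector` are hypothetical; the real abstractions live in waivern-core and are shown in the examples above):

```python
# Self-contained sketch of the ComponentFactory pattern; these stand-in
# classes mirror the real abstractions but are illustrative only.
from dataclasses import dataclass


@dataclass
class Message:
    id: str
    content: dict


class EchoConnectorConfig:
    def __init__(self, path: str, encoding: str = "utf-8"):
        self.path = path
        self.encoding = encoding


class EchoConnector:
    def __init__(self, config: EchoConnectorConfig):
        self.config = config

    def extract(self) -> Message:
        # A real connector would read self.config.path from disk.
        return Message(id="connector_output",
                       content={"text": f"read {self.config.path}"})


class EchoConnectorFactory:
    def create(self, properties: dict) -> EchoConnector:
        # The factory turns raw runbook properties into a typed config object,
        # which is where dependency injection hooks in.
        return EchoConnector(EchoConnectorConfig(**properties))


msg = EchoConnectorFactory().create({"path": "./sample_file.txt"}).extract()
print(msg.content["text"])  # read ./sample_file.txt
```

The orchestrator only ever sees the factory, so components can be swapped or mocked without touching runbooks.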
Runbooks support JSON Schema validation for:
- Real-time validation
- Autocomplete
- Documentation on hover
- Structure guidance
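Assuming the schema emitted by `uv run wct generate-schema` is saved as, say, `wct-runbook.schema.json` (filename is an assumption), editors that use the YAML language server can associate it with a runbook via a modeline:

```yaml
# yaml-language-server: $schema=./wct-runbook.schema.json
name: "Personal Data Analysis"
description: "Detect personal data in files and databases"
artifacts: {}
```

With the Red Hat YAML extension (or any yaml-language-server client), this enables the validation, autocomplete, and hover documentation listed above.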
Setup: IDE Integration Guide
Browse good first issues
- Fork the repository
- Create a feature branch (feature/your-feature-name)
- Make your changes
- Run quality checks: ./scripts/dev-checks.sh
- Submit a pull request
- Type annotations required (basedpyright strict mode)
- Code formatting with ruff
- Security checks with bandit
- Comprehensive test coverage
Run ./scripts/dev-checks.sh before committing.
- WCF Core Concepts - Framework architecture
- Configuration Guide - Environment configuration
- Runbook Documentation - Runbook usage
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Discord Server
- General help and installation support
- Development discussions
- Bug reports and feature requests
- Community showcase
- GitHub Issues - Report bugs and request features
- GitHub Discussions - General questions and community discussions