https://hub.docker.com/repository/docker/alh477/kataforge
Copyright © 2026 DeMoD LLC. All rights reserved.
KataForge combines Digital Signal Processing (DSP), deterministic biomechanical analysis, and machine-learning pipelines to document, analyze, and preserve martial arts techniques with scientific precision.
Traditional martial arts documentation relies on subjective descriptions, low-quality video recordings, inconsistent analysis methods, and fading institutional knowledge. KataForge provides:
- Scientific precision through DSP and biomechanics
- Objectively measurable technique analysis
- Reproducible results through deterministic processing
- Permanent preservation of master techniques
- Video Analysis: Frame-by-frame motion extraction
- Audio Processing: Technique sound signature analysis
- Signal Filtering: Noise reduction and enhancement
- Feature Extraction: 33 landmark detection with MediaPipe
- Physics-Based Metrics: Force, power, velocity calculations
- Kinetic Chain Analysis: Energy transfer efficiency
- Joint Angle Measurement: Precision degree calculations
- Reproducible Results: Consistent measurements across sessions
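Joint-angle measurement of this kind reduces to vector geometry on three landmarks. A minimal stdlib sketch (the function name and coordinates are illustrative, not KataForge's actual API):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by landmarks a, b, c.

    Each landmark is an (x, y, z) tuple, e.g. hip, knee, ankle.
    """
    v1 = [p - q for p, q in zip(a, b)]
    v2 = [p - q for p, q in zip(c, b)]
    dot = sum(p * q for p, q in zip(v1, v2))
    norm = math.sqrt(sum(p * p for p in v1)) * math.sqrt(sum(p * p for p in v2))
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A fully extended leg: hip, knee, and ankle are collinear -> 180 degrees
print(joint_angle((0, 1.0, 0), (0, 0.5, 0), (0, 0.0, 0)))  # 180.0
```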
- GraphSAGE Network: Technique classification and style analysis
- LSTM + Attention: Temporal pattern recognition
- Style Encoder: Coach-specific technique fingerprinting
- Real-Time Feedback: Instant performance evaluation
graph TD
A[Video Input] --> B[DSP Processing]
B --> C[Pose Extraction]
C --> D[Biomechanical Analysis]
D --> E[ML Classification]
E --> F[Technique Scoring]
F --> G[Feedback Generation]
G --> H[Visualization & Storage]
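The stages in the graph above can be sketched as a simple function chain. Every name below is a stub stand-in for illustration, not the actual KataForge API:

```python
# Stub stages wired in the order shown in the graph; each consumes the
# previous stage's output. All names and values are hypothetical.

def dsp_process(video):            return [f"frame{i}" for i in range(3)]
def extract_poses(frames):         return [{"landmarks": [0.0] * 33} for _ in frames]
def analyze_biomechanics(poses):   return {"peak_force_n": 850.0}
def classify_technique(p, m):      return "roundhouse"
def score_technique(label, m):     return 0.91
def generate_feedback(score, m):   return f"score {score:.2f}: rotate hips earlier"

def run_pipeline(video_path):
    frames = dsp_process(video_path)            # DSP: filtering, enhancement
    poses = extract_poses(frames)               # 33 landmarks per frame
    metrics = analyze_biomechanics(poses)       # force, power, joint angles
    label = classify_technique(poses, metrics)  # GraphSAGE / LSTM models
    score = score_technique(label, metrics)
    return {"technique": label, "score": score,
            "feedback": generate_feedback(score, metrics)}

result = run_pipeline("input.mp4")
```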
- 33 Landmark Detection: Full-body pose analysis
- Temporal Smoothing: Motion stabilization
- Multi-View Support: 2D/3D camera integration
- Real-Time Processing: <100ms latency
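Temporal smoothing of a landmark stream is commonly done with an exponential moving average; a minimal sketch of the technique (KataForge's actual smoother may differ):

```python
def ema_smooth(frames, alpha=0.3):
    """Exponential moving average over per-frame landmark coordinates.

    frames: list of frames, each a list of (x, y, z) tuples.
    Lower alpha = heavier smoothing (less jitter, more lag).
    """
    smoothed = [frames[0]]
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([
            tuple(alpha * c + (1 - alpha) * p for c, p in zip(pt, prev_pt))
            for pt, prev_pt in zip(frame, prev)
        ])
    return smoothed

# A jittery x-coordinate (0 -> 1 -> 0) is pulled toward its running average
out = ema_smooth([[(0.0, 0, 0)], [(1.0, 0, 0)], [(0.0, 0, 0)]])
```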
- Force Calculation: Newtonian physics modeling
- Power Output: Wattage measurements
- Velocity Tracking: Speed analysis
- Balance Metrics: Center of gravity tracking
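The force, power, and velocity metrics follow from Newtonian mechanics applied to a landmark trajectory: differentiate position for velocity, differentiate again for acceleration, then F = m·a and P = F·v. A stdlib sketch along one axis, using an assumed effective striking mass (a modelling choice, not a measured quantity):

```python
def strike_metrics(xs, dt, mass_kg):
    """Peak speed (m/s), force (N), and power (W) along one axis.

    xs: wrist positions in metres, sampled every dt seconds.
    mass_kg: assumed effective striking mass (a modelling choice).
    """
    v = [(b - a) / dt for a, b in zip(xs, xs[1:])]      # finite-difference velocity
    acc = [(q - p) / dt for p, q in zip(v, v[1:])]      # acceleration
    force = [mass_kg * abs(a) for a in acc]             # F = m * a
    power = [f * abs(s) for f, s in zip(force, v[1:])]  # P = F * v
    return max(abs(s) for s in v), max(force), max(power)

# Accelerating jab with 5 kg effective mass: ~6 m/s, ~100 N, ~600 W peaks
speed, force_n, power_w = strike_metrics([0.0, 0.2, 0.6, 1.2], dt=0.1, mass_kg=5.0)
```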
- Technique Classification: 91% accuracy
- Style Recognition: Coach identification
- Error Detection: Form correction
- Progress Tracking: Improvement metrics
- Hands-Free Control: Voice command interface
- Real-Time Feedback: Audio coaching
- Multi-Language Support: Global accessibility
- Context Awareness: Smart command parsing
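Context-aware command parsing can be as simple as keyword matching. The phrase-to-command table below is invented for illustration and is not KataForge's actual voice grammar:

```python
# Toy keyword matcher: an utterance triggers a command when it contains
# all of that command's keywords. The mapping is hypothetical.
COMMANDS = {
    ("start", "analysis"): "kataforge analyze --source=webcam",
    ("show", "score"):     "kataforge report --last",
    ("stop",):             "kataforge stop",
}

def parse_voice(utterance):
    words = set(utterance.lower().split())
    for keywords, command in COMMANDS.items():
        if words.issuperset(keywords):
            return command
    return None  # unrecognised -> ask the user to repeat

print(parse_voice("please start the analysis"))  # kataforge analyze --source=webcam
```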
- Self-Hosted: Private dojo installations
- Cloud API: Scalable analysis service
- Edge Devices: Local processing
- Mobile Integration: Companion apps
- Nix Flakes: Reproducible environments
- Multi-GPU Support: ROCm, CUDA, Vulkan
- Type Safety: Pydantic validation
- Comprehensive Testing: 95% coverage
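Pydantic-style type safety for pose data might look like the following; the `Landmark` schema is a hypothetical example, not KataForge's actual model:

```python
from pydantic import BaseModel, Field, ValidationError

# Illustrative schema for one MediaPipe-style landmark
class Landmark(BaseModel):
    x: float
    y: float
    z: float
    visibility: float = Field(ge=0.0, le=1.0)  # confidence in [0, 1]

Landmark(x=0.1, y=0.2, z=0.0, visibility=0.95)     # passes validation

try:
    Landmark(x=0.1, y=0.2, z=0.0, visibility=1.5)  # out of range
except ValidationError as exc:
    print(len(exc.errors()))  # 1
```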
- Security: JWT, API keys, rate limiting
- Monitoring: Prometheus, OpenTelemetry
- Scalability: Kubernetes deployment
- Reliability: Health checks, error handling
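Rate limiting of the kind listed above is often implemented as a token bucket; a generic stdlib sketch (KataForge's gateway implementation may differ):

```python
import time

class TokenBucket:
    """Token-bucket limiter: `burst` requests immediately, then a
    sustained `rate_per_s`. A generic sketch, not KataForge's code."""

    def __init__(self, rate_per_s, burst):
        self.rate = float(rate_per_s)
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_s=10, burst=2)
# Back-to-back calls: the first two pass, the third is throttled
print([bucket.allow() for _ in range(3)])  # typically [True, True, False]
```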
- CLI: Typer + Rich interface
- API: FastAPI backend
- UI: Gradio web interface
- Voice: Hands-free interaction
- Master Documentation: Capture champion techniques
- Style Analysis: Compare fighting styles
- Historical Archive: Preserve martial arts history
- Lineage Tracking: Trace technique evolution
- Competition Preparation: Optimize techniques
- Training Optimization: Identify weaknesses
- Progress Tracking: Measure improvement
- Injury Prevention: Detect risky form
- Remote Coaching: Online technique analysis
- Automated Feedback: AI-powered coaching
- Curriculum Development: Technique libraries
- Student Assessment: Objective grading
- Biomechanics Research: Scientific studies
- Technique Innovation: New move development
- Cross-Style Analysis: Comparative studies
- Performance Benchmarking: Standardized metrics
- Enter the development environment:
nix develop          # CPU-only
nix develop .#rocm   # AMD ROCm GPUs (e.g., RX 7700S)
nix develop .#cuda   # NVIDIA CUDA GPUs
nix develop .#vulkan # Intel / Vulkan GPUs (portable)
- Validate GPU configuration:
kataforge system validate-gpu
- Framework 16 quick setup (if applicable):
./scripts/framework16-quickstart.sh
KataForge is built around five core modular components:
- Preprocessing: Video normalization and pose extraction (MediaPipe)
- Biomechanics Engine: Physics-based analysis of force, power, velocity, and joint angles
- Machine Learning Pipeline: Technique assessment using GraphSAGE, LSTM, and attention mechanisms
- API Gateway: RESTful interface with FastAPI and authentication
- User Interface: Interactive real-time feedback via Gradio
- Multi-GPU support: AMD ROCm, NVIDIA CUDA, Intel Vulkan
- Automatic GPU detection and configuration
- MediaPipe integration for real-time extraction of 33 3D landmarks
- Biomechanical computations: force, power, velocity, joint angles
- Technique assessment models: GraphSAGE, LSTM, attention-based
- LLM integration: Ollama (default) and llama.cpp (Vulkan) for coaching feedback
- Production-grade: comprehensive error handling, logging, and security features
- Nix flakes for fully reproducible environments
- Multi-backend Docker images (CPU, ROCm, CUDA, Vulkan)
- Kubernetes-ready deployment configurations
- Terraform support for cloud infrastructure
- 42 unit tests with 95% code coverage
- Comprehensive CLI with 50+ commands (built with Typer and Rich)
- Scientific Validation: Prove technique effectiveness through objective metrics
- Objective Measurement: Remove subjective bias from technique evaluation
- Progress Tracking: See real improvement with quantifiable data
- Competitive Edge: Optimize performance using data-driven insights
- Automated Analysis: Save time on technique evaluations with AI assistance
- Consistent Feedback: Standardized coaching based on objective metrics
- Remote Training: Online student analysis with video upload capabilities
- Technique Library: Build comprehensive databases of your fighting style
- Data-Driven Insights: Conduct scientific analysis of martial arts techniques
- Cross-Style Comparison: Objective metrics for comparing different fighting styles
- Biomechanical Studies: Detailed measurements of force, power, and movement
- Performance Benchmarks: Standardized testing protocols for martial arts research
- Knowledge Preservation: Document and preserve master techniques permanently
- Quality Control: Standardized training methodologies across locations
- Brand Differentiation: Scientific validation of your training methods
- Revenue Opportunities: Premium analysis services for members and students
# 1. Initialize the system
kataforge init --data-dir=~/kataforge_data
# 2. Extract pose data from video
kataforge extract-pose data/input.mp4 --output=analysis.json
# 3. Train models with GPU acceleration
kataforge train \
--coach=nagato \
--technique=roundhouse \
--epochs=100 \
--device=cuda
# 4. Analyze a technique with AI feedback
kataforge analyze \
--video=test.mp4 \
--llm-backend=ollama \
--show-corrections \
--verbose
# 5. Analyze live from the webcam
kataforge analyze \
--source=webcam \
--llm-backend=ollama \
--show-corrections
# Launch Gradio UI
kataforge ui
# Or use Nix outputs
nix run .#ui # Gradio UI (port 7860)
nix run .#server # API server (port 8000)
# Run test suite
poetry run pytest tests/
# Generate coverage report
poetry run coverage report
# Validate configuration
poetry run python scripts/config_validator.py
# Code formatting and linting
black kataforge/
ruff check kataforge/
Test Coverage
- 42 unit tests (95% coverage)
- 18 integration tests (CLI + API)
- 5 end-to-end workflow scenarios
The documentation has been comprehensively updated to reflect the current state of the codebase:
- Configuration Guide - Complete configuration reference with all settings
- API Reference - Comprehensive API documentation with examples
- CLI Reference - Complete CLI documentation with all commands
- Voice System - Full voice system documentation
- Voice Implementation - Updated to reflect current implementation status
- Configuration - Updated with actual configuration structure
- README - Updated with new documentation references
- ✅ Configuration system
- ✅ API reference
- ✅ CLI reference
- ✅ Voice system
- ✅ GPU setup
- ✅ System architecture
- ✅ Usage examples
- ✅ Troubleshooting guides
nix build .#docker-cpu # CPU-only
nix build .#docker-rocm # AMD ROCm
nix build .#docker-cuda # NVIDIA CUDA
nix build .#docker-vulkan # Intel Vulkan
nix build .#docker-gradio-cpu # UI only (CPU)
nix build .#docker-full-cpu # Full stack (API + UI + LLM)
nix build .#docker-full-rocm # Full stack with ROCm
Load and run example:
docker load < result
docker run -p 8000:8000 -p 7860:7860 kataforge-full:cpu
docker-compose up -d
docker-compose logs -f kataforge
docker-compose down
- CPU: AMD Ryzen 9 7840HS or Intel Core i7-12700H (8+ cores recommended)
- GPU: AMD RX 7700S (16 GB VRAM), NVIDIA RTX 3090, or Intel Arc A770
- RAM: 32 GB DDR5 (64 GB recommended for training)
- Storage: 500 GB NVMe SSD (1 TB recommended)
- Operating System: Ubuntu 22.04+, NixOS 23.11+, or compatible Linux distribution
- GPU Drivers:
- AMD: ROCm 6.0+
- NVIDIA: CUDA 12.0+, cuDNN 8.9+
- Intel: Vulkan 1.3+
- Python: 3.11+ (managed via Nix)
Training times on high-end GPUs (AMD RX 7700S / NVIDIA RTX 3090):
| Model | Training Time | Parameters | VRAM Usage |
|---|---|---|---|
| GraphSAGE | 25–30 hours | 2.1M | 8–10 GB |
| Form Assessor | 33–40 hours | 3.5M | 10–12 GB |
| Style Encoder | 17–20 hours | 1.8M | 6–8 GB |
Total training time for all models: approximately 3–4 days
Inference performance:
- Real-time pose extraction: 30+ FPS (GPU)
- Technique classification: <50 ms per frame
- Biomechanics calculation: <10 ms per frame
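Per-frame latency figures like these can be checked on your own hardware with a small timing harness (illustrative, not part of KataForge):

```python
import time

def frame_latency_ms(stage, frames, warmup=5):
    """Median per-frame latency of `stage`, in milliseconds."""
    for f in frames[:warmup]:   # warm caches before timing
        stage(f)
    samples = []
    for f in frames:
        t0 = time.perf_counter()
        stage(f)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return sorted(samples)[len(samples) // 2]

# Dummy stage standing in for e.g. technique classification
latency = frame_latency_ms(lambda frame: sum(frame), [[1.0] * 1000] * 50)
```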
Full technical documentation: https://docs.demod.llc/kataforge
Configuration:
- Configuration Guide – Complete configuration reference with all settings
- GPU Setup – GPU configuration (ROCm/CUDA/Vulkan)
- Framework 16 Setup – Framework 16 specific optimizations
API Reference:
- API Reference – Complete API documentation with examples
- CLI Reference – Complete CLI documentation with all commands
- Deployment Guide – Production deployment instructions
Voice System:
- Voice System – Complete voice system documentation
- Voice Implementation – Implementation status and roadmap
Training & Usage:
- Training Guide – Model training best practices
- Quick Start – Rapid setup and training
- Integration Summary – Technical integration details
System Information:
- System Overview – Architecture and components
- System Status – Current system capabilities
- Security Implementation – Security features and practices
# Core
export DOJO_ENVIRONMENT=production
export DOJO_LOG_FORMAT=json
export DOJO_DATA_DIR=/kataforge_data
# API
export DOJO_API_HOST=0.0.0.0
export DOJO_API_PORT=8000
# LLM
export DOJO_LLM_BACKEND=ollama # or llamacpp
export DOJO_VISION_MODEL=llava:7b
export DOJO_TEXT_MODEL=mistral:7b
# GPU (auto-detected; override if needed)
export DOJO_DEVICE=cuda # cpu / cuda / rocm / vulkan
export HSA_OVERRIDE_GFX_VERSION=11.0.2 # ROCm only
export PYTORCH_ROCM_ARCH=gfx1100 # ROCm only
Create ~/.config/kataforge/config.yaml:
data_dir: /home/user/kataforge_data
log_level: INFO
api:
  host: 0.0.0.0
  port: 8000
  workers: 4
llm:
  backend: ollama
  vision_model: llava:7b
  text_model: mistral:7b
gpu:
  device: auto
  memory_fraction: 0.8

git clone https://github.com/demod-llc/kataforge.git
cd kataforge
nix develop
# Development server with hot reload
kataforge server --reload
# Gradio UI (shareable link)
kataforge ui --share
Available Nix outputs:
nix run .#default # Main CLI
nix run .#server # API server
nix run .#ui # Gradio UI
nix develop .#rocm # ROCm shell
nix develop .#cuda # CUDA shell
nix develop .#vulkan # Vulkan shell
KataForge is source-available software (not OSI-approved open source).
It is released under the KataForge License (based on Elastic License v2 / ELv2).
This license permits:
- Private self-hosting on your own hardware/servers for personal, dojo, coaching, research, or small-group commercial use
- Modification, bug fixes, and technique additions (with verified data ownership)
- Redistribution of modifications (with copyright and license notices preserved)
It prohibits:
- Offering KataForge (or modified versions) as a hosted, managed, or SaaS service to third parties
- Circumventing any license protections or removing notices
Full license text: LICENSE
For commercial hosted offerings, integrations, exceptions, or questions, contact: [email protected]
Contributions (bug fixes, GPU improvements, technique additions with verified data ownership) are welcome via pull requests.
GPU not detected
rocm-smi # AMD
nvidia-smi # NVIDIA
vulkaninfo # Vulkan
kataforge system validate-gpu
Out-of-memory errors
kataforge train --batch-size=8
export DOJO_GPU_MEMORY_FRACTION=0.7
Poetry/Nix conflicts
nix flake update
nix develop --refresh
See docs/TROUBLESHOOTING.md or contact [email protected] for additional assistance.
- General Support: [email protected]
- Machine Learning / Technical: [email protected]
- Sales & Licensing: [email protected]
- Documentation: https://docs.demod.llc/kataforge
Built for martial arts preservation and AI-assisted coaching.