# ADR-036: RVF Model Training Pipeline & UI Integration

## Status
Proposed

## Date
2026-03-02

## Context

The wifi-densepose system currently operates in **signal-derived** mode — `derive_pose_from_sensing()` maps aggregate CSI features (motion power, breathing rate, variance) to keypoint positions using deterministic math. This gives whole-body presence and gross motion but cannot track individual limbs.

The infrastructure for **model inference** mode exists but is disconnected:

1. **RVF container format** (`rvf_container.rs`, 1,102 lines) — a 64-byte-aligned binary format supporting model weights (`SEG_VEC`), metadata (`SEG_MANIFEST`), quantization (`SEG_QUANT`), LoRA profiles (`SEG_LORA`), contrastive embeddings (`SEG_EMBED`), and witness audit trails (`SEG_WITNESS`). Builder and reader are fully implemented with CRC32 integrity checks.

2. **Training crate** (`wifi-densepose-train`) — AdamW optimizer, PCK/OKS metrics, LR scheduling with warmup, early stopping, CSV logging, and checkpoint export. Supports the `CsiDataset` trait with planned MM-Fi (114→56 subcarrier interpolation) and Wi-Pose (30→56 zero-pad) loaders per ADR-015.

3. **NN inference crate** (`wifi-densepose-nn`) — ONNX Runtime backend with CPU/GPU support, dynamic tensor shapes, thread-safe `OnnxBackend` wrapper, model info inspection, and warmup.

4. **Sensing server CLI** (`--model <path>`, `--train`, `--pretrain`, `--embed`) — flags exist for model loading, training mode, and embedding extraction, but the end-to-end path from raw CSI → trained `.rvf` → live inference is not wired together.

5. **UI gaps** — No model management, training progress visualization, LoRA profile switching, or embedding inspection. The Settings panel lacks model configuration. The Live Demo has no way to load a trained model or compare signal-derived vs model-inference output side-by-side.
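Two mechanical invariants of the RVF container (item 1 above) can be sketched in isolation: 64-byte segment alignment and CRC32 integrity checks. This is an illustrative model, not the actual `rvf_container.rs` API; the function names are hypothetical.

```rust
// Illustrative sketch of two RVF container invariants: 64-byte segment
// alignment and CRC32 integrity checks. Function names are hypothetical,
// not the real rvf_container.rs API.

/// Round a segment offset up to the next 64-byte boundary.
fn align64(offset: usize) -> usize {
    (offset + 63) & !63
}

/// Bitwise CRC32 (IEEE polynomial, reflected), as used for integrity checks.
fn crc32(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            let mask = (crc & 1).wrapping_neg(); // all-ones if LSB set
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}

fn main() {
    assert_eq!(align64(0), 0);
    assert_eq!(align64(1), 64);
    assert_eq!(align64(130), 192);
    // Well-known CRC-32/IEEE check value for "123456789".
    assert_eq!(crc32(b"123456789"), 0xCBF4_3926);
    println!("alignment and CRC32 invariants hold");
}
```

Alignment padding wastes at most 63 bytes per segment but allows readers to memory-map weight segments directly into SIMD-friendly buffers.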

### What users need

- A way to **collect labeled CSI data** from their own environment (self-supervised or teacher-student from camera).
- A way to **train an .rvf model** from collected data without leaving the UI.
- A way to **load and switch models** in the live demo, seeing the quality improvement.
- Visibility into **training progress** (loss curves, validation PCK, early stopping).
- **Environment adaptation** via LoRA profiles (office → home → warehouse) without full retraining.

## Decision

### Phase 1: Data Collection & Self-Supervised Pretraining

#### 1.1 CSI Recording API
Add REST endpoints to the sensing server:
```
POST /api/v1/recording/start { duration_secs, label?, session_name }
POST /api/v1/recording/stop
GET /api/v1/recording/list
GET /api/v1/recording/download/:id
DELETE /api/v1/recording/:id
```
- Records raw CSI frames + extracted features to `.csi.jsonl` files.
- Optional camera-based label overlay via teacher model (Detectron2/MediaPipe on client).
- Each recording session tagged with environment metadata (room dimensions, node positions, AP count).
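The `.csi.jsonl` format is one JSON object per line, one line per frame. The sketch below shows a plausible record shape; the field names are assumptions for illustration, not the actual recording schema.

```rust
// Hypothetical shape of one `.csi.jsonl` record. Field names are
// assumptions, not the actual recording.rs schema.

struct CsiRecord {
    timestamp_us: u64,
    session: &'static str,
    label: Option<&'static str>, // present only for labeled sessions
    amplitude: Vec<f32>,         // per-subcarrier amplitudes
}

/// Serialize one frame as a single JSON line (hand-rolled for the sketch;
/// a real implementation would use serde).
fn to_jsonl_line(r: &CsiRecord) -> String {
    let amps: Vec<String> = r.amplitude.iter().map(|a| format!("{a:.3}")).collect();
    let label: String = match r.label {
        Some(l) => format!("\"{l}\""),
        None => "null".into(),
    };
    format!(
        "{{\"timestamp_us\":{},\"session\":\"{}\",\"label\":{},\"amplitude\":[{}]}}",
        r.timestamp_us, r.session, label, amps.join(",")
    )
}

fn main() {
    let rec = CsiRecord {
        timestamp_us: 1_700_000_000,
        session: "office-a",
        label: Some("walking"),
        amplitude: vec![0.5, 0.75, 1.0],
    };
    println!("{}", to_jsonl_line(&rec));
}
```

JSONL keeps recordings appendable and streamable: a crashed session loses at most the final partial line, and the training loader can read frames lazily.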

#### 1.2 Contrastive Pretraining (ADR-024 Phase 1)
- Self-supervised NT-Xent loss learns a 128-dim CSI embedding without pose labels.
- Positive pairs: adjacent frames from same person; negatives: different sessions/rooms.
- VICReg regularization prevents embedding collapse.
- Output: `.rvf` container with `SEG_EMBED` + `SEG_VEC` segments.
- Training triggered via `POST /api/v1/train/pretrain { dataset_ids[], epochs, lr }`.
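As a rough sketch of the NT-Xent objective used here: the loss for an anchor and its positive pair is the negative log of the positive pair's softmax weight among all of the anchor's similarities at temperature t. The toy version below works on plain vectors; the real training crate operates on batched tensors and adds VICReg terms.

```rust
// Minimal NT-Xent (normalized temperature-scaled cross-entropy) sketch:
// loss(i, j) = -log( exp(sim(i,j)/t) / sum_{k != i} exp(sim(i,k)/t) ).
// Illustrative only; not the training crate's implementation.

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// NT-Xent loss for anchor `i` whose positive is `j`, over the batch.
fn nt_xent(embeds: &[Vec<f32>], i: usize, j: usize, temp: f32) -> f32 {
    let pos = (cosine(&embeds[i], &embeds[j]) / temp).exp();
    let denom: f32 = embeds
        .iter()
        .enumerate()
        .filter(|(k, _)| *k != i)
        .map(|(_, e)| (cosine(&embeds[i], e) / temp).exp())
        .sum();
    -(pos / denom).ln()
}

fn main() {
    let batch = vec![
        vec![1.0, 0.0],   // anchor (one CSI frame)
        vec![0.99, 0.05], // positive: adjacent frame, same person
        vec![-1.0, 0.1],  // negative: different session/room
    ];
    let loss = nt_xent(&batch, 0, 1, 0.5);
    // Treating the true negative as the positive must give a larger loss.
    assert!(loss < nt_xent(&batch, 0, 2, 0.5));
    println!("nt_xent(anchor, positive) = {loss:.4}");
}
```

Minimizing this pulls adjacent frames of the same person together in the 128-dim space and pushes other sessions apart, which is exactly what lets the later HNSW index retrieve pose-similar frames without labels.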

### Phase 2: Supervised Training Pipeline

#### 2.1 Dataset Integration
- **MM-Fi loader**: Parse HDF5 files, 114→56 subcarrier interpolation via `ruvector-solver` sparse least-squares.
- **Wi-Pose loader**: Parse .mat files, 30→56 zero-padding with Hann window smoothing.
- **Self-collected**: `.csi.jsonl` from Phase 1 recording + camera-generated labels.
- All datasets implement the `CsiDataset` trait and produce `(amplitude[B,T*links,56], phase[B,T*links,56], keypoints[B,17,2], visibility[B,17])`.
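The shape contract above can be made concrete with the two subcarrier-count conversions. The MM-Fi loader is specified to use `ruvector-solver` sparse least-squares; plain linear interpolation is shown here only to illustrate the 114→56 resampling and the Wi-Pose 30→56 zero-padding.

```rust
// Sketch of the dataset subcarrier conversions. Linear interpolation
// stands in for the real sparse least-squares solve; assumption, not
// the actual loader code.

/// Resample a per-subcarrier vector to `out_len` points by linear
/// interpolation over the input index range.
fn resample(input: &[f32], out_len: usize) -> Vec<f32> {
    let in_len = input.len();
    (0..out_len)
        .map(|i| {
            let pos = i as f32 * (in_len - 1) as f32 / (out_len - 1) as f32;
            let lo = pos.floor() as usize;
            let hi = pos.ceil() as usize;
            let frac = pos - lo as f32;
            input[lo] * (1.0 - frac) + input[hi] * frac
        })
        .collect()
}

fn main() {
    // MM-Fi direction: 114 subcarriers down to the common 56.
    let mmfi: Vec<f32> = (0..114).map(|i| i as f32).collect();
    let out = resample(&mmfi, 56);
    assert_eq!(out.len(), 56);
    assert_eq!(out[0], 0.0);
    assert_eq!(out[55], 113.0); // endpoints preserved

    // Wi-Pose direction: 30 subcarriers zero-padded up to 56.
    let mut padded = vec![1.0f32; 30];
    padded.resize(56, 0.0);
    assert_eq!(padded.len(), 56);
    println!("114 -> {} and 30 -> {}", out.len(), padded.len());
}
```

Whatever the resampling method, both loaders must land on the same 56-wide axis so a single model head serves every dataset.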

#### 2.2 Training API
```
POST /api/v1/train/start {
  dataset_ids: string[],
  config: {
    epochs: 100,
    batch_size: 32,
    learning_rate: 3e-4,
    weight_decay: 1e-4,
    early_stopping_patience: 15,
    warmup_epochs: 5,
    pretrained_rvf?: string,  // Base model for fine-tuning
    lora_profile?: string,    // Environment-specific LoRA
  }
}
POST /api/v1/train/stop
GET /api/v1/train/status  // { epoch, train_loss, val_pck, val_oks, lr, eta_secs }
WS /ws/train/progress     // Real-time streaming of training metrics
```
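Among the config fields, `early_stopping_patience: 15` means training stops once the validation metric has gone 15 consecutive epochs without improving. A minimal sketch of that rule (type and field names hypothetical):

```rust
// Sketch of the early-stopping rule behind `early_stopping_patience`.
// Illustrative; the training crate's trainer tracks this internally.

struct EarlyStopper {
    patience: u32,
    best: f32,       // best validation metric seen so far
    since_best: u32, // epochs since the best was set
}

impl EarlyStopper {
    fn new(patience: u32) -> Self {
        Self { patience, best: f32::NEG_INFINITY, since_best: 0 }
    }

    /// Feed one epoch's validation PCK; returns true when training should stop.
    fn step(&mut self, val_pck: f32) -> bool {
        if val_pck > self.best {
            self.best = val_pck;
            self.since_best = 0;
        } else {
            self.since_best += 1;
        }
        self.since_best >= self.patience
    }
}

fn main() {
    let mut stopper = EarlyStopper::new(3);
    let metrics = [0.50, 0.62, 0.61, 0.60, 0.59]; // plateaus after epoch 1
    let mut stopped_at = None;
    for (epoch, &m) in metrics.iter().enumerate() {
        if stopper.step(m) {
            stopped_at = Some(epoch);
            break;
        }
    }
    assert_eq!(stopped_at, Some(4));
    println!("stopped at epoch {:?}, best PCK {:.2}", stopped_at, stopper.best);
}
```

The same `since_best` counter is what the UI's "Early Stopping Indicator" (Phase 4.2) would visualize as patience remaining.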

#### 2.3 RVF Export
On training completion:
- Best checkpoint exported as `.rvf` with `SEG_VEC` (weights), `SEG_MANIFEST` (metadata), `SEG_WITNESS` (training hash + final metrics), and optional `SEG_QUANT` (INT8 quantization).
- Stored in `data/models/` directory, indexed by model ID.
- `GET /api/v1/models` lists available models; `POST /api/v1/models/load { model_id }` hot-loads into inference.
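The optional `SEG_QUANT` step can be sketched as symmetric per-tensor INT8 quantization (scale = max|w| / 127), which is the mechanism behind the ~4x size reduction cited under Mitigations: one byte per weight instead of four. The actual RVF quantization scheme may differ.

```rust
// Sketch of symmetric per-tensor INT8 quantization for SEG_QUANT.
// Assumption for illustration; the real scheme may be per-channel
// or asymmetric.

/// Quantize weights to i8, returning (quantized values, scale).
fn quantize_int8(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0.0f32, |m, w| m.max(w.abs()));
    let scale = max_abs / 127.0;
    let q = weights.iter().map(|w| (w / scale).round() as i8).collect();
    (q, scale)
}

/// Recover approximate f32 weights at load time.
fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let weights = [0.8f32, -0.4, 0.05, -1.27];
    let (q, scale) = quantize_int8(&weights);
    let restored = dequantize(&q, scale);
    // Round-trip error is bounded by half a quantization step.
    for (w, r) in weights.iter().zip(&restored) {
        assert!((w - r).abs() <= scale / 2.0 + 1e-6);
    }
    assert_eq!(q.len(), weights.len()); // 1 byte each vs 4 bytes for f32
    println!("scale = {scale:.5}, quantized = {q:?}");
}
```

Storing only `(q, scale)` per tensor is what keeps the quantized `SEG_QUANT` segment roughly a quarter the size of the f32 `SEG_VEC` it replaces.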

### Phase 3: LoRA Environment Adaptation

#### 3.1 LoRA Fine-Tuning
- Given a base `.rvf` model, fine-tune only the LoRA adapter weights (rank 4-16) on environment-specific recordings.
- 5-10 minutes of labeled data from the new environment suffices.
- New LoRA profile appended to the existing `.rvf` via a `SEG_LORA` segment.
- `POST /api/v1/train/lora { base_model_id, dataset_ids[], profile_name, rank: 8, epochs: 20 }`.
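LoRA adaptation is cheap because only the low-rank factors are trained: for base weights W, the adapter learns B (d_out x r) and A (r x d_in), and activation applies W' = W + (alpha / r) * B * A. With rank 4-16 the adapter is a tiny fraction of the base weights. A toy merge under the standard LoRA formulation, not the nn crate's actual internals:

```rust
// Toy LoRA merge: W' = W + (alpha / r) * B * A. Standard LoRA math,
// used here as an illustration, not the crate's API.

fn matmul(a: &[Vec<f32>], b: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut out = vec![vec![0.0; m]; n];
    for i in 0..n {
        for p in 0..k {
            for j in 0..m {
                out[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    out
}

/// Merge a LoRA adapter into base weights in place.
fn merge_lora(w: &mut [Vec<f32>], b: &[Vec<f32>], a: &[Vec<f32>], alpha: f32) {
    let rank = a.len() as f32; // A is (r x d_in)
    let delta = matmul(b, a);
    for (wrow, drow) in w.iter_mut().zip(&delta) {
        for (wv, dv) in wrow.iter_mut().zip(drow) {
            *wv += (alpha / rank) * dv;
        }
    }
}

fn main() {
    // 2x2 base weights, rank-1 adapter: B is 2x1, A is 1x2.
    let mut w = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let b = vec![vec![1.0], vec![2.0]];
    let a = vec![vec![0.5, 0.5]];
    merge_lora(&mut w, &b, &a, 1.0);
    assert_eq!(w[0], vec![1.5, 0.5]);
    assert_eq!(w[1], vec![1.0, 2.0]);
    println!("merged weights: {w:?}");
}
```

Because the delta is additive, hot-swapping a profile amounts to subtracting the old adapter's delta and adding the new one, which is why Phase 3.2 can switch environments without reloading the base model.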

#### 3.2 Profile Switching
- `POST /api/v1/models/lora/activate { model_id, profile_name }` — hot-swap LoRA weights without reloading base model.
- UI dropdown lists available profiles per loaded model.

### Phase 4: UI Integration

#### 4.1 Model Management Panel (new: `ui/components/ModelPanel.js`)
- **Model Library**: List loaded and available `.rvf` models with metadata (version, dataset, PCK score, size, created date).
- **Model Inspector**: Show RVF segment breakdown — weight count, quantization type, LoRA profiles, embedding config, witness hash.
- **Load/Unload**: One-click model loading with progress bar.
- **Compare**: Side-by-side signal-derived vs model-inference toggle in Live Demo.

#### 4.2 Training Dashboard (new: `ui/components/TrainingPanel.js`)
- **Recording Controls**: Start/stop CSI recording, session list with duration and frame counts.
- **Training Progress**: Real-time loss curves (train loss, val loss) and metric charts (PCK, OKS) via WebSocket streaming.
- **Epoch Table**: Scrollable table of per-epoch metrics with best-epoch highlighting.
- **Early Stopping Indicator**: Visual countdown of patience remaining.
- **Export Button**: Download trained `.rvf` from browser.

#### 4.3 Live Demo Enhancements
- **Model Selector**: Dropdown in toolbar to switch between signal-derived and loaded `.rvf` models.
- **LoRA Profile Selector**: Sub-dropdown showing environment profiles for the active model.
- **Confidence Heatmap Overlay**: Per-keypoint confidence visualization when model is loaded (toggle in render mode dropdown).
- **Pose Trail**: Ghosted keypoint history showing last N frames of motion trajectory.
- **A/B Split View**: Left half signal-derived, right half model-inference for quality comparison.

#### 4.4 Settings Panel Extensions
- **Model section**: Default model path, auto-load on startup, GPU/CPU toggle, inference threads.
- **Training section**: Default hyperparameters, checkpoint directory, auto-export on completion.
- **Recording section**: Default recording directory, max duration, auto-label with camera.

#### 4.5 Dark Mode
All new panels follow the dark mode established in ADR-035 (`#0d1117` backgrounds, `#e0e0e0` text, translucent dark panels with colored accents).

### Phase 5: Inference Pipeline Wiring

#### 5.1 Model-Inference Pose Path
When a `.rvf` model is loaded:
1. CSI frame arrives (UDP or simulated).
2. Extract amplitude + phase tensors from subcarrier data.
3. Feed through ONNX session: `input[1, T*links, 56]` → `output[1, 17, 4]` (x, y, z, conf).
4. Apply Kalman smoothing from `pose_tracker.rs`.
5. Broadcast via WebSocket with `pose_source: "model_inference"`.
6. UI Estimation Mode badge switches from green "SIGNAL-DERIVED" to blue "MODEL INFERENCE".
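Step 4's smoothing can be illustrated with a 1-D constant-velocity Kalman filter applied per keypoint coordinate. This is a simplified sketch; the real `pose_tracker.rs` filter and its noise parameters may differ.

```rust
// Simplified 1-D constant-velocity Kalman filter, illustrating the
// per-coordinate smoothing in step 4. Noise parameters are assumptions.

struct Kalman1D {
    pos: f32,
    vel: f32,
    p: f32, // estimate variance
    q: f32, // process noise
    r: f32, // measurement noise
}

impl Kalman1D {
    fn new(initial: f32) -> Self {
        Self { pos: initial, vel: 0.0, p: 1.0, q: 0.01, r: 0.1 }
    }

    /// Predict forward by `dt`, then correct toward the raw measurement.
    fn update(&mut self, measured: f32, dt: f32) -> f32 {
        // Predict with constant velocity.
        self.pos += self.vel * dt;
        self.p += self.q;
        // Correct by the Kalman gain.
        let gain = self.p / (self.p + self.r);
        let innovation = measured - self.pos;
        self.pos += gain * innovation;
        self.vel += gain * innovation / dt;
        self.p *= 1.0 - gain;
        self.pos
    }
}

fn main() {
    let mut kf = Kalman1D::new(0.0);
    // Noisy measurements of a keypoint drifting right ~1 unit/frame.
    let raw = [1.2, 1.8, 3.1, 3.9, 5.2];
    let smoothed: Vec<f32> = raw.iter().map(|&m| kf.update(m, 1.0)).collect();
    assert_eq!(smoothed.len(), raw.len());
    assert!(smoothed[0] < raw[0]); // the filter damps raw jitter
    println!("smoothed track: {smoothed:?}");
}
```

Running one such filter per keypoint coordinate keeps the 17-point skeleton stable between model outputs, which matters most when inference confidence dips.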

#### 5.2 Progressive Loading (ADR-031 Layer A/B/C)
- **Layer A** (instant): Signal-derived pose starts immediately.
- **Layer B** (5-10s): Contrastive embeddings loaded, HNSW index warm.
- **Layer C** (30-60s): Full pose model loaded, inference active.
- Transitions are seamless; the UI badge updates automatically.
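One way to model the A/B/C ladder is a monotone state machine: layers only ever upgrade, so there is always a pose source and a correct badge. Illustrative only; the server's actual implementation may differ.

```rust
// Sketch of the progressive-loading ladder as a monotone state machine.
// Assumption for illustration, not the server's implementation.

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum PoseLayer {
    A, // signal-derived, available instantly
    B, // contrastive embeddings + HNSW index warm (~5-10 s)
    C, // full pose model loaded, ONNX inference active (~30-60 s)
}

struct ProgressiveLoader {
    layer: PoseLayer,
}

impl ProgressiveLoader {
    fn new() -> Self {
        Self { layer: PoseLayer::A }
    }

    /// Only upgrade; a stale or out-of-order report never downgrades output.
    fn report_ready(&mut self, ready: PoseLayer) {
        if ready > self.layer {
            self.layer = ready;
        }
    }

    /// Badge text as shown in the Live Demo (blue only once the model runs).
    fn badge(&self) -> &'static str {
        match self.layer {
            PoseLayer::C => "MODEL INFERENCE",
            _ => "SIGNAL-DERIVED",
        }
    }
}

fn main() {
    let mut loader = ProgressiveLoader::new();
    assert_eq!(loader.badge(), "SIGNAL-DERIVED");
    loader.report_ready(PoseLayer::C);
    loader.report_ready(PoseLayer::A); // stale report must not downgrade
    assert_eq!(loader.badge(), "MODEL INFERENCE");
    println!("active layer: {:?}", loader.layer);
}
```

Making the transition monotone is the design choice that guarantees the UI is never in a "loading..." state: Layer A output flows from the first frame and is only ever replaced by something better.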

## Consequences

### Positive
- Users can train a model on **their own environment** without external tools or Python dependencies.
- LoRA profiles mean a single base model adapts to multiple rooms in minutes, not hours.
- Training progress is visible in real-time — no black-box waiting.
- A/B comparison lets users see the quality jump from signal-derived to model-inference.
- RVF container bundles everything (weights, metadata, LoRA, witness) in one portable file.
- Self-supervised pretraining requires no labels — just leave ESP32s running.
- Progressive loading means the UI is never "loading..." — signal-derived kicks in immediately.

### Negative
- Training requires significant compute: GPU recommended for supervised training (CPU possible but 10-50x slower).
- MM-Fi and Wi-Pose datasets must be downloaded separately (10-50 GB each) — cannot be bundled.
- LoRA rank must be tuned per environment; too low loses expressiveness, too high overfits.
- ONNX Runtime adds ~50 MB to the binary size when GPU support is enabled.
- Real-time operation at 10 FPS leaves ~100 ms per frame end-to-end; keeping model inference itself near ~10 ms of that budget is still tight on CPU.
- Teacher-student labeling (camera → pose labels → CSI training) requires camera access, which may conflict with the privacy-first premise.

### Mitigations
- Provide pre-trained base `.rvf` model downloadable from releases (trained on MM-Fi + Wi-Pose).
- INT8 quantization (`SEG_QUANT`) reduces model size 4x and speeds inference ~2x on CPU.
- Camera-based labeling is **optional** — self-supervised pretraining works without camera.
- Training API validates VRAM availability before starting GPU training; falls back to CPU with warning.

## Implementation Order

| Phase | Effort | Dependencies | Priority |
|-------|--------|-------------|----------|
| 1.1 CSI Recording API | 2-3 days | sensing server | High |
| 1.2 Contrastive Pretraining | 3-5 days | ADR-024, recording API | High |
| 2.1 Dataset Integration | 3-5 days | ADR-015, CsiDataset trait | High |
| 2.2 Training API | 2-3 days | training crate, dataset loaders | High |
| 2.3 RVF Export | 1-2 days | RvfBuilder | Medium |
| 3.1 LoRA Fine-Tuning | 3-5 days | base trained model | Medium |
| 3.2 Profile Switching | 1 day | LoRA in RVF | Medium |
| 4.1 Model Panel UI | 2-3 days | models API | High |
| 4.2 Training Dashboard UI | 3-4 days | training API + WS | High |
| 4.3 Live Demo Enhancements | 2-3 days | model loading | Medium |
| 4.4 Settings Extensions | 1 day | model/training APIs | Low |
| 4.5 Dark Mode | 0.5 days | new panels | Low |
| 5.1 Inference Wiring | 3-5 days | ONNX backend, pose tracker | High |
| 5.2 Progressive Loading | 2-3 days | ADR-031 | Medium |

**Total estimate: 4-6 weeks** (phases can overlap; 1+2 parallel with 4).

## Files to Create/Modify

### New Files
- `ui/components/ModelPanel.js` — Model library, inspector, load/unload controls
- `ui/components/TrainingPanel.js` — Recording controls, training progress, metric charts
- `rust-port/.../sensing-server/src/recording.rs` — CSI recording API handlers
- `rust-port/.../sensing-server/src/training_api.rs` — Training API handlers + WS progress stream
- `rust-port/.../sensing-server/src/model_manager.rs` — Model loading, hot-swap, LoRA activation
- `data/models/` — Default model storage directory

### Modified Files
- `rust-port/.../sensing-server/src/main.rs` — Wire recording, training, and model APIs
- `rust-port/.../train/src/trainer.rs` — Add WebSocket progress callback, LoRA training mode
- `rust-port/.../train/src/dataset.rs` — MM-Fi and Wi-Pose dataset loaders
- `rust-port/.../nn/src/onnx.rs` — LoRA weight injection, INT8 quantization support
- `ui/components/LiveDemoTab.js` — Model selector, LoRA dropdown, A/B split view
- `ui/components/SettingsPanel.js` — Model and training configuration sections
- `ui/components/PoseDetectionCanvas.js` — Pose trail rendering, confidence heatmap overlay
- `ui/services/pose.service.js` — Model-inference keypoint processing
- `ui/index.html` — Add Training tab
- `ui/style.css` — Styles for new panels

## References
- ADR-015: MM-Fi + Wi-Pose training datasets
- ADR-016: RuVector training pipeline integration
- ADR-024: Project AETHER — contrastive CSI embedding model
- ADR-029: RuvSense multistatic sensing mode
- ADR-031: RuView sensing-first RF mode (progressive loading)
- ADR-035: Live sensing UI accuracy & data source transparency
- Issue: https://github.com/ruvnet/wifi-densepose/issues/92
- RVF format: `crates/wifi-densepose-sensing-server/src/rvf_container.rs`
- Training crate: `crates/wifi-densepose-train/src/trainer.rs`
- NN inference: `crates/wifi-densepose-nn/src/onnx.rs`