Commit 5fa61ba

feat: adaptive CSI classifier with signal smoothing pipeline (ADR-048) (ruvnet#144)
Add environment-tuned activity classification that learns from labeled ESP32 CSI recordings, replacing brittle static thresholds.

- Adaptive classifier: 15-feature logistic regression trained from JSONL recordings (variance, motion band, subcarrier stats: skew, kurtosis, entropy, IQR). Trains in <1s, persists as JSON, auto-loads on restart.
- Three-stage signal smoothing: adaptive baseline subtraction (α=0.003), EMA + trimmed-mean median filter (21-frame window), hysteresis debounce (4 frames). Motion classification now stable across seconds, not frames.
- Vital signs stabilization: outlier rejection (±8 BPM HR, ±2 BPM BR), trimmed mean, dead-band (±2 BPM HR), EMA α=0.02. HR holds steady for 10+ seconds instead of jumping 50 BPM every frame.
- Observatory auto-detect: always probes /health on startup, connects WebSocket to live ESP32 data automatically.
- New API endpoints: POST /api/v1/adaptive/train, GET /adaptive/status, POST /adaptive/unload for runtime model management.
- Updated user guide with Observatory, adaptive classifier tutorial, signal smoothing docs, and new troubleshooting entries.
1 parent f771cf8 commit 5fa61ba

6 files changed

Lines changed: 2435 additions & 49 deletions

# ADR-048: Adaptive CSI Activity Classifier

| Field | Value |
|-------|-------|
| Status | Accepted |
| Date | 2026-03-05 |
| Deciders | ruv |
| Depends on | ADR-024 (AETHER Embeddings), ADR-039 (Edge Processing), ADR-045 (AMOLED Display) |
## Context
WiFi-based activity classification using ESP32 Channel State Information (CSI) relies on hand-tuned thresholds to distinguish between activity states (absent, present_still, present_moving, active). These static thresholds are brittle — they don't account for:

- **Environment-specific signal patterns**: Room geometry, furniture, wall materials, and ESP32 placement all affect how CSI signals respond to human activity.
- **Temporal noise characteristics**: Real ESP32 CSI data at ~10 FPS has significant frame-to-frame jitter that causes classification to jump between states.
- **Vital signs estimation noise**: Heart rate and breathing rate estimates from Goertzel filter banks produce large swings (50+ BPM frame-to-frame) at low confidence levels.

The existing threshold-based approach produces noisy, unstable classifications that degrade the user experience in the Observatory visualization and the main dashboard.
## Decision
### 1. Three-Stage Signal Smoothing Pipeline

All CSI-derived metrics pass through a three-stage pipeline before reaching the UI:
#### Stage 1: Adaptive Baseline Subtraction

- EMA with α=0.003 (~30s time constant) tracks the "quiet room" noise floor
- Only updates during low-motion periods to avoid inflating baseline during activity
- 50-frame warm-up period for initial baseline learning
- Subtracts 70% of baseline from raw motion score to remove environmental drift
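A minimal Rust sketch of Stage 1, with illustrative names (the `Baseline` type and its API are not the server's actual code): during warm-up the baseline is a running mean of the first 50 frames; afterwards it is a slow EMA that updates only while the room is quiet, and 70% of it is subtracted from each raw motion score.

```rust
/// Illustrative sketch of the Stage 1 baseline tracker.
struct Baseline {
    value: f64,      // learned quiet-room noise floor
    frames_seen: u32,
}

const ALPHA: f64 = 0.003;          // ~30s time constant at ~10 FPS
const WARMUP_FRAMES: u32 = 50;     // learn an initial floor before gating
const SUBTRACT_FRACTION: f64 = 0.70;

impl Baseline {
    fn new() -> Self {
        Baseline { value: 0.0, frames_seen: 0 }
    }

    /// Feed one raw motion score; returns the baseline-corrected score.
    fn apply(&mut self, raw: f64, low_motion: bool) -> f64 {
        self.frames_seen += 1;
        if self.frames_seen <= WARMUP_FRAMES {
            // Warm-up: plain running mean over the first 50 frames.
            let n = self.frames_seen as f64;
            self.value += (raw - self.value) / n;
        } else if low_motion {
            // Slow EMA, frozen during activity so motion never inflates the floor.
            self.value += ALPHA * (raw - self.value);
        }
        (raw - SUBTRACT_FRACTION * self.value).max(0.0)
    }
}
```

Gating the EMA on `low_motion` is what keeps a person pacing around the room from being absorbed into the "quiet" floor.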
#### Stage 2: EMA + Median Filtering

- **Motion score**: Blended from 4 signals (temporal diff 40%, variance 20%, motion band power 25%, change points 15%), then EMA-smoothed with α=0.15
- **Vital signs**: 21-frame sliding window → trimmed mean (drop top/bottom 25%) → EMA with α=0.02 (~5s time constant)
- **Dead-band**: HR won't update unless trimmed mean differs by >2 BPM; BR needs >0.5 BPM
- **Outlier rejection**: HR jumps >8 BPM/frame and BR jumps >2 BPM/frame are discarded
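The vital-signs path of Stage 2 can be sketched as follows (type and field names are illustrative, not the module's real ones): the 21-frame window collapses to a trimmed mean, which moves the displayed value only when it escapes the dead-band, and then only through the slow EMA.

```rust
/// Illustrative sketch of the Stage 2 vital-signs smoother (heart-rate path).
struct VitalSmoother {
    window: Vec<f64>, // last 21 raw BPM estimates
    display: f64,     // value currently shown in the UI
}

const WINDOW: usize = 21;      // ~2s of history at ~10 FPS
const EMA_ALPHA: f64 = 0.02;   // ~5s time constant
const DEAD_BAND: f64 = 2.0;    // BPM, heart-rate dead-band

impl VitalSmoother {
    fn new(initial: f64) -> Self {
        VitalSmoother { window: Vec::new(), display: initial }
    }

    /// Trimmed mean over the middle 50% of the sorted window.
    fn trimmed_mean(&self) -> f64 {
        let mut v = self.window.clone();
        v.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let drop = v.len() / 4; // drop top and bottom 25%
        let mid = &v[drop..v.len() - drop];
        mid.iter().sum::<f64>() / mid.len() as f64
    }

    /// Feed one raw estimate; returns the smoothed display value.
    fn push(&mut self, raw: f64) -> f64 {
        self.window.push(raw);
        if self.window.len() > WINDOW {
            self.window.remove(0);
        }
        let tm = self.trimmed_mean();
        // Dead-band: ignore drifts within ±2 BPM of the displayed value.
        if (tm - self.display).abs() > DEAD_BAND {
            self.display += EMA_ALPHA * (tm - self.display);
        }
        self.display
    }
}
```

With these constants, a single spurious 110 BPM frame in a window of ~70 BPM readings is sorted to the top quartile and trimmed away before it can touch the display.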
#### Stage 3: Hysteresis Debounce

- Activity state transitions require 4 consecutive frames (~0.4s) of agreement before committing
- Prevents rapid flickering between states
- Independent candidate tracking resets on new direction changes
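The debounce amounts to a small state machine; a sketch under illustrative names (the enum mirrors the four activity states from the Context section):

```rust
/// Illustrative sketch of the Stage 3 hysteresis debouncer.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Activity { Absent, PresentStill, PresentMoving, Active }

struct Debouncer {
    committed: Activity, // state reported to the UI
    candidate: Activity, // current challenger
    streak: u32,         // consecutive frames the challenger has won
}

const REQUIRED_FRAMES: u32 = 4; // ~0.4s at ~10 FPS

impl Debouncer {
    fn new(initial: Activity) -> Self {
        Debouncer { committed: initial, candidate: initial, streak: 0 }
    }

    /// Feed one per-frame classification; returns the debounced state.
    fn update(&mut self, observed: Activity) -> Activity {
        if observed == self.committed {
            self.streak = 0; // agreement with committed state resets challengers
        } else if observed == self.candidate {
            self.streak += 1; // same challenger again
            if self.streak >= REQUIRED_FRAMES {
                self.committed = observed;
                self.streak = 0;
            }
        } else {
            // A different challenger restarts the count from one.
            self.candidate = observed;
            self.streak = 1;
        }
        self.committed
    }
}
```

A single-frame misclassification therefore never reaches the UI: it resets to the committed state on the very next agreeing frame.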
### 2. Adaptive Classifier Module (`adaptive_classifier.rs`)
A Rust-native environment-tuned classifier that learns from labeled JSONL recordings:

#### Feature Extraction (15 features)
| # | Feature | Source | Discriminative Power |
|---|---------|--------|---------------------|
| 0 | variance | Server | Medium — temporal CSI spread |
| 1 | motion_band_power | Server | Medium — high-frequency subcarrier energy |
| 2 | breathing_band_power | Server | Low — respiratory band energy |
| 3 | spectral_power | Server | Low — mean squared amplitude |
| 4 | dominant_freq_hz | Server | Low — peak subcarrier index |
| 5 | change_points | Server | Medium — threshold crossing count |
| 6 | mean_rssi | Server | Low — received signal strength |
| 7 | amp_mean | Subcarrier | Medium — mean amplitude across 56 subcarriers |
| 8 | amp_std | Subcarrier | **High** — amplitude spread (motion increases spread) |
| 9 | amp_skew | Subcarrier | Medium — asymmetry of amplitude distribution |
| 10 | amp_kurt | Subcarrier | **High** — peakedness (presence creates peaks) |
| 11 | amp_iqr | Subcarrier | Medium — inter-quartile range |
| 12 | amp_entropy | Subcarrier | **High** — spectral entropy (motion increases disorder) |
| 13 | amp_max | Subcarrier | Medium — peak amplitude value |
| 14 | amp_range | Subcarrier | Medium — amplitude dynamic range |
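Two of the high-power subcarrier features can be sketched as plain helpers (illustrative signatures, not the module's exact ones). Entropy is computed by normalising the amplitude vector into a probability distribution; motion spreads energy across subcarriers and raises it.

```rust
/// Population standard deviation of subcarrier amplitudes (feature 8).
fn amp_std(amps: &[f64]) -> f64 {
    let n = amps.len() as f64;
    let mean = amps.iter().sum::<f64>() / n;
    (amps.iter().map(|a| (a - mean).powi(2)).sum::<f64>() / n).sqrt()
}

/// Shannon entropy of the normalised amplitude distribution (feature 12).
/// Uniform energy across subcarriers maximises it; a single spike gives 0.
fn amp_entropy(amps: &[f64]) -> f64 {
    let total: f64 = amps.iter().sum();
    if total <= 0.0 {
        return 0.0;
    }
    -amps.iter()
        .map(|a| a / total)
        .filter(|p| *p > 0.0)
        .map(|p| p * p.ln())
        .sum::<f64>()
}
```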
#### Training Algorithm
- **Multiclass logistic regression** with softmax output
- **Mini-batch SGD** (batch size 32, 200 epochs, linear learning rate decay)
- **Z-score normalisation** using global mean/stddev computed from all training data
- Per-class statistics (mean, stddev) stored for Mahalanobis distance fallback
- Deterministic shuffling (LCG PRNG, seed 42) for reproducible results
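The training core is standard softmax regression; a self-contained sketch of the per-example gradient step (layouts and names are illustrative — the real module adds mini-batching, learning-rate decay, and the seeded LCG shuffle):

```rust
/// Numerically stable softmax over class logits.
fn softmax(logits: &[f64]) -> Vec<f64> {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|z| (z - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// One SGD step on cross-entropy loss for a single example.
/// `weights[c]` is the weight row for class c; `biases[c]` its bias.
/// In this ADR's setting: 4 classes, 15 z-scored features.
fn sgd_step(
    weights: &mut Vec<Vec<f64>>,
    biases: &mut Vec<f64>,
    x: &[f64],    // one normalised feature vector
    label: usize, // true class index
    lr: f64,
) {
    let logits: Vec<f64> = weights
        .iter()
        .zip(biases.iter())
        .map(|(w, b)| w.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f64>() + b)
        .collect();
    let probs = softmax(&logits);
    for c in 0..weights.len() {
        // Cross-entropy gradient for class c: (p_c - 1{c == label}) * x
        let err = probs[c] - if c == label { 1.0 } else { 0.0 };
        for (wi, xi) in weights[c].iter_mut().zip(x) {
            *wi -= lr * err * xi;
        }
        biases[c] -= lr * err;
    }
}
```

Because the gradient is exact and the feature space is tiny (4 × 15 weights), a few hundred epochs over ~3,000 frames easily fits in the sub-second training budget quoted below.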
#### Training Data Pipeline
1. Record labeled CSI sessions via `POST /api/v1/recording/start {"id":"train_<label>"}`
2. Filename-based label assignment: `*empty*`→absent, `*still*`→present_still, `*walking*`→present_moving, `*active*`→active
3. Train via `POST /api/v1/adaptive/train`
4. Model saved to `data/adaptive_model.json`, auto-loaded on server restart
#### Inference Pipeline
1. Extract 15-feature vector from current CSI frame
2. Z-score normalise using stored global mean/stddev
3. Compute softmax probabilities across 4 classes
4. Blend adaptive model confidence (70%) with smoothed threshold confidence (30%)
5. Override classification only when adaptive model is loaded
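Steps 2 and 4 of the pipeline above can be sketched as small helpers (illustrative names; the 70/30 weights are the ones stated in this ADR):

```rust
/// Step 2: z-score each feature with the stored global statistics.
/// A zero stddev (constant feature in training data) maps to 0.
fn z_score(x: &[f64], mean: &[f64], std: &[f64]) -> Vec<f64> {
    x.iter()
        .zip(mean)
        .zip(std)
        .map(|((xi, m), s)| if *s > 0.0 { (xi - m) / s } else { 0.0 })
        .collect()
}

/// Step 4: blend adaptive-model and smoothed threshold confidences.
fn blended_confidence(adaptive: f64, threshold: f64) -> f64 {
    0.70 * adaptive + 0.30 * threshold
}
```

Keeping a 30% contribution from the threshold path means a badly trained model cannot fully hijack the confidence signal; step 5's "only when loaded" guard does the same for the class label itself.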
### 3. API Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v1/adaptive/train` | Train classifier from `train_*` recordings |
| GET | `/api/v1/adaptive/status` | Check model status, accuracy, class stats |
| POST | `/api/v1/adaptive/unload` | Revert to threshold-based classification |
| POST | `/api/v1/recording/start` | Start recording CSI frames (JSONL) |
| POST | `/api/v1/recording/stop` | Stop recording |
| GET | `/api/v1/recording/list` | List available recordings |
### 4. Vital Signs Smoothing
| Parameter | Value | Rationale |
|-----------|-------|-----------|
| Median window | 21 frames | ~2s of history, robust to transients |
| Aggregation | Trimmed mean (middle 50%) | More stable than pure median, less noisy than raw mean |
| EMA alpha | 0.02 | ~5s time constant — readings change very slowly |
| HR dead-band | ±2 BPM | Prevents display creep from micro-fluctuations |
| BR dead-band | ±0.5 BPM | Same for breathing rate |
| HR max jump | 8 BPM/frame | Outlier rejection threshold |
| BR max jump | 2 BPM/frame | Outlier rejection threshold |
## Consequences
### Benefits

- **Stable UI**: Vital signs readings hold steady for 5-10+ seconds instead of jumping every frame
- **Environment adaptation**: Classifier learns the specific room's signal characteristics
- **Graceful fallback**: If no adaptive model is loaded, threshold-based classification with smoothing still works
- **No external dependencies**: Pure Rust implementation, no Python/ML frameworks needed
- **Fast training**: 3,000+ frames train in <1 second on commodity hardware
- **Portable model**: JSON serialisation, loadable on any platform
### Limitations
- **Single-link**: With one ESP32, the feature space is limited. Multi-AP setups (ADR-029) would dramatically improve separability.
- **No temporal features**: Current frame-level classification doesn't use sequence models (LSTM/Transformer). Could be added later.
- **Label quality**: Training accuracy depends heavily on recording quality (distinct activities, actual room vacancy for "empty").
- **Linear classifier**: Logistic regression may underfit non-linear decision boundaries. Could upgrade to 2-layer MLP if needed.
### Future Work
- **Online learning**: Continuously update model weights from user corrections
- **Sequence models**: Use sliding window of N frames as input for temporal pattern recognition
- **Contrastive pretraining**: Leverage ADR-024 AETHER embeddings for self-supervised feature learning
- **Multi-AP fusion**: Use ADR-029 multistatic sensing for richer feature space
- **Edge deployment**: Export learned thresholds to ESP32 firmware (ADR-039 Tier 2) for on-device classification
## Files
| File | Purpose |
|------|---------|
| `crates/wifi-densepose-sensing-server/src/adaptive_classifier.rs` | Adaptive classifier module (feature extraction, training, inference) |
| `crates/wifi-densepose-sensing-server/src/main.rs` | Smoothing pipeline, API endpoints, integration |
| `ui/observatory/js/hud-controller.js` | UI-side lerp smoothing (4% per frame) |
| `data/adaptive_model.json` | Trained model (auto-created by training endpoint) |
| `data/recordings/train_*.jsonl` | Labeled training recordings |
