README.md (61 additions, 7 deletions)
@@ -73,9 +73,9 @@ The system learns on its own and gets smarter over time — no hand-tuning, no l
|   | Feature | What It Means |
|---|---------|---------------|
| 🧠 |**Self-Learning**| Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap ([ADR-024](docs/adr/ADR-024-contrastive-csi-embedding-model.md)) |
| 🎯 |**AI Signal Processing**| Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically ([RuVector](https://github.com/ruvnet/ruvector)) |
| 🌍 |**Works Everywhere**| Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware ([ADR-027](docs/adr/ADR-027-cross-environment-domain-generalization.md)) |
No training cameras required — the [Self-Learning system (ADR-024)](docs/adr/ADR-024-contrastive-csi-embedding-model.md) bootstraps from raw WiFi data alone. [MERIDIAN (ADR-027)](docs/adr/ADR-027-cross-environment-domain-generalization.md) ensures the model works in any room, not just the one it trained in.
---
@@ -277,6 +277,59 @@ See [`docs/adr/ADR-024-contrastive-csi-embedding-model.md`](docs/adr/ADR-024-con
</details>
<details>
<summary><a id="cross-environment-generalization-adr-027"></a><strong>🌍 Cross-Environment Generalization (ADR-027 — Project MERIDIAN)</strong> — Train once, deploy in any room without retraining</summary>
WiFi pose models trained in one room lose 40-70% accuracy when moved to another — even in the same building. The model memorizes room-specific multipath patterns instead of learning human motion. MERIDIAN forces the network to forget which room it's in while retaining everything about how people move.
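
The "forget the room, keep the motion" objective is a standard adversarial domain-adaptation trick. Below is a minimal, illustrative Rust sketch of a gradient reversal layer, not the project's actual code: `GradientReversal`, `forward`, and `backward` are hypothetical names, and a real implementation would sit inside an autograd graph rather than operate on bare slices.

```rust
// Sketch of a gradient reversal layer (GRL). The forward pass is the
// identity; the backward pass flips the sign of the gradient coming from
// the domain (room) classifier, scaled by lambda. The feature extractor
// therefore learns to *maximize* the room classifier's loss, stripping
// room-specific cues while the pose head trains normally.
struct GradientReversal {
    lambda: f32, // reversal strength, typically ramped from 0 toward 1
}

impl GradientReversal {
    /// Forward: pass features through unchanged.
    fn forward(&self, features: &[f32]) -> Vec<f32> {
        features.to_vec()
    }

    /// Backward: negate and scale the gradient from the domain classifier.
    fn backward(&self, grad_from_domain_head: &[f32]) -> Vec<f32> {
        grad_from_domain_head
            .iter()
            .map(|&g| -self.lambda * g)
            .collect()
    }
}
```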
**What it does in plain terms:**
- Models trained in Room A work in Room B, C, D — without any retraining or calibration data
- Handles different WiFi hardware (ESP32, Intel 5300, Atheros) with automatic chipset normalization
- Knows where the WiFi transmitters are positioned and compensates for layout differences
- Generates synthetic "virtual rooms" during training so the model sees thousands of environments
- At deployment, adapts to a new room in seconds using a handful of unlabeled WiFi frames
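
The chipset-normalization step in the list above can be sketched concretely. This is an assumption-laden Rust illustration, not the project's API: it linearly resamples a CSI amplitude vector of any subcarrier count onto a canonical 56-subcarrier grid, then standardizes it to zero mean and unit variance so captures from different chipsets land in the same numeric range.

```rust
// Canonical subcarrier count mentioned in the Key Components table.
const CANONICAL_SUBCARRIERS: usize = 56;

/// Linearly resample a CSI amplitude vector to the canonical grid.
fn resample_to_canonical(csi: &[f32]) -> Vec<f32> {
    let n = csi.len();
    if n == 0 {
        return vec![0.0; CANONICAL_SUBCARRIERS];
    }
    (0..CANONICAL_SUBCARRIERS)
        .map(|i| {
            // Map target index i onto the source grid [0, n-1].
            let pos = i as f32 * (n - 1) as f32 / (CANONICAL_SUBCARRIERS - 1) as f32;
            let lo = pos.floor() as usize;
            let hi = (lo + 1).min(n - 1);
            let frac = pos - lo as f32;
            csi[lo] * (1.0 - frac) + csi[hi] * frac
        })
        .collect()
}

/// Standardize amplitudes in place to zero mean, unit variance.
fn standardize(csi: &mut [f32]) {
    let n = csi.len() as f32;
    let mean = csi.iter().sum::<f32>() / n;
    let var = csi.iter().map(|x| (x - mean).powi(2)).sum::<f32>() / n;
    let std = var.sqrt().max(1e-8); // guard: constant input has zero variance
    for x in csi.iter_mut() {
        *x = (*x - mean) / std;
    }
}
```

With this kind of normalization, an ESP32 capture (e.g. 52 subcarriers) and an Intel 5300 capture (e.g. 30 subcarriers) both reach the model as 56 standardized values.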
**Key Components**
| What | How it works | Why it matters |
|------|-------------|----------------|
|**Gradient Reversal Layer**| An adversarial classifier tries to guess which room the signal came from; the main network is trained to fool it | Forces the model to discard room-specific shortcuts |
|**Geometry Encoder (FiLM)**| Transmitter/receiver positions are Fourier-encoded and injected as scale+shift conditioning on every layer | The model knows *where* the hardware is, so it doesn't need to memorize layout |
|**Hardware Normalizer**| Resamples any chipset's CSI to a canonical 56-subcarrier format with standardized amplitude | Intel 5300 and ESP32 data look identical to the model |
|**Virtual Domain Augmentation**| Generates synthetic environments with random room scale, wall reflections, scatterers, and noise profiles | Training sees 1000s of rooms even with data from just 2-3 |
|**Rapid Adaptation (TTT)**| Contrastive test-time training with LoRA weight generation from a few unlabeled frames | Zero-shot deployment — the model self-tunes on arrival |
|**Cross-Domain Evaluator**| Leave-one-out evaluation across all training environments with per-environment PCK/OKS metrics | Proves generalization, not just memorization |
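
The FiLM row of the table is the least self-explanatory, so here is a toy Rust sketch of the data flow, under stated assumptions: `fourier_encode` and `film` are hypothetical names, and a real model learns the projection from the encoding to per-channel `gamma`/`beta` instead of receiving them directly.

```rust
/// Fourier-encode a 3D transmitter/receiver position into sin/cos
/// features at geometrically spaced frequencies, as in positional
/// encodings: a smooth, multi-scale representation of geometry.
fn fourier_encode(pos: &[f32; 3], n_freqs: usize) -> Vec<f32> {
    let mut enc = Vec::with_capacity(pos.len() * n_freqs * 2);
    for &p in pos {
        for k in 0..n_freqs {
            let freq = 2f32.powi(k as i32) * std::f32::consts::PI;
            enc.push((freq * p).sin());
            enc.push((freq * p).cos());
        }
    }
    enc
}

/// FiLM conditioning: scale and shift each feature channel,
/// y[i] = gamma[i] * x[i] + beta[i], where gamma/beta would be
/// predicted from the geometry encoding by a small learned network.
fn film(features: &[f32], gamma: &[f32], beta: &[f32]) -> Vec<f32> {
    features
        .iter()
        .zip(gamma.iter().zip(beta))
        .map(|(&x, (&g, &b))| g * x + b)
        .collect()
}
```

Because layout information arrives through `gamma`/`beta` on every layer, the backbone does not have to encode transmitter placement into its weights, which is exactly the memorization MERIDIAN is trying to avoid.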
- `adapt()` returns `Result<_, AdaptError>` — no panics on bad input
- Atomic instance counter ensures unique weight initialization across threads
- Division-by-zero guards on all augmentation parameters
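
To make the last bullet concrete, here is a minimal sketch of what guarded augmentation-parameter sampling can look like. All names and ranges below are assumptions for illustration, not the project's actual values: the point is that every divisor-like parameter is clamped away from zero so a degenerate random draw cannot produce NaN or infinity in the synthetic channel model.

```rust
/// Parameters for one synthetic "virtual room" (illustrative fields).
struct RoomParams {
    scale: f32,             // room size multiplier, used as a divisor below
    wall_reflectivity: f32, // fraction of energy reflected, in [0, 1]
    noise_std: f32,         // additive noise level, non-negative
}

/// Turn raw (possibly degenerate) random draws into safe parameters.
fn sample_room(raw_scale: f32, raw_reflect: f32, raw_noise: f32) -> RoomParams {
    RoomParams {
        scale: raw_scale.abs().max(1e-3),        // guard: never zero
        wall_reflectivity: raw_reflect.clamp(0.0, 1.0),
        noise_std: raw_noise.abs(),              // guard: never negative
    }
}

/// Toy path-loss term; both factors are bounded away from zero,
/// so the division can never blow up.
fn path_attenuation(distance: f32, room: &RoomParams) -> f32 {
    1.0 / (distance.max(1e-3) * room.scale)
}
```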
See [`docs/adr/ADR-027-cross-environment-domain-generalization.md`](docs/adr/ADR-027-cross-environment-domain-generalization.md) for full architectural details.
</details>
---
## 📦 Installation
@@ -512,7 +565,7 @@ The neural pipeline uses a graph transformer with cross-attention to map CSI fea