| [DDD Domain Model](docs/ddd/ruvsense-domain-model.md) | RuvSense bounded contexts, aggregates, domain events, and ubiquitous language |
---
See people, breathing, and heartbeats through walls — using only WiFi signals.

| | Feature | What It Means |
|---|---------|---------------|
| 👥 | **Multi-Person** | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |
| 🧱 | **Through-Wall** | WiFi passes through walls, furniture, and debris — works where cameras cannot |
| 🚑 | **Disaster Response** | Detects trapped survivors through rubble and classifies injury severity (START triage) |
| 🌐 | **Persistent Field Model** | Room eigenstructure via SVD enables RF tomography, drift detection, intention prediction, and adversarial detection ([ADR-030](docs/adr/ADR-030-ruvsense-persistent-field-model.md)) |
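The drift-detection idea behind the persistent field model can be illustrated with a minimal sketch, under stated assumptions: here a single dominant eigenvector of a CSI Gram matrix is extracted by power iteration (standing in for the full SVD described in ADR-030), and drift is scored by how far the room's dominant axis has rotated from a baseline. The function names, iteration budget, and cosine-based score are illustrative assumptions, not the project's API.

```rust
/// Illustrative only: power iteration extracts the dominant eigenvector of a
/// small Gram matrix, standing in for the full SVD eigenstructure of ADR-030.
fn dominant_eigenvector(gram: &[Vec<f64>]) -> Vec<f64> {
    let n = gram.len();
    let mut v = vec![1.0 / (n as f64).sqrt(); n];
    for _ in 0..200 {
        // w = G * v, then renormalize to unit length
        let mut w = vec![0.0; n];
        for i in 0..n {
            for j in 0..n {
                w[i] += gram[i][j] * v[j];
            }
        }
        let norm = w.iter().map(|x| x * x).sum::<f64>().sqrt();
        for x in w.iter_mut() {
            *x /= norm;
        }
        v = w;
    }
    v
}

/// Drift score in [0, 1]: 0 when the room's dominant axis is unchanged,
/// approaching 1 as the current axis rotates away from the baseline.
fn drift_score(baseline: &[f64], current: &[f64]) -> f64 {
    let dot: f64 = baseline.iter().zip(current).map(|(a, b)| a * b).sum();
    1.0 - dot.abs()
}
```

A furniture move or sensor tamper would shift the dominant axis and push the score away from zero; the real system tracks the full eigenstructure rather than a single axis.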
### Intelligence
The system learns on its own and gets smarter over time — no hand-tuning, no labeled data.

| | Feature | What It Means |
|---|---------|---------------|
| 🧠 | **Self-Learning** | Teaches itself from raw WiFi data — no labeled training sets, no cameras needed to bootstrap ([ADR-024](docs/adr/ADR-024-contrastive-csi-embedding-model.md)) |
| 🎯 | **AI Signal Processing** | Attention networks, graph algorithms, and smart compression replace hand-tuned thresholds — adapts to each room automatically ([RuVector](https://github.com/ruvnet/ruvector)) |
| 🌍 | **Works Everywhere** | Train once, deploy in any room — adversarial domain generalization strips environment bias so models transfer across rooms, buildings, and hardware ([ADR-027](docs/adr/ADR-027-cross-environment-domain-generalization.md)) |
| 👁️ | **Cross-Viewpoint Fusion** | Learned attention fuses multiple viewpoints with geometric bias — reduces body occlusion and depth ambiguity that physics prevents any single sensor from solving ([ADR-031](docs/adr/ADR-031-ruview-sensing-first-rf-mode.md)) |
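As a rough sketch of the cross-viewpoint fusion idea (not the ADR-031 implementation; the score formula, bias weight, and function names are invented for illustration), each viewpoint's estimate is weighted by a softmax over attention scores, where a geometric bias penalizes distant viewpoints so nearer, less occluded sensors dominate the fused result:

```rust
/// Illustrative attention score: a learned similarity term minus a geometric
/// bias that penalizes distant viewpoints (formula invented for this sketch).
fn attention_score(similarity: f64, distance_m: f64, bias_weight: f64) -> f64 {
    similarity - bias_weight * distance_m
}

/// Softmax-weighted fusion of per-viewpoint scalar estimates.
fn fuse(estimates: &[f64], scores: &[f64]) -> f64 {
    // Subtract the max score before exponentiating for numerical stability.
    let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let weights: Vec<f64> = scores.iter().map(|s| (s - max).exp()).collect();
    let total: f64 = weights.iter().sum();
    estimates
        .iter()
        .zip(&weights)
        .map(|(e, w)| e * w / total)
        .sum()
}
```

With equal scores this reduces to a plain average; as one viewpoint's score rises (closer, less occluded), its estimate dominates the fusion.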
### Performance & Deployment
Fast enough for real-time use, small enough for edge devices.
| | Feature | What It Means |
|---|---------|---------------|
| ⚡ | **Real-Time** | Analyzes WiFi signals in under 100 microseconds per frame — fast enough for live monitoring |
No training cameras required — the [Self-Learning system (ADR-024)](docs/adr/ADR-024-contrastive-csi-embedding-model.md) bootstraps from raw WiFi data alone. [MERIDIAN (ADR-027)](docs/adr/ADR-027-cross-environment-domain-generalization.md) ensures the model works in any room, not just the one it trained in.
`cd dist/witness-bundle-ADR028-*/ && bash VERIFY.sh`
A single WiFi receiver can track people, but has blind spots — limbs behind the torso are invisible, depth is ambiguous, and two people at similar range create overlapping signals. RuvSense solves this by coordinating multiple ESP32 nodes into a **multistatic mesh** where every node acts as both transmitter and receiver, creating N×(N-1) measurement links from N devices.
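The link count is easy to check: with every node acting as both transmitter and receiver, an N-node mesh yields one directed link per ordered pair of distinct nodes. A minimal sketch (the function name is illustrative, not the firmware's API):

```rust
/// Enumerate all directed (tx, rx) links in an N-node multistatic mesh:
/// every ordered pair of distinct nodes, N * (N - 1) links in total.
fn mesh_links(n: usize) -> Vec<(usize, usize)> {
    (0..n)
        .flat_map(|tx| (0..n).filter(move |&rx| rx != tx).map(move |rx| (tx, rx)))
        .collect()
}
```

Four nodes therefore give 12 measurement links, versus the single link of one transmitter-receiver pair, which is what fills in the blind spots behind the torso and disambiguates depth.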
- **Channel-Hopping Firmware** — ESP32 firmware extended with hop table, timer-driven channel switching, NDP injection stub; NVS config for all TDM parameters; fully backward-compatible
- **DDD Domain Model** — 6 bounded contexts, ubiquitous language, aggregate roots, domain events, full event bus specification
- **9,000+ lines of new Rust code** across 17 modules with 300+ tests
- **Security hardened** — Bounded buffers, NaN guards, no panics in public APIs, input validation at all boundaries
### v3.0.0 — 2026-03-01
Major release: AETHER contrastive embedding model, AI signal processing backbone, cross-platform adapters, Docker Hub images, and comprehensive README overhaul.