# Neuro-Ensemble Distillation (NED): A Cortical Labs–Inspired Framework for Stabilizing Biocomputing with Cultured Neurons

**Paper:** Neuro-Ensemble Distillation: A Cross-Domain Framework for Stabilizing Biocomputing with Cultured Neurons
**Author:** Stefan Beierle (Independent Researcher)
**Zenodo DOI:** https://zenodo.org/records/17259126

This repository contains the supplementary material and code for the NED paper.
NED is a thought experiment bridging biological learning substrates (cultured neurons) with classical ensemble learning and policy distillation from machine learning.
The framework addresses the stability challenge of cultured-neuron systems by:
- Training multiple cultures in parallel.
- Averaging and aligning extracted policies.
- Distilling a robust Consensus Policy.
- Replaying it into new cultures or neuromorphic silicon.
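The averaging-and-distillation step above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the repo's code: `distill_consensus`, the toy policies, and the matrix shapes are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of the NED consensus step: average several
# extracted "policies" (represented here as weight matrices) into
# one consensus model. Shapes and names are illustrative only.
def distill_consensus(policies):
    """Element-wise average of a list of aligned policy weight matrices."""
    stacked = np.stack(policies)   # shape: (ensemble, rows, cols)
    return stacked.mean(axis=0)    # consensus policy

# Five noisy "cultures" sharing a common underlying policy.
rng = np.random.default_rng(0)
true_policy = rng.normal(size=(4, 4))
cultures = [true_policy + rng.normal(scale=0.5, size=(4, 4)) for _ in range(5)]

consensus = distill_consensus(cultures)

# Averaging suppresses per-culture noise: the consensus sits closer
# to the shared signal than any single culture typically does.
err_single = np.abs(cultures[0] - true_policy).mean()
err_consensus = np.abs(consensus - true_policy).mean()
print(err_single, err_consensus)
```

The noise-reduction effect printed here is the same property the paper's heatmap figures visualize (individual cultures vs. consensus).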
## Repository Contents

- `NED_Paper_v1.0.pdf` → Full text of the paper.
- `ned_pseudocode.py` → Conceptual NED pipeline (Stimulate → Record → Extract → Align → Distill → Replay).
- `simulation_basic.py` → Runs a simplified LIF-based simulation and exports `consensus_model.json`.
- `heatmaps.py` → Visualizes policy noise reduction (individual cultures vs. consensus).
- `learning_curves.py` → Demonstrates acceleration through replay.
- `consensus_model.json` → Example Consensus Policy IR with metadata.
- `heatmaps.png` → Policy heatmaps (Bio Models vs. Consensus).
- `learning_curves.png` → Replay vs. naïve learning curves.
- `mermaid_cycle.md` → Mermaid diagram for the generational cycle (A→B→C).
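For readers unfamiliar with the neuron model, a minimal leaky integrate-and-fire (LIF) update looks roughly like this. It is a sketch only: the parameter values and function are illustrative assumptions, not the implementation in `simulation_basic.py`.

```python
# Minimal leaky integrate-and-fire neuron (Euler integration).
# All parameters (tau, thresholds, drive) are illustrative assumptions.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Integrate membrane voltage; return spike times (in steps)."""
    v = v_rest
    spikes = []
    for t, i_ext in enumerate(input_current):
        dv = (-(v - v_rest) + i_ext) / tau  # leak toward rest + external drive
        v += dv * dt
        if v >= v_thresh:                   # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                     # reset after spiking
    return spikes

# A constant supra-threshold drive produces regular spiking.
spike_times = simulate_lif([20.0] * 200)
print(spike_times)
```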
## Getting Started

1. Clone the repo:

   ```bash
   git clone https://github.com/<youruser>/collateral.git
   cd collateral/code
   ```

2. Run the basic simulation:

   ```bash
   python simulation_basic.py
   ```

   → Exports `outputs/consensus_model.json`.

3. Generate heatmaps:

   ```bash
   python heatmaps.py
   ```

   → Saves `outputs/heatmaps.png`.

4. Generate learning curves:

   ```bash
   python learning_curves.py
   ```

   → Saves `outputs/learning_curves.png`.

5. (Optional) Inspect the pseudocode pipeline:

   ```bash
   python ned_pseudocode.py
   ```
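Once exported, the consensus model is plain JSON and can be inspected from Python. A minimal sketch follows; the inline payload mirrors the example schema shown below with illustrative values, so it runs without the exported file (in practice you would `json.load` `outputs/consensus_model.json` instead).

```python
import json

# Inline payload mirroring the consensus_model.json schema
# (values illustrative; real runs load outputs/consensus_model.json).
payload = '''{
  "model_name": "ned_consensus_v1",
  "neurons": 32,
  "parameters": {"weights": [[0.01, -0.05]], "delays": [[3.12, 2.98]]},
  "metadata": {"ensemble_size": 5, "NEDI": 2.97,
               "stimulus_protocol": "pong_task"}
}'''

model = json.loads(payload)
print(model["model_name"], "neurons:", model["neurons"])
print("ensemble size:", model["metadata"]["ensemble_size"])
print("protocol:", model["metadata"]["stimulus_protocol"])
```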
## Example Consensus Policy (`consensus_model.json`)

```json
{
  "model_name": "ned_consensus_v1",
  "neurons": 32,
  "parameters": {
    "weights": [[0.01, -0.05, ...]],
    "delays": [[3.12, 2.98, ...]]
  },
  "metadata": {
    "ensemble_size": 5,
    "NEDI": 2.97,
    "input_group_indices": [0, 1, 2, 3, 4, 5, 6, 7],
    "output_group_indices": [24, 25, 26, 27, 28, 29, 30, 31],
    "stimulus_protocol": "pong_task"
  }
}
```

## References

- Cortical Labs: GitHub Repository
- Hinton, G. et al. (2015). Distilling the Knowledge in a Neural Network.
- Schmidhuber, J. (1992). Learning Complex, Extended Sequences Using the Principle of History Compression.
- Bengio, Y. (2009). Learning Deep Architectures for AI.
## Disclaimer

This project is a conceptual thought experiment. It does not use live biological cultures; it provides a computational framework and simulation only. Real-world application with cultured neurons would require a dedicated laboratory setup (e.g., the Cortical Labs CL1).