Neuro-Ensemble Distillation (NED): A Cortical Labs–Inspired Framework for Stabilizing Biocomputing with Cultured Neurons

Paper: Neuro-Ensemble Distillation: A Cross-Domain Framework for Stabilizing Biocomputing with Cultured Neurons
Author: Stefan Beierle (Independent Researcher)
Zenodo record: https://zenodo.org/records/17259126


📖 Overview

This repository contains the supplementary material and code for the NED paper.
NED is a thought experiment bridging biological learning substrates (cultured neurons) with classical ensemble learning and policy distillation from machine learning.

The framework addresses the stability challenge of cultured-neuron systems through four steps (sketched in code below):

  • Training multiple cultures in parallel.
  • Averaging and aligning extracted policies.
  • Distilling a robust Consensus Policy.
  • Replaying it into new cultures or neuromorphic silicon.
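
The loop below is a minimal Python sketch of that cycle. The function names (extract_policy, align, distill), the z-score alignment, and the array shapes are illustrative assumptions rather than the repository's API; see code/ned_pseudocode.py for the actual pipeline.

    import numpy as np

    def extract_policy(recording):
        # Illustrative "Extract": summarize a recording into a policy vector
        # (here, the mean response per output channel).
        return recording.mean(axis=0)

    def align(policies):
        # Illustrative "Align": z-score each culture's policy so all cultures
        # share a common scale before distillation.
        return [(p - p.mean()) / (p.std() + 1e-8) for p in policies]

    def distill(aligned):
        # "Distill": element-wise mean across the ensemble -> Consensus Policy.
        return np.mean(aligned, axis=0)

    # Stand-ins for "Stimulate -> Record": five simulated culture recordings,
    # each 100 trials x 32 channels.
    rng = np.random.default_rng(0)
    recordings = [rng.normal(size=(100, 32)) for _ in range(5)]

    policies = [extract_policy(r) for r in recordings]
    consensus = distill(align(policies))
    print(consensus.shape)  # (32,) -> one consensus policy, ready for "Replay"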

📂 Repository Contents

paper/

  • NED_Paper_v1.0.pdf → Full text of the paper.

code/

  • ned_pseudocode.py → Conceptual NED pipeline (Stimulate → Record → Extract → Align → Distill → Replay).
  • simulation_basic.py → Runs a simplified LIF-based simulation and exports consensus_model.json.
  • heatmaps.py → Visualizes policy noise reduction (individual cultures vs. consensus).
  • learning_curves.py → Demonstrates acceleration through replay.
  • consensus_model.json → Example Consensus Policy IR with metadata.
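
For orientation, the following is a minimal leaky integrate-and-fire (LIF) update in the spirit of simulation_basic.py; all parameter values and variable names are illustrative assumptions, not the script's actual code.

    import numpy as np

    # Minimal LIF neuron: the membrane potential v decays toward rest and
    # emits a spike (then resets) when it crosses threshold.
    tau_m, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -70.0  # ms / mV
    dt = 1.0  # ms

    rng = np.random.default_rng(1)
    v, spikes = v_rest, []
    for t in range(200):
        i_in = rng.normal(1.8, 0.5)            # noisy input current (a.u.)
        v += dt / tau_m * (v_rest - v) + i_in  # leaky integration step
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                        # reset after each spike
    print(f"{len(spikes)} spikes in 200 ms")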

figures/

  • heatmaps.png → Policy heatmaps (Bio Models vs. Consensus).
  • learning_curves.png → Replay vs. Naïve learning curves.
  • mermaid_cycle.md → Mermaid diagram for generational cycle (A→B→C).
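
The noise-reduction effect behind heatmaps.png can be reproduced in a few lines. The matplotlib sketch below uses random stand-in policies on a hypothetical 8×8 grid, not the repository's data, and writes to a separate file so the real figure is not overwritten.

    import os
    import numpy as np
    import matplotlib.pyplot as plt

    os.makedirs("outputs", exist_ok=True)
    rng = np.random.default_rng(2)
    base = rng.normal(size=(8, 8))                   # shared "true" policy grid
    cultures = [base + rng.normal(0, 0.8, size=(8, 8)) for _ in range(5)]
    consensus = np.mean(cultures, axis=0)            # noise shrinks ~ 1/sqrt(k)

    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].imshow(cultures[0])
    axes[0].set_title("Single culture (noisy)")
    axes[1].imshow(consensus)
    axes[1].set_title("Consensus (k = 5)")
    fig.savefig("outputs/heatmaps_demo.png")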

🛠️ How to Run

  1. Clone repo:

    git clone https://github.com/sbeierle/cortical_ned.git
    cd cortical_ned/code
  2. Run the basic simulation:

    python simulation_basic.py

    → Exports outputs/consensus_model.json.

  3. Generate heatmaps:

    python heatmaps.py

    → Saves outputs/heatmaps.png.

  4. Generate learning curves:

    python learning_curves.py

    → Saves outputs/learning_curves.png.

  5. (Optional) Inspect pseudocode pipeline:

    python ned_pseudocode.py

📊 JSON IR Example

{
  "model_name": "ned_consensus_v1",
  "neurons": 32,
  "parameters": {
    "weights": [[0.01, -0.05, ...]],
    "delays": [[3.12, 2.98, ...]]
  },
  "metadata": {
    "ensemble_size": 5,
    "NEDI": 2.97,
    "input_group_indices": [0,1,2,3,4,5,6,7],
    "output_group_indices": [24,25,26,27,28,29,30,31],
    "stimulus_protocol": "pong_task"
  }
}
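
A model exported in this format can be reloaded and sanity-checked with the standard library alone. The snippet below is a sketch assuming the file sits at outputs/consensus_model.json with exactly the fields shown above.

    import json

    # Load the exported Consensus Policy IR (path as written by
    # simulation_basic.py; adjust if you saved it elsewhere).
    with open("outputs/consensus_model.json") as f:
        ir = json.load(f)

    n, meta = ir["neurons"], ir["metadata"]
    # Input/output electrode groups must index valid neurons.
    assert all(0 <= i < n for i in meta["input_group_indices"])
    assert all(0 <= i < n for i in meta["output_group_indices"])
    print(ir["model_name"], "| ensemble:", meta["ensemble_size"],
          "| NEDI:", meta["NEDI"])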

📌 References

  • Cortical Labs: GitHub repository.
  • Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network. arXiv:1503.02531.
  • Schmidhuber, J. (1992). Learning Complex, Extended Sequences Using the Principle of History Compression. Neural Computation, 4(2), 234–242.
  • Bengio, Y. (2009). Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127.

⚠️ Disclaimer

This project is a conceptual thought experiment.
It does not use live biological cultures; it provides a computational framework and simulations.
Real-world application with cultured neurons would require a dedicated laboratory setup (e.g., the Cortical Labs CL1).
