
Neuromorphic Sequential Benchmark

The goal of the Neuromorphic Sequential Benchmark is to enable consistent performance comparisons across different Spiking Neural Network (SNN) approaches to temporal processing, and to make advances in the field easy to track.

This repository contains the source code and implementation details associated with two research papers:

  1. "Spiking Neural Networks for Temporal Processing: Status Quo and Future Prospects", presents a systematic evaluation of the temporal processing capabilities of recently proposed SNN approaches and highlights key limitations in existing neuromorphic benchmarks.
  2. "Neuromorphic Sequential Arena: A Benchmark for Neuromorphic Temporal Processing [IJCAI 2025]", introduces a comprehensive benchmark suite tailored for neuromorphic temporal processing.

We provide guidelines to ensure fair and consistent evaluation of emerging SNN approaches with this repository. We warmly invite researchers and practitioners in neuromorphic temporal processing to engage with us through feedback and contributions. By integrating more comprehensive temporal processing benchmarks and advanced SNN methods, your contributions can significantly advance the field. We value your insights and look forward to collaborating to drive innovation together.

News

  • [Aug 2025]: 🔥 Synchronized with NeuroBench to support standardized evaluation metrics. See details.
  • [May 2025]: 🎉 The Neuromorphic Sequential Arena paper has been accepted to IJCAI 2025. See details.
  • [Feb 2025]: 🚀 Launched the Neuromorphic Sequential Benchmark with an initial paper release. See details.

Table of Contents

  1. Spiking Neural Networks for Temporal Processing: Status Quo and Future Prospects
  2. Neuromorphic Sequential Arena: A Benchmark for Neuromorphic Temporal Processing

Spiking Neural Networks for Temporal Processing: Status Quo and Future Prospects

Abstract: Temporal processing is fundamental for both biological and artificial intelligence systems, as it enables the comprehension of dynamic environments and facilitates timely responses. Spiking Neural Networks (SNNs) excel in handling such data with high efficiency, owing to their rich neuronal dynamics and sparse activity patterns. Given the recent surge in the development of SNNs, there is an urgent need for a comprehensive evaluation of their temporal processing capabilities. In this paper, we first conduct an in-depth assessment of commonly used neuromorphic benchmarks, revealing critical limitations in their ability to evaluate the temporal processing capabilities of SNNs. To bridge this gap, we further introduce a benchmark suite consisting of three temporal processing tasks characterized by rich temporal dynamics across multiple timescales. Utilizing this benchmark suite, we perform a thorough evaluation of recently introduced SNN approaches to elucidate the current status of SNNs in temporal processing. Our findings indicate significant advancements in recently developed spiking neuron models and neural architectures regarding their temporal processing capabilities, while also highlighting a performance gap in handling long-range dependencies when compared to state-of-the-art non-spiking models. Finally, we discuss the key challenges and outline potential avenues for future research.

Features

The following illustration depicts the Segregated Temporal Probe (STP), an analytical tool for assessing how effectively neuromorphic benchmarks evaluate the temporal processing capabilities of SNNs. The STP incorporates three algorithms, Spatio-Temporal Backpropagation (STBP), Spatial Domain Backpropagation (SDBP), and No Temporal Domain (NoTD), which systematically disrupt the temporal processing pathways within an SNN to elucidate their significance.

STP overview
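
To make the probes concrete, here is a minimal sketch of one plausible reading of the three variants, assuming a PyTorch-style LIF neuron. The function name `lif_step` and the `mode` switch are hypothetical illustrations, not the repository's API; the actual neuron implementations live under `neuroseqbench/network/neuron`:

import torch

def lif_step(v, x, tau=2.0, v_th=1.0, mode="STBP"):
    # One simulation step of a LIF neuron under the three STP probes.
    # STBP: full spatio-temporal backpropagation, so gradients flow through v.
    if mode == "SDBP":
        v = v.detach()            # keep the temporal state, but block its gradient
    elif mode == "NoTD":
        v = torch.zeros_like(v)   # discard the temporal state entirely
    v = v + (x - v) / tau         # leaky integration of the input current
    spike = (v >= v_th).float()   # threshold crossing (surrogate gradient omitted)
    v = v - spike * v_th          # soft reset after a spike
    return v, spike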

The table below provides a comprehensive overview of the SNN methods that have been evaluated and compared. Each method is detailed with specific examples and their corresponding locations within the repository.

| Components | Description / Instances | Repository Location |
| --- | --- | --- |
| Neuron Model | LIF, ALIF, PLIF, GLIF, Normalization Layers, etc. | `neuroseqbench/network/neuron` |
| Neural Architecture | DCLS-Delays, SpikingTCN, Gated Spiking Neuron, Spike-Driven Transformer, etc. | `neuroseqbench/network/structure` |
| Dataset | Penn Treebank, Permuted Sequential MNIST, Binary Adding, etc. | `neuroseqbench/utils/dataset` |

Main Results

The experimental results will be continuously updated to reflect the latest advancements in the field.

  • Results for different learning algorithms on temporal processing tasks. "FF" and "Rec." refer to "feedforward" and "recurrent" architectures, respectively. "PPL" stands for "perplexity". PTB ($T=70$) is evaluated in PPL $\downarrow$; PS-MNIST ($T=784$) and Binary Adding ($T=100$) in Acc. $\uparrow$.

| Method | PTB (FF) | PTB (Rec.) | PS-MNIST (FF) | PS-MNIST (Rec.) | Binary Adding (FF) | Binary Adding (Rec.) |
| --- | --- | --- | --- | --- | --- | --- |
| STBP | 129.96 | 111.96 | 57.45 | 72.97 | 29.60 | 53.35 |
| T-STBP | 137.80 | 120.58 | 53.00 | 71.03 | 23.00 | 51.50 |
| E-prop | - | 125.54 | - | 52.88 | - | 50.85 |
| OTTT | 141.77 | - | 44.61 | - | 17.20 | - |
| SLTT | 149.86 | - | 40.53 | - | 15.50 | - |
  • Results for different neuron models on temporal processing tasks. Datasets and metrics are as in the table above; "#Params." gives the approximate parameter count of each network.

| Method | PTB (FF) | PTB (Rec.) | PS-MNIST (FF) | PS-MNIST (Rec.) | Binary Adding (FF) | Binary Adding (Rec.) |
| --- | --- | --- | --- | --- | --- | --- |
| #Params. | ~5M | ~6M | ~90K | ~160K | ~20K | ~40K |
| LIF | 129.96 | 111.96 | 57.45 | 72.97 | 29.60 | 53.35 |
| PLIF | 123.76 | 105.64 | 55.86 | 77.32 | 29.40 | 53.25 |
| ALIF | 113.67 | 102.25 | 73.90 | 85.78 | 40.30 | 68.00 |
| adLIF | 118.52 | 97.22 | 85.93 | 89.53 | 42.00 | 99.05 |
| GLIF | 111.58 | 103.07 | 95.42 | 95.04 | 90.15 | 63.60 |
| LTC | 104.10 | 99.09 | 86.33 | 90.94 | 100.00 | 100.00 |
| SPSN | 120.43 | - | 83.88 | - | 45.70 | - |
| TCLIF | 286.71 | 255.67 | 86.81 | 92.08 | 19.10 | 19.90 |
| LM-H | 122.69 | 102.05 | 77.70 | 83.14 | 99.25 | 96.10 |
| CLIF | 128.28 | 108.21 | 43.90 | 70.44 | 19.10 | 64.30 |
| DH-LIF | 115.61 | 100.55 | 79.12 | 91.07 | 98.85 | 99.35 |
| CELIF | 112.35 | 106.52 | 97.76 | 97.66 | 48.40 | 100.00 |
| PMSN | 113.24 | - | 96.28 | - | 100.00 | - |
  • Results for different neural architectures on temporal processing tasks. PTB ($T=70$) is evaluated in PPL $\downarrow$ and PS-MNIST ($T=784$) in Acc. $\uparrow$; for Binary Adding, both the sequence length $T \uparrow$ and Acc. $\uparrow$ are reported. Approximate parameter counts: ~5M (PTB), ~90K (PS-MNIST), ~40K (Binary Adding).

| Method | PTB (PPL $\downarrow$) | PS-MNIST (Acc. $\uparrow$) | Binary Adding ($T \uparrow$) | Binary Adding (Acc. $\uparrow$) |
| --- | --- | --- | --- | --- |
| LIF | 129.96 | 57.45 | 100 | 34.15 |
| LIF w/ DCLS-Delays | 89.87 | 68.98 | 100 | 51.85 |
| TCN | 102.20 | 95.10 | 1200 | 69.95 |
| SpikingTCN | 114.46 | 93.76 | 1200 | 61.95 |
| LSTM | 88.08 | 92.41 | 2400 | 100 |
| Gated Spiking Neuron | 99.98 | 80.13 | 1200 | 29.85 |
| Transformer | 112.43 | 97.64 | 2400 | 100 |
| Spike-Driven Transformer ($T_\text{in}=4$) | 152.41 | 96.21 | 2400 | 98.15 |
| Spike-Driven Transformer ($T_\text{in}=1$) | 327.82 | 95.01 | 2400 | 88.05 |

Steps to Reproduce Results

Dependencies

# Environment dependencies
torch, torchvision, torchaudio

# Configuration management
toml

# Data processing
datasets, h5py, tqdm

# Delay learning model
dcls
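
The list above gives package names rather than pinned versions; in a fresh environment, something like `pip install torch torchvision torchaudio toml datasets h5py tqdm dcls` should cover it, though the exact versions used in the paper may differ.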

To incorporate the neuroseqbench module into your experimental code, please follow these steps:

git clone https://github.com/liyc5929/neuroseqbench.git
pip install -e .
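
A quick way to confirm the editable install succeeded is `python -c "import neuroseqbench"`, which should exit without errors.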

Experiments

Each experiment in the paper has a corresponding toml configuration in the folder experiments/segregated_temporal_probe/. We also provide scripts for all experiments:

  • run_01_STP_on_benchmarks.sh
  • run_02_training_algo_on_benchmarks.sh
  • run_03_surro_grad_on_benchmarks.sh
  • run_04_normalization_on_benchmarks.sh
  • run_05_spiking_neuron_on_benchmarks.sh
  • run_06_neuron_arch_on_benchmarks.sh

As an example, you can reproduce the spiking neuron model experiments by executing run_05_spiking_neuron_on_benchmarks.sh, which contains the following commands:

# PennTreebank
python ./experiments/runner.py --paper_name segregated_temporal_probe --experiment_name 05_spiking_neuron_on_benchmarks --experiment_item PTB_LIF_feedforward --data_root /benchmark_data --device 0
python ./experiments/runner.py --paper_name segregated_temporal_probe --experiment_name 05_spiking_neuron_on_benchmarks --experiment_item PTB_LIF_recurrent --data_root /benchmark_data --device 0

# PS-MNIST
python ./experiments/runner.py --paper_name segregated_temporal_probe --experiment_name 05_spiking_neuron_on_benchmarks --experiment_item PSMNIST_LIF_feedforward --data_root /benchmark_data --device 0
python ./experiments/runner.py --paper_name segregated_temporal_probe --experiment_name 05_spiking_neuron_on_benchmarks --experiment_item PSMNIST_LIF_recurrent --data_root /benchmark_data --device 0

# Binary Adding
python ./experiments/runner.py --paper_name segregated_temporal_probe --experiment_name 05_spiking_neuron_on_benchmarks --experiment_item BinaryAdding_LIF_feedforward --data_root /benchmark_data --device 0
python ./experiments/runner.py --paper_name segregated_temporal_probe --experiment_name 05_spiking_neuron_on_benchmarks --experiment_item BinaryAdding_LIF_recurrent --data_root /benchmark_data --device 0
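
The other neuron models and datasets follow the same pattern by swapping the `--experiment_item` value; for instance, `PTB_PLIF_recurrent` would presumably select the recurrent PLIF run on PTB. The valid item names are defined by the toml configurations under experiments/segregated_temporal_probe/, so treat any name not shown above as an assumption and check there first.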

Neuromorphic Sequential Arena: A Benchmark for Neuromorphic Temporal Processing

Abstract: Temporal processing is vital for extracting meaningful information from time-varying signals. Recent advancements in Spiking Neural Networks (SNNs) have shown immense promise in efficiently processing these signals. However, progress in this field has been impeded by the lack of effective and standardized benchmarks, which complicates the consistent measurement of technological advancements and limits the practical applicability of SNNs. To bridge this gap, we introduce the Neuromorphic Sequential Arena (NSA), a comprehensive benchmark that offers an effective, versatile, and application-oriented evaluation framework for neuromorphic temporal processing. The NSA includes seven real-world temporal processing tasks from a diverse range of application scenarios, each capturing rich temporal dynamics across multiple timescales. Utilizing NSA, we conduct extensive comparisons of recently introduced spiking neuron models and neural architectures, presenting comprehensive baselines in terms of task performance, training speed, memory usage, and energy efficiency. Our findings emphasize an urgent need for efficient SNN designs that can consistently deliver high performance across tasks with varying temporal complexities while maintaining low computational costs. NSA enables systematic tracking of advancements in neuromorphic algorithm research and paves the way for the development of effective and efficient neuromorphic temporal processing systems.

Steps to Reproduce Results

Dependencies

# Environment dependencies
torch, torchvision, torchaudio

# Configuration management
toml

# Data processing
datasets, h5py, tqdm, pandas, scipy

# S4D model
einops

# WISDM dataset
scikit-learn

To incorporate the neuroseqbench module into your experimental code, please follow these steps:

git clone https://github.com/liyc5929/neuroseqbench.git
pip install -e .

If you've configured a uv environment, you can simply run:

uv sync

to install all dependencies at once.

Data Availability

✅ All datasets used in these experiments are hosted on our Hugging Face repository to facilitate easy access and ensure reproducibility.

📦 For detailed dataset preparation procedures, including how to download and preprocess the raw data for each task, please refer to the dataset preparation section in experiments/neuromorphic_sequential_arena/README.md.
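
If you prefer to fetch a dataset programmatically, the `datasets` library listed in the dependencies can pull directly from the Hugging Face Hub. The repository path below is a placeholder, not an actual dataset name; substitute the path given in the dataset preparation README:

from datasets import load_dataset

# Placeholder repository path; replace with the actual <org>/<dataset> name
# listed in experiments/neuromorphic_sequential_arena/README.md.
ds = load_dataset("ORG/DATASET_NAME")
print(ds)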

Experiments

✨ Before running any experiments, please make sure all dependencies are properly installed for each task.

📌 Note: For the AD and ASR tasks, please make sure to follow the dependency setup described in their respective AD/README.md and ASR/README.md files.

Each experiment in the paper is organized by task and placed under experiments/neuromorphic_sequential_arena/. We provide the following scripts to run all experiments for each task:

bash AL/run_all.sh
bash HAR/run_all.sh
bash EEG-MI/run_all.sh
bash SSL/run_all.sh
bash ALR/run_all.sh
bash AD/run_all.sh
bash ASR/run_all.sh

Extended Metrics Support

We provide built-in support for NeuroBench metrics, enabling standardized, hardware-agnostic evaluation of neuromorphic models. These metrics have been integrated into our pipeline and tested on selected tasks.
The implementation is available at src/neuroseqbench/utils/criterion/neurobench.

To enable NeuroBench metrics during evaluation, simply add the following flag when running your main script:

--use-neurobench-metrics

Currently supported in NSA benchmark tasks: AL, HAR, EEG-MI, and SSL.

Cite & Contact

If you find this repository helpful for your work, please cite it as follows:

@article{segregatedtemporalprobe,
    title = {Spiking Neural Networks for Temporal Processing: Status Quo and Future Prospects}, 
    author = {Chenxiang Ma and Xinyi Chen and Yanchen Li and Qu Yang and Yujie Wu and Guoqi Li and Gang Pan and Huajin Tang and Kay Chen Tan and Jibin Wu},
    year = {2025},
    volume = {abs/2502.09449},
    eprinttype = {arXiv},
    eprint = {2502.09449},
}

@article{neuromorphicsequentialarena,
    title = {Neuromorphic Sequential Arena: A Benchmark for Neuromorphic Temporal Processing}, 
    author = {Xinyi Chen and Chenxiang Ma and Yujie Wu and Kay Chen Tan and Jibin Wu},
    year = {2025},
    volume = {abs/2505.22035},
    eprinttype = {arXiv},
    eprint = {2505.22035},
}

Please file a report on our GitHub Issues page or contact us at [email protected] if you encounter any problems or have suggestions.
