DBConformer

Dual-Branch Convolutional Transformer for EEG Decoding

Ziwei Wang1, Hongbin Wang1, Tianwang Jia1, Xingyi He1, Siyang Li1, and Dongrui Wu1 📧

1 School of Artificial Intelligence and Automation, Huazhong University of Science and Technology

(📧) Corresponding Author


This repository contains the implementation of our paper "DBConformer: Dual-Branch Convolutional Transformer for EEG Decoding" and serves as a benchmark codebase for EEG decoding. We implemented and fairly evaluated 13 state-of-the-art EEG decoding models, spanning CNN-based, CNN-Transformer hybrid, and CNN-Mamba hybrid architectures.

Overview

📰 News: DBConformer has been accepted for publication in the IEEE Journal of Biomedical and Health Informatics (IEEE JBHI). The final version will be available soon. 🎉

📰 News: We've released the supplementary material for DBConformer.

📰 News: We've reproduced and added three recent EEG decoding baseline models: MSVTNet, MSCFormer, and TMSA-Net.

DBConformer is a dual-branch convolutional Transformer network tailored for EEG decoding:

  • T-Conformer: Captures temporal dependencies
  • S-Conformer: Models spatial patterns
  • A lightweight channel attention module further refines spatial representations by assigning data-driven importance to EEG channels
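The channel attention idea above can be illustrated with a minimal NumPy sketch: each EEG channel receives a data-driven score, the scores are softmax-normalised across channels, and the input is reweighted. The projection weights `w`, `b` and the time-average pooling are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def channel_attention(x, w, b):
    """Minimal channel-attention sketch (hypothetical parameters w, b).

    x: EEG batch of shape (batch, channels, time).
    Scores each channel from its time-averaged activity, normalises the
    scores with a softmax over channels, and reweights the input channels.
    """
    pooled = x.mean(axis=2)                        # (batch, channels)
    scores = pooled @ w + b                        # (batch, channels)
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return x * attn[:, :, None], attn

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 22, 256))   # 22 channels, as in BNCI2014001
w = rng.standard_normal((22, 22)) * 0.1
b = np.zeros(22)
y, attn = channel_attention(x, w, b)
print(y.shape)                          # attention weights sum to 1 per trial
```

In a trained model, channels carrying discriminative activity (e.g. sensorimotor electrodes for MI) would receive larger weights.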

Features

  • 🔀 Dual-branch parallel design for symmetric spatio-temporal modeling
  • 🧩 Plug-and-play channel attention for data-driven channel weighting
  • 📈 Strong generalization across CO, CV, and LOSO settings
  • 💡 Interpretable: channel attention aligns well with sensorimotor priors in MI
  • 🧮 8× fewer parameters than large CNN-Transformer baselines (e.g., EEG Conformer)

Comparison of network architectures among CNNs (EEGNet, SCNN, DCNN, etc.), traditional serial Conformers (EEG Conformer, CTNet, etc.), and the proposed DBConformer. DBConformer has two parallel branches that capture temporal and spatial characteristics.


Code Structure

DBConformer/
│
├── DBConformer_CO.py       # Main script for Chronological Order (CO) scenario
├── DBConformer_CV.py       # Main script for Cross-Validation (CV) scenario
├── DBConformer_LOSO.py     # Main script for Leave-One-Subject-Out (LOSO) scenario
│
├── models/                 # Model architectures (DBConformer and baselines)
│   ├── DBConformer.py      # Dual-branch Convolutional Transformer (Ours)
│   ├── EEGNet.py           # Classic CNN model
│   ├── SCNN.py             # Classic CNN model
│   ├── DCNN.py             # Classic CNN model
│   ├── FBCNet.py           # Frequency-aware CNN model
│   ├── ADFCNN.py           # Two-branch CNN model
│   ├── IFNet.py            # Frequency-aware CNN model
│   ├── EEGWaveNet.py       # Multi-scale CNN model
│   ├── SlimSeiz.py         # Serial CNN-Mamba baseline
│   ├── CTNet.py            # Serial CNN-Transformer baseline
│   ├── MSVTNet.py          # Serial CNN-Transformer baseline
│   ├── MSCFormer.py        # Serial CNN-Transformer baseline
│   ├── TMSA-Net.py         # Serial CNN-Transformer baseline
│   └── EEGConformer.py     # Serial CNN-Transformer baseline
│
├── data/                   # Datasets
│   ├── BNCI2014001/
│   └── ...
│
├── utils/                  # Helper functions and common utilities
│   ├── data_utils.py       # EEG preprocessing, etc.
│   ├── alg_utils.py        # Euclidean Alignment, etc.
│   ├── network.py          # Backbone definition
│   └── ...
│
└── README.md
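The `utils/alg_utils.py` entry mentions Euclidean Alignment (EA), a common preprocessing step for cross-subject EEG decoding. As a rough sketch of the standard EA technique (not necessarily this repository's exact implementation): each trial is whitened by the inverse square root of the subject's mean spatial covariance, so that the aligned mean covariance becomes the identity across subjects.

```python
import numpy as np

def euclidean_alignment(trials):
    """Standard Euclidean Alignment (EA) sketch for cross-subject EEG.

    trials: array of shape (n_trials, channels, time) from one subject.
    Each trial is left-multiplied by R^{-1/2}, where R is the mean
    spatial covariance, so the aligned mean covariance is the identity.
    """
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    R = covs.mean(axis=0)
    # Matrix inverse square root via eigendecomposition (R is symmetric PSD)
    vals, vecs = np.linalg.eigh(R)
    R_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.stack([R_inv_sqrt @ t for t in trials])

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 22, 256))   # 10 trials, 22 channels
Xa = euclidean_alignment(X)
mean_cov = np.mean([t @ t.T / t.shape[1] for t in Xa], axis=0)
# mean_cov is (numerically) the 22x22 identity after alignment
```

Aligning each subject this way reduces inter-subject distribution shift before training cross-subject (LOSO) models.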

Baselines

Ten EEG decoding models were reproduced and compared with the proposed DBConformer in the paper (with MSVTNet, MSCFormer, and TMSA-Net added to this repository afterwards). DBConformer achieves state-of-the-art performance.

  • CNNs: EEGNet, SCNN, DCNN, FBCNet, ADFCNN, IFNet, EEGWaveNet
  • Serial Conformers: CTNet, EEG Conformer
  • CNN-Mamba: SlimSeiz

Datasets

DBConformer is evaluated on MI classification and seizure detection tasks. The MI datasets can be downloaded from MOABB, and the NICU dataset from its public release. The processed BNCI2014001 dataset can be found in MVCNet.

  • Motor Imagery:
    • BNCI2014001
    • BNCI2014004
    • Zhou2016
    • Blankertz2007
    • BNCI2014002
  • Seizure Detection:
    • CHSZ
    • NICU

Experimental Scenarios

DBConformer supports four standard EEG decoding paradigms:

  • CO (Chronological Order): Within-subject. EEG trials were partitioned strictly by temporal sequence, with the first 80% used for training and the remaining 20% for testing.
  • CV (Cross-Validation): Within-subject, stratified 5-fold validation. The data partitions were structured chronologically while maintaining class balance.
  • LOSO (Leave-One-Subject-Out): Cross-subject generalization evaluation. EEG trials from one subject were reserved for testing, while all other subjects' trials were combined for training.
  • CD (Cross-Dataset): Cross-dataset generalization evaluation. Training and testing were performed on distinct EEG datasets, e.g., training on BNCI2014001 and testing on BNCI2014004. The CD results are shown in Table S1 of the supplementary material.
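The CO split described above is simple but easy to get wrong (shuffling before splitting would leak temporally adjacent trials into the test set). A minimal sketch, assuming trials are already sorted by acquisition time:

```python
def chronological_split(trials, labels, train_ratio=0.8):
    """CO-scenario split sketch: the first train_ratio of a subject's
    trials go to training, the rest to testing, preserving temporal
    order (no shuffling, so no temporal leakage)."""
    cut = int(len(trials) * train_ratio)
    return (trials[:cut], labels[:cut]), (trials[cut:], labels[cut:])

# Stand-ins for 100 EEG trials of one subject, in chronological order
trials = list(range(100))
labels = [i % 2 for i in trials]
(train_x, train_y), (test_x, test_y) = chronological_split(trials, labels)
print(len(train_x), len(test_x))  # 80 20
```

The LOSO scenario instead holds out all trials of one subject and trains on the remaining subjects' data.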

Visualizations

Effect of Dual-Branch Modeling

To further evaluate the impact of the dual-branch architecture, we conducted feature visualization experiments using t-SNE. Features extracted by T-Conformer (temporal branch only) and DBConformer (dual-branch) were compared on four MI datasets.


Visualization of Spatio-Temporal Self-Attention

To further examine the interpretability of DBConformer, we visualized the self-attention matrices learned in both temporal and spatial branches on BNCI2014001, BNCI2014002, and OpenBMI datasets.


Interpretability of Channel Attention

To investigate the interpretability of the proposed channel attention module, we visualized the attention scores assigned to each EEG channel across 32 trials (a batch) from four MI datasets. BNCI2014004 was excluded from this analysis, as it only contains the C3, Cz, and C4 channels and therefore lacks the spatial coverage needed for attention comparison.


Sensitivity Analysis on Architectural Design

We further conducted a sensitivity analysis to explore how architectural design choices affect DBConformer's performance.


📄 Citation

If you find this work helpful, please consider citing our paper:

@article{wang2025dbconformer,
  author  = {Ziwei Wang and Hongbin Wang and Tianwang Jia and Xingyi He and Siyang Li and Dongrui Wu},
  journal = {IEEE Journal of Biomedical and Health Informatics},
  title   = {DBConformer: Dual-branch convolutional Transformer for EEG decoding},
  year    = {2025},
  note    = {Early Access},
  pages   = {1--14},
  doi     = {10.1109/JBHI.2025.3622725},
}

🙌 Acknowledgments

Special thanks to the authors of the open-source EEG decoding models: EEGNet, IFNet, EEG Conformer, FBCNet, CTNet, ADFCNN, EEGWaveNet, SlimSeiz, MSVTNet, MSCFormer, and TMSA-Net.

We appreciate your interest and patience. Feel free to raise issues or pull requests for questions or improvements.
