Ziwei Wang1, Hongbin Wang1, Tianwang Jia1, Xingyi He1, Siyang Li1, and Dongrui Wu1 📧
1 School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
(📧) Corresponding Author
This repository contains the implementation of our paper: "DBConformer: Dual-Branch Convolutional Transformer for EEG Decoding", and serves as a benchmark codebase for EEG decoding models. We implemented and fairly evaluated 13 state-of-the-art EEG decoding models, covering CNN-based, CNN-Transformer hybrid, and CNN-Mamba hybrid architectures.
📰 News: DBConformer has been accepted for publication in the IEEE Journal of Biomedical and Health Informatics (IEEE JBHI). The final version will be available soon. Congratulations! 🎉
📰 News: We've released the supplementary material for DBConformer.
📰 News: We've reproduced and added three recent EEG decoding baseline models: MSVTNet, MSCFormer, and TMSA-Net.
DBConformer is a dual-branch convolutional Transformer network tailored for EEG decoding:
- T-Conformer: Captures temporal dependencies
- S-Conformer: Models spatial patterns
- A lightweight channel attention module further refines spatial representations by assigning data-driven importance to EEG channels
- Dual-branch parallel design for symmetric spatio-temporal modeling
- Plug-and-play channel attention for data-driven channel weighting
- Strong generalization across CO, CV, and LOSO settings
- Interpretability: learned attention aligns well with sensorimotor priors in MI
- 8× fewer parameters than large CNN-Transformer baselines (e.g., EEG Conformer)
Comparison of network architectures among CNNs (EEGNet, SCNN, DCNN, etc.), traditional serial Conformers (EEG Conformer, CTNet, etc.), and the proposed DBConformer. DBConformer uses two parallel branches to capture temporal and spatial characteristics.
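The released architecture lives in `models/DBConformer.py`; the snippet below is only a minimal, self-contained PyTorch sketch of the dual-branch idea with channel attention. All module names (`ChannelAttention`, `ConformerBranch`, `DualBranchSketch`), layer sizes, and hyperparameters here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the dual-branch idea (NOT the released implementation).
# Input: EEG batch of shape (batch, channels, time).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style weighting over EEG channels (illustrative)."""
    def __init__(self, n_channels, reduction=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))            # per-channel importance in [0, 1]
        return x * w.unsqueeze(-1)             # re-weight channels

class ConformerBranch(nn.Module):
    """Conv tokenizer followed by a Transformer encoder along one axis."""
    def __init__(self, in_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Conv1d(in_dim, d_model, kernel_size=25, padding=12)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                      # x: (batch, in_dim, length)
        tokens = self.proj(x).transpose(1, 2)  # (batch, length, d_model)
        return self.encoder(tokens).mean(dim=1)

class DualBranchSketch(nn.Module):
    """Parallel temporal (T-) and spatial (S-) branches with feature fusion."""
    def __init__(self, n_channels, n_times, n_classes, d_model=64):
        super().__init__()
        self.chan_att = ChannelAttention(n_channels)
        self.temporal = ConformerBranch(n_channels, d_model)  # tokens along time
        self.spatial = ConformerBranch(n_times, d_model)      # tokens along channels
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        x = self.chan_att(x)
        f_t = self.temporal(x)                 # temporal dependencies
        f_s = self.spatial(x.transpose(1, 2))  # spatial patterns
        return self.head(torch.cat([f_t, f_s], dim=-1))

# Shape check: 8 trials, 22 channels, 1000 time samples, 4 MI classes.
logits = DualBranchSketch(22, 1000, 4)(torch.randn(8, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```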
DBConformer/
│
├── DBConformer_CO.py        # Main script for Chronological Order (CO) scenario
├── DBConformer_CV.py        # Main script for Cross-Validation (CV) scenario
├── DBConformer_LOSO.py      # Main script for Leave-One-Subject-Out (LOSO) scenario
│
├── models/                  # Model architectures (DBConformer and baselines)
│   ├── DBConformer.py       # Dual-branch convolutional Transformer (Ours)
│   ├── EEGNet.py            # Classic CNN model
│   ├── SCNN.py              # Classic CNN model
│   ├── DCNN.py              # Classic CNN model
│   ├── FBCNet.py            # Frequency-aware CNN model
│   ├── ADFCNN.py            # Two-branch CNN model
│   ├── IFNet.py             # Frequency-aware CNN model
│   ├── EEGWaveNet.py        # Multi-scale CNN model
│   ├── SlimSeiz.py          # Serial CNN-Mamba baseline
│   ├── CTNet.py             # Serial CNN-Transformer baseline
│   ├── MSVTNet.py           # Serial CNN-Transformer baseline
│   ├── MSCFormer.py         # Serial CNN-Transformer baseline
│   ├── TMSA-Net.py          # Serial CNN-Transformer baseline
│   └── EEGConformer.py      # Serial CNN-Transformer baseline
│
├── data/                    # Datasets
│   ├── BNCI2014001/
│   └── ...
│
├── utils/                   # Helper functions and common utilities
│   ├── data_utils.py        # EEG preprocessing, etc.
│   ├── alg_utils.py         # Euclidean Alignment, etc.
│   ├── network.py           # Backbone definitions
│   └── ...
│
└── README.md
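`utils/alg_utils.py` is noted above as containing Euclidean Alignment (EA). For readers unfamiliar with EA, here is a minimal numpy sketch of the standard procedure (whitening each subject's trials by the inverse square root of their mean spatial covariance); the function name and shapes are illustrative and are not the repo's API.

```python
# Minimal sketch of Euclidean Alignment (EA); the repo's own routine is in
# utils/alg_utils.py, and this stand-alone version is for illustration only.
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(X):
    """Align one subject's trials so their mean spatial covariance becomes identity.

    X: (n_trials, n_channels, n_times) EEG trials from a single subject/session.
    """
    covs = np.stack([trial @ trial.T for trial in X])   # per-trial spatial covariance
    R = covs.mean(axis=0)                               # reference (mean) covariance
    R_inv_sqrt = fractional_matrix_power(R, -0.5)       # R^(-1/2)
    return np.stack([R_inv_sqrt @ trial for trial in X])

# Example: 100 trials, 22 channels, 1000 time samples.
X_aligned = euclidean_alignment(np.random.randn(100, 22, 1000))
```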
Ten EEG decoding models were reproduced and compared with the proposed DBConformer in the paper; three more baselines (MSVTNet, MSCFormer, and TMSA-Net) have since been added to this codebase. DBConformer achieves state-of-the-art performance.
- CNNs: EEGNet, SCNN, DCNN, FBCNet, ADFCNN, IFNet, EEGWaveNet
- Serial Conformers: CTNet, EEG Conformer
- CNN-Mamba: SlimSeiz
DBConformer is evaluated on MI classification and seizure detection tasks. The MI datasets can be downloaded from MOABB (a minimal loading sketch follows the list below), and the NICU seizure dataset is also publicly downloadable. The processed BNCI2014001 dataset can be found in MVCNet.
- Motor Imagery:
- BNCI2014001
- BNCI2014004
- Zhou2016
- Blankertz2007
- BNCI2014002
- Seizure Detection:
- CHSZ
- NICU
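As a convenience, below is a minimal MOABB snippet for fetching one MI dataset. The paradigm settings shown are assumptions for illustration; the preprocessing actually used in this repo lives in `utils/data_utils.py`.

```python
# Minimal MOABB sketch for downloading one MI dataset (illustrative settings;
# this repo's own preprocessing lives in utils/data_utils.py).
from moabb.datasets import BNCI2014001
from moabb.paradigms import MotorImagery

dataset = BNCI2014001()               # 9 subjects, 22 EEG channels, 4 MI classes
paradigm = MotorImagery(n_classes=4)  # MOABB's default filtering and epoching
X, y, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape)                        # (n_trials, n_channels, n_times)
```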
DBConformer supports four standard EEG decoding paradigms (an index-splitting sketch follows the list):
- CO (Chronological Order): Within-subject. EEG trials were partitioned strictly by temporal order, with the first 80% used for training and the remaining 20% for testing.
- CV (Cross-Validation): Within-subject, stratified 5-fold cross-validation. Folds were formed chronologically while maintaining class balance.
- LOSO (Leave-One-Subject-Out): Cross-subject generalization evaluation. EEG trials from one subject were reserved for testing, while all other subjects' trials were combined for training.
- CD (Cross-Dataset): Cross-dataset generalization evaluation. Training and testing were performed on distinct EEG datasets, e.g., training on BNCI2014001 and testing on BNCI2014004. The CD results are shown in Table S1 of the supplementary material.
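The sketch below expresses the CO, CV, and LOSO descriptions above as index-level splits. The actual partitioning logic is in the corresponding `DBConformer_*.py` scripts, so treat these helpers as illustrative.

```python
# Illustrative index splits for CO / CV / LOSO (the released scripts
# DBConformer_CO/CV/LOSO.py implement the actual partitioning).
import numpy as np
from sklearn.model_selection import StratifiedKFold

def co_split(n_trials, train_ratio=0.8):
    """Chronological Order: first 80% of a subject's trials train, last 20% test."""
    cut = int(n_trials * train_ratio)
    return np.arange(cut), np.arange(cut, n_trials)

def cv_splits(y, n_folds=5):
    """Stratified 5-fold CV; folds keep chronological order and class balance."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=False)
    return list(skf.split(np.zeros(len(y)), y))

def loso_split(subject_ids, test_subject):
    """Leave-One-Subject-Out: one subject's trials test, all others train."""
    subject_ids = np.asarray(subject_ids)
    return (np.where(subject_ids != test_subject)[0],
            np.where(subject_ids == test_subject)[0])
```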
To further evaluate the impact of the dual-branch architecture, we conducted feature visualization experiments using t-SNE. Features extracted by T-Conformer (temporal branch only) and DBConformer (dual-branch) were compared on four MI datasets.
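A minimal sketch of such a comparison, assuming per-trial feature matrices have already been extracted from each branch (the random arrays below are placeholders):

```python
# t-SNE sketch: project per-trial features from the temporal branch alone and
# from the fused dual-branch model into 2-D (placeholder feature matrices).
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(features, seed=42):
    """features: (n_trials, feature_dim) -> (n_trials, 2) embedding."""
    return TSNE(n_components=2, perplexity=30, random_state=seed).fit_transform(features)

emb_temporal = embed_2d(np.random.randn(200, 64))   # T-Conformer features (placeholder)
emb_dual = embed_2d(np.random.randn(200, 128))      # DBConformer features (placeholder)
```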
To further examine the interpretability of DBConformer, we visualized the self-attention matrices learned in both temporal and spatial branches on BNCI2014001, BNCI2014002, and OpenBMI datasets.
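A minimal plotting sketch, assuming an attention matrix has already been extracted from one encoder layer (the random matrix below is a placeholder):

```python
# Heatmap sketch for a learned self-attention matrix (placeholder values);
# for the spatial branch, rows/columns correspond to EEG channels.
import numpy as np
import matplotlib.pyplot as plt

attn = np.random.rand(22, 22)                # placeholder channel-by-channel attention
fig, ax = plt.subplots(figsize=(4, 4))
im = ax.imshow(attn, cmap="viridis")
ax.set_xlabel("Key index")
ax.set_ylabel("Query index")
fig.colorbar(im, ax=ax, label="Attention weight")
plt.show()
```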
To investigate the interpretability of the proposed channel attention module, we visualized the attention scores assigned to each EEG channel across 32 trials (one batch) from four MI datasets. BNCI2014004 was excluded from this analysis, as it only contains the C3, Cz, and C4 channels and therefore lacks the spatial coverage needed for attention comparison.
We further conducted a sensitivity analysis to explore how architectural design choices affect DBConformer's performance.
If you find this work helpful, please consider citing our paper:
@Article{wang2025dbconformer,
author = {Ziwei Wang and Hongbin Wang and Tianwang Jia and Xingyi He and Siyang Li and Dongrui Wu},
journal = {IEEE Journal of Biomedical and Health Informatics},
title = {DBConformer: Dual-branch convolutional Transformer for EEG decoding},
year = {2025},
note = {Early Access},
pages = {1--14},
doi = {10.1109/JBHI.2025.3622725},
}
Special thanks to the authors of the open-source implementations of the following EEG decoding models: EEGNet, IFNet, EEG Conformer, FBCNet, CTNet, ADFCNN, EEGWaveNet, SlimSeiz, MSVTNet, MSCFormer, and TMSA-Net.
We appreciate your interest and patience. Feel free to raise issues or pull requests for questions or improvements.

