- "Finding Visual Saliency in Continuous Spike Stream" was accepted by AAAI 2024!
- "Recurrent Spiking Transformers for Saliency Detection in Continuous Integration-based Visual Streams" is under review.

🎯 Code for "Recurrent Spiking Transformers for Saliency Detection in Continuous Integration-based Visual Streams"

- Journal version: "Recurrent Spiking Transformers for Saliency Detection in Continuous Integration-based Visual Streams". Code for the journal version will be available soon!
- torch >= 1.8.0
- torchvision >= 0.9.0
- ...
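Note that the version floors above cannot be checked with plain string comparison ("1.10.0" sorts before "1.8.0" lexicographically). A small helper for comparing dotted version strings numerically — a convenience sketch, not part of this repo:

```python
def meets_minimum(installed: str, required: str) -> bool:
    """Return True if a dotted version string satisfies a minimum,
    e.g. meets_minimum("1.10.0", "1.8.0") -> True. Local build
    suffixes such as "+cu111" are ignored."""
    def parts(version: str):
        # keep only the numeric dotted components
        return [int(p) for p in version.split("+")[0].split(".") if p.isdigit()]
    return parts(installed) >= parts(required)
```

After installation you could, for example, pass `torch.__version__` as the first argument to verify the environment.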
To install requirements, run:

```shell
conda create -n svs python=3.7
pip install -r requirements.txt
```

Download the SVS[3tqk] dataset, then organize the data in the following format:
```
root_dir
|----SpikeData
     |----00001
     |     |-----spike_label_format
     |     |-----spike_numpy
     |     |-----spike_repr
     |     |-----label
     |----00002
     |     |-----spike_label_format
     |     |-----spike_numpy
     |     |-----spike_repr
     |     |-----label
     |----...
```
Here `label` contains the saliency labels, `spike_numpy` contains the compressed spike data, `spike_repr` contains the interval spike representation, and `spike_label_format` contains the instance labels.
To train the model on the SVS dataset, first set the dataset root `cfg.DATA.ROOT` in `config.py` (`--step` enables multi-step mode and `--clip` enables the multi-step loss), then run:

```shell
python train.py --gpu ${GPU-IDS} --exp_name ${experiment} --step --clip
```

Download the model pretrained on the SVS dataset: multi_step[vn2x].

```shell
python inference.py --checkpoint ${./multi_step.pth} --results ${./results/SVS} --step
```

Download the model pretrained on the SVS dataset: single_step[scc0].

```shell
python inference.py --checkpoint ${./single_step.pth} --results ${./results/SVS}
```

The results will be saved as indexed PNG files at ${results}/SVS.
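For downstream evaluation, the saved indexed PNGs can be turned back into binary masks. A minimal sketch using Pillow — the path and threshold are illustrative, not fixed by this repo:

```python
import numpy as np
from PIL import Image

def load_mask(png_path, thresh=128):
    """Read a saliency map saved as an indexed ("P" mode) PNG and
    threshold it into a {0, 1} uint8 mask."""
    gray = np.asarray(Image.open(png_path).convert("L"))  # palette -> grayscale
    return (gray >= thresh).astype(np.uint8)
```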
Additionally, you can adjust other configuration parameters in `config.py`.
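For example, pointing the loader at your dataset copy only requires editing the root path; the excerpt below is illustrative, and everything other than `cfg.DATA.ROOT` (which is named above) is a hypothetical field:

```python
# config.py (excerpt, assumed structure; only cfg.DATA.ROOT is
# guaranteed by the instructions above)
cfg.DATA.ROOT = "/data/SVS/SpikeData"  # dataset root organized as shown above
```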
This codebase is built upon the official DCFNet repository and the official Spikformer repository. We adapt the code from eval-co-sod to evaluate the results.