This repository serves as supporting material for the paper *Self-Attention-Based Contextual Modulation Improves Neural System Identification*. The README is structured as follows:
- Dataset Sample
- Modeling
- Traditional (Simultaneous) Training
- Incremental Training
- Tuning Curves
- Calculating Neuronal Tuning Metrics: Correlation (CORR.) and Peak Tuning (PT)
- FCL Decomposition
- Attention Highlighting
## Dataset Sample
As the dataset has not been publicly released, we provide a small subsample of the monkey 1 site 1 (M1S1) data. The sample contains:
- Training images
- Training responses
- Validation images
- Validation responses
See the `analysis/data_sample.ipynb` Jupyter notebook for more details.
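As a rough sketch of how the four arrays above relate to each other, the snippet below builds a synthetic stand-in with the same train/validation layout. All shapes and counts here are placeholders for illustration; the actual M1S1 sample shapes are shown in `analysis/data_sample.ipynb`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes, NOT the real M1S1 dimensions.
n_train, n_val = 8, 2   # number of stimulus images (assumed)
h = w = 32              # image height/width (assumed)
n_neurons = 5           # number of recorded units (assumed)

train_images = rng.random((n_train, h, w))
train_responses = rng.random((n_train, n_neurons))
val_images = rng.random((n_val, h, w))
val_responses = rng.random((n_val, n_neurons))

# Each row of responses corresponds to one stimulus image.
assert train_images.shape[0] == train_responses.shape[0]
assert val_images.shape[0] == val_responses.shape[0]
```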
## Modeling
Models can be found in the `modeling` folder.
## Traditional (Simultaneous) Training
Traditional training scripts can be found in the `training` folder.
## Incremental Training
Incremental training scripts can also be found in the `training` folder. Note that these scripts incrementally load parameters from already-trained models.
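The incremental loading step can be sketched as copying matching parameters from a trained model into a new model that adds extra components, while the new components keep their fresh initialization. This is a generic illustration using plain dicts (the repository's scripts operate on actual model checkpoints); the function and parameter names are hypothetical.

```python
import numpy as np

def load_incrementally(new_params, pretrained_params):
    """Copy pretrained weights into a new model's parameter dict.

    Parameters whose names and shapes match the pretrained model are
    overwritten in place; parameters unique to the new model keep their
    fresh initialization. Returns the names that were loaded.
    """
    loaded = []
    for name, value in pretrained_params.items():
        if name in new_params and new_params[name].shape == value.shape:
            new_params[name] = value.copy()
            loaded.append(name)
    return loaded

# Toy example: the new model adds an attention block on top of a trained core.
pretrained = {"core.w": np.ones((3, 3))}
new_model = {"core.w": np.zeros((3, 3)), "attn.w": np.zeros((2, 2))}
loaded = load_incrementally(new_model, pretrained)
# "core.w" is reused from the trained model; "attn.w" stays freshly initialized.
```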
## Tuning Curves
Scripts for generating tuning curves can be found in the `analysis` folder. They produce both individual-neuron tuning curves and population-average tuning curves.
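A minimal sketch of both curve types, assuming responses are organized as trials labeled by stimulus condition (the actual stimulus parameterization is defined in the `analysis` scripts):

```python
import numpy as np

def tuning_curve(responses, conditions):
    """Average one neuron's responses within each stimulus condition.

    responses:  (n_trials,) responses of a single neuron (assumed layout)
    conditions: (n_trials,) integer condition label per trial
    Returns (condition labels, per-condition mean responses).
    """
    labels = np.unique(conditions)
    means = np.array([responses[conditions == c].mean() for c in labels])
    return labels, means

rng = np.random.default_rng(0)
conds = np.repeat(np.arange(4), 5)        # 4 conditions x 5 trials (toy data)
pop = rng.random((3, conds.size))         # 3 neurons (toy data)

# Individual-neuron curves, stacked into (n_neurons, n_conditions).
curves = np.stack([tuning_curve(r, conds)[1] for r in pop])
# Population-average tuning curve: mean of the per-neuron curves.
population_curve = curves.mean(axis=0)
```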
## Calculating Neuronal Tuning Metrics: Correlation (CORR.) and Peak Tuning (PT)
See the `analysis` folder. CORR. is computed with the Pearson correlation function; PT_J and PT_S are based on ranking responses and taking the top 10 (1%).
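The two metrics can be sketched as follows. CORR. is a standard Pearson correlation between predicted and observed responses. For peak tuning, the exact PT_J and PT_S definitions are given in the paper; the `peak_tuning` function below only illustrates the ranking idea (compare the mean of the top 10 responses, roughly 1% of stimuli, against the overall mean) and should not be read as the paper's precise formula.

```python
import numpy as np

def corr_metric(predicted, observed):
    """CORR.: Pearson correlation between predicted and observed responses."""
    return np.corrcoef(predicted, observed)[0, 1]

def peak_tuning(responses, top_k=10):
    """Illustrative peak-tuning-style score (NOT the exact PT_J / PT_S):
    rank all responses and compare the mean of the top_k (10, i.e. ~1% of
    stimuli in the paper's setting) to the overall mean response."""
    top = np.sort(responses)[-top_k:]
    return top.mean() / responses.mean()
```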
## FCL Decomposition
Scripts for extracting contributions can be found in the `analysis` folder. This requires modifying the model to save extra parameters.
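The core idea of decomposing a fully connected layer (FCL) can be sketched generically: for `y = W @ x + b`, the contribution of input feature `j` to output unit `i` is `W[i, j] * x[j]`, and summing contributions (plus bias) recovers the output exactly. The repository's scripts obtain the needed intermediates by modifying the model to save them; the function below is only a self-contained illustration of the decomposition itself.

```python
import numpy as np

def fcl_contributions(x, W, b=None):
    """Decompose a fully connected layer's output into per-input contributions.

    contrib[i, j] = W[i, j] * x[j], so contrib.sum(axis=1) + b == W @ x + b.
    """
    contrib = W * x[None, :]          # (n_out, n_in) elementwise products
    y = contrib.sum(axis=1)
    if b is not None:
        y = y + b
    return contrib, y

rng = np.random.default_rng(0)
x, W, b = rng.random(4), rng.random((3, 4)), rng.random(3)
contrib, y = fcl_contributions(x, W, b)
# Sanity check: the contributions sum back to the layer output.
assert np.allclose(y, W @ x + b)
```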
## Attention Highlighting
Scripts extract the HH map and then query its center position. This also requires modifying the model to save extra parameters.
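Assuming the HH map refers to a spatial self-attention map over the flattened feature grid, the query-the-center step can be sketched as below. Projection matrices are omitted (identity) purely for illustration, and all shapes are toy values; the repository's scripts read the actual map from parameters saved by the modified model.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def center_attention(features):
    """Illustrative sketch: build an (H*W, H*W) self-attention map from a
    flattened (H*W, d) feature map and return the row for the center
    position, i.e. where the center query attends across the grid.
    Q and K projections are taken as identity here for simplicity."""
    n, d = features.shape
    attn = softmax(features @ features.T / np.sqrt(d))  # (H*W, H*W) map
    center = n // 2
    return attn[center]

h = w = 5
feat = np.random.default_rng(0).random((h * w, 8))
center_map = center_attention(feat).reshape(h, w)  # highlight over the grid
```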