This is the official PyTorch code for the paper:
Learning High-frequency Feature Enhancement and Alignment for Pan-sharpening
Yingying Wang, Yunlong Lin, Ge Meng, Zhenqi Fu, Yuhang Dong, Linyu Fan, Hedeng Yu, Xinghao Ding*, Yue Huang (* indicates corresponding author)
- SOTA performance: The proposed HFEAN outperforms state-of-the-art pan-sharpening methods on multiple satellite datasets.
- Ubuntu >= 18.04
- CUDA >= 11.0
- NumPy
- Matplotlib
- OpenCV
- PyYAML
# git clone this repository
git clone https://github.com/Gracewangyy/HFEAN.git
cd HFEAN
# create new anaconda env
conda create -n HFEAN python=3.8
conda activate HFEAN
pip install torch numpy matplotlib opencv-python pyyaml
The training and testing datasets are available at Data.
The directory structure is arranged as follows:
Data
|- WV3_data
|- train128
|- pan
|- xxx.tif
|- ms
|- xxx.tif
|- test128
|- pan
|- ms
|- WV2_data
|- train128
|- pan
|- ms
|- test128
|- pan
|- ms
|- GF2_data
|- ...
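Given this layout, each sample is a PAN/MS file pair that shares a filename stem. A minimal sketch of how such pairs could be collected for a split directory (the helper name `paired_samples` is ours, not part of the repository's API):

```python
from pathlib import Path


def paired_samples(split_dir):
    """Return sorted (pan_path, ms_path) pairs that share a filename stem.

    Assumes the layout shown above: split_dir/pan/xxx.tif and
    split_dir/ms/xxx.tif, where xxx is the shared sample name.
    """
    split_dir = Path(split_dir)
    pan = {p.stem: p for p in (split_dir / "pan").glob("*.tif")}
    ms = {p.stem: p for p in (split_dir / "ms").glob("*.tif")}
    # Keep only stems present in both folders, so unmatched files are skipped.
    common = sorted(pan.keys() & ms.keys())
    return [(pan[k], ms[k]) for k in common]
```

For example, `paired_samples("Data/WV3_data/train128")` would yield the matched PAN/MS pairs of the WorldView-3 training split.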
To test the trained pan-sharpening model, you can run the following command:
python test.py
The configuration options are stored in the option.yaml file and in test.py. Each option is explained below:
- algorithm: The algorithm/model to use for testing.
- type: The type of testing.
- testdata_dir: The location of the test data.
- source_ms: The source of the multi-spectral data.
- source_pan: The source of the panchromatic data.
- model: The path of the model weights used for testing.
- save_dir: The location to save the test results.
- test_config_path: The path of the model configuration file.
- upscale: The upscale factor.
- batch_size: The size of each batch.
- patch_size: The size of each patch.
- data_augmentation: Whether to use data augmentation.
- n_colors: The number of color channels.
- rgb_range: The range of the RGB values.
- normalize: Whether to normalize the data.
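Putting the options above together, an option.yaml might look like the sketch below. All paths and values here are illustrative assumptions, not the repository's shipped defaults; consult the option.yaml in the repo for the actual settings:

```yaml
# Illustrative configuration sketch (values are assumptions)
algorithm: HFEAN                        # model to test
type: test                              # type of testing
testdata_dir: ./Data/WV3_data/test128   # location of the test data
source_ms: ms                           # multi-spectral subfolder
source_pan: pan                         # panchromatic subfolder
model: ./checkpoints/HFEAN_WV3.pth      # hypothetical weights path
save_dir: ./results                     # where to save test results
test_config_path: ./option.yaml         # model configuration file
upscale: 4                              # upscale factor
batch_size: 1
patch_size: 128
data_augmentation: false
n_colors: 4                             # number of spectral channels
rgb_range: 2047                         # pixel value range (assumed 11-bit)
normalize: true
```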
Our work is based on the following projects:
If you find HFEAN useful in your research, please cite our paper:
@inproceedings{wang2023learning,
title={Learning High-frequency Feature Enhancement and Alignment for Pan-sharpening},
author={Wang, Yingying and Lin, Yunlong and Meng, Ge and Fu, Zhenqi and Dong, Yuhang and Fan, Linyu and Yu, Hedeng and Ding, Xinghao and Huang, Yue},
booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
pages={358--367},
year={2023}
}