SRFBN-S ×4 Super-Resolution

This repository implements the Super-Resolution Feedback Network (SRFBN-S) small variant (T=4, G=3, m=32) for ×4 image upsampling, trained on DIV2K and evaluated on Urban100.
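To make the small-variant hyperparameters concrete, here is a minimal, self-contained schematic of the feedback unrolling: the same feedback block runs for T = 4 steps, each step reusing the LR features plus the previous hidden state and emitting its own SR estimate on top of a bicubic skip connection. The layer shapes below are illustrative stand-ins, not the actual code in src/model.py.

    # Schematic of SRFBN-S feedback unrolling (illustrative only, not src/model.py).
    import torch
    import torch.nn as nn

    T, m, scale = 4, 32, 4                     # T feedback steps, m base channels, ×4

    feat_in = nn.Conv2d(3, m, 3, padding=1)    # shallow LR feature extractor
    feedback = nn.Sequential(                  # stand-in for the feedback block (G = 3 groups in the paper)
        nn.Conv2d(2 * m, m, 3, padding=1), nn.PReLU(),
        nn.Conv2d(m, m, 3, padding=1), nn.PReLU(),
    )
    upsample = nn.ConvTranspose2d(m, 3, kernel_size=8, stride=4, padding=2)  # ×4 deconvolution
    bicubic = nn.Upsample(scale_factor=scale, mode='bicubic', align_corners=False)

    lr = torch.randn(1, 3, 40, 40)             # a 40×40 LR patch
    f_in = feat_in(lr)
    hidden = torch.zeros_like(f_in)            # feedback state starts at zero
    outputs = []
    for _ in range(T):
        hidden = feedback(torch.cat([f_in, hidden], dim=1))
        outputs.append(upsample(hidden) + bicubic(lr))   # residual over the bicubic skip

    print([tuple(o.shape) for o in outputs])   # four SR estimates, each (1, 3, 160, 160)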

Project Structure

SRFBN-S-x4/
├── data/
│   ├── DIV2K_train_HR/         # DIV2K HR training images
│   └── HIGH_x4_Urban100/       # Urban100 HR evaluation images
├── logs/                       # PSNR/SSIM plots
├── outputs/                    # final_model.pth
├── src/
│   ├── datasets.py            # Dataset classes with on-the-fly bicubic downsampling
│   ├── model.py               # SRFBN-S network implementation
│   ├── utils.py               # PSNR/SSIM metrics and helpers
│   ├── train.py               # training loop
│   └── test.py                # evaluation on Urban100
└── README.md                 

Running

  1. Train:

    python src/train.py --hr_dir data/DIV2K_train_HR
    • Trains for 37 epochs (the schedule was originally 300 epochs with LR = 1e-4, halved at epoch 200).
    • Saves plots to logs/train_metrics.png and model weights to outputs/final_model.pth.
  2. Test:

    python src/test.py --hr_dir data/HIGH_x4_Urban100 \
                       --model_path outputs/final_model.pth \
                       --save_sr
    
    • Prints average PSNR/SSIM on Urban100.
    • Add --save_sr (as in the command above) to write the super-resolved images to outputs/sr_images/.

Mistakes & Iterations

During development we encountered and resolved several issues:

  1. Shape Mismatch in model.py:

    • Problem: The ConvTranspose2d parameters (kernel, stride, padding) were hard-coded for ×2, so an 80-pixel input produced a 160-pixel output instead of the 320 pixels expected at ×4, breaking the addition with the bicubic skip connection.
    • Fix: Made the deconvolution kernel size, stride, and padding depend on the scale factor (2, 3, or 4), so the deconvolution output matches the bicubic skip connection exactly (see the first sketch after this list).
  2. SSIM Window Size Exceeds Image Extent:

    • Problem: Small patches (<7×7) from random cropping triggered ValueError: win_size exceeds image extent in skimage.metrics.structural_similarity.
    • Fix: Updated utils.ssim to compute a dynamic odd win_size no larger than min(height, width) and to wrap the call in try/except; on failure (or for crops that are too small) it falls back to PSNR (see the second sketch after this list).
  3. Data Loading Bottlenecks:

    • Initial: Used tensor-based resizing (torchvision.transforms.functional.resize) for LR generation, and reloaded test images each batch.
    • Optimizations (see the third sketch after this list):
      • Switched train-time downsampling to PIL’s Image.resize (faster C implementation).
      • Preloaded all Urban100 HR images into memory (in __init__ of SRTestDataset) to eliminate per-batch I/O during inference.
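Sketch for fix 1: a scale-dependent choice of ConvTranspose2d parameters. The (kernel, stride, padding) triples below are the standard SRFBN settings for ×2/×3/×4; the exact values used in src/model.py may differ.

    # Scale-dependent deconvolution parameters (sketch; src/model.py may differ).
    import torch
    import torch.nn as nn

    # scale -> (kernel_size, stride, padding); standard SRFBN choices
    DECONV_PARAMS = {2: (6, 2, 2), 3: (7, 3, 2), 4: (8, 4, 2)}

    def make_upsampler(channels: int, scale: int) -> nn.ConvTranspose2d:
        k, s, p = DECONV_PARAMS[scale]
        return nn.ConvTranspose2d(channels, channels, kernel_size=k, stride=s, padding=p)

    # ConvTranspose2d output size is (in - 1) * stride - 2 * padding + kernel,
    # so an 80-pixel input maps to 160 / 240 / 320 for scale 2 / 3 / 4.
    x = torch.randn(1, 32, 80, 80)
    for scale in (2, 3, 4):
        y = make_upsampler(32, scale)(x)
        assert y.shape[-1] == 80 * scale       # matches the bicubic skip resolution
        print(scale, tuple(y.shape))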
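Sketch for fix 2: an SSIM wrapper that clamps the window to the image extent and signals the caller to fall back to PSNR when SSIM cannot be computed. The function name and the H×W×C uint8 array layout are assumptions; the real utils.ssim may look different.

    # Robust SSIM wrapper (sketch; the real utils.ssim may differ).
    from typing import Optional
    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    def safe_ssim(hr: np.ndarray, sr: np.ndarray) -> Optional[float]:
        """SSIM with an odd window no larger than the image; None means 'fall back to PSNR'."""
        h, w = hr.shape[:2]
        win = min(7, h, w)
        if win % 2 == 0:                # win_size must be odd
            win -= 1
        if win < 3:                     # crop too small for a meaningful SSIM
            return None
        try:
            return structural_similarity(hr, sr, win_size=win,
                                         channel_axis=-1, data_range=255)
        except ValueError:
            return None

    # Caller-side fallback: report PSNR when SSIM is unavailable.
    hr = np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8)
    sr = np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8)
    score = safe_ssim(hr, sr)
    if score is None:
        score = peak_signal_noise_ratio(hr, sr, data_range=255)
    print(score)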
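Sketch for item 3: preloading the Urban100 HR images in the dataset constructor and generating LR inputs on the fly with PIL’s bicubic resize. The class and argument names are illustrative and may not match src/datasets.py exactly.

    # Preloaded test dataset with PIL-based bicubic downsampling (sketch;
    # names may not match src/datasets.py exactly).
    import os
    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import Dataset

    class SRTestDataset(Dataset):
        def __init__(self, hr_dir: str, scale: int = 4):
            self.scale = scale
            paths = sorted(os.path.join(hr_dir, f) for f in os.listdir(hr_dir)
                           if f.lower().endswith(('.png', '.jpg', '.bmp')))
            # Preload every HR image once so inference does no per-image disk I/O.
            self.hr_images = [Image.open(p).convert('RGB') for p in paths]

        def __len__(self):
            return len(self.hr_images)

        def __getitem__(self, idx):
            hr = self.hr_images[idx]
            w, h = hr.size
            w, h = w - w % self.scale, h - h % self.scale   # make size divisible by scale
            hr = hr.crop((0, 0, w, h))
            # On-the-fly bicubic downsampling via PIL (faster than tensor-based resize).
            lr = hr.resize((w // self.scale, h // self.scale), Image.BICUBIC)

            def to_tensor(im):
                arr = np.asarray(im, dtype=np.float32) / 255.0
                return torch.from_numpy(arr).permute(2, 0, 1)

            return to_tensor(lr), to_tensor(hr)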

License & Acknowledgements


Feel free to tweak the hyperparameters, extend the code to other scales, or integrate additional speed-ups (e.g., grouped convolutions, checkpointing). Thanks!
