UniConvNet

Official PyTorch implementation of UniConvNet, from the following paper:

UniConvNet: Expanding Effective Receptive Field while Maintaining Asymptotically Gaussian Distribution for ConvNets of Any Scale.
ICCV 2025.
Yuhao Wang, Wei Xi
Xi'an Jiaotong University
[arXiv]


We propose UniConvNet, a pure ConvNet model constructed entirely from standard ConvNet modules. UniConvNet performs well at both lightweight and large model scales.

Catalog

  • ImageNet-1K Training Code
  • ImageNet-22K Pre-training Code
  • ImageNet-1K Fine-tuning Code
  • Downstream Transfer (Detection, Segmentation) Code (Coming soon ...)

Results and Pre-trained Models

ImageNet-1K trained models

| name | resolution | acc@1 | #params | FLOPs | model (Hugging Face) | model (Baidu) |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| UniConvNet-A | 224x224 | 77.0 | 3.4M | 0.589G | model | model |
| UniConvNet-P0 | 224x224 | 79.1 | 5.2M | 0.932G | model | model |
| UniConvNet-P1 | 224x224 | 79.6 | 6.1M | 0.895G | model | model |
| UniConvNet-P2 | 224x224 | 80.5 | 7.6M | 1.25G | model | model |
| UniConvNet-N0 | 224x224 | 81.6 | 10.2M | 1.65G | model | model |
| UniConvNet-N1 | 224x224 | 82.2 | 13.1M | 1.88G | model | model |
| UniConvNet-N2 | 224x224 | 82.7 | 15.0M | 2.47G | model | model |
| UniConvNet-N3 | 224x224 | 83.2 | 19.7M | 3.37G | model | model |
| UniConvNet-T | 224x224 | 84.2 | 30.3M | 5.1G | model | model |
| UniConvNet-T | 384x384 | 85.4 | 30.3M | 15.0G | model | model |
| UniConvNet-S | 224x224 | 84.5 | 50.0M | 8.48G | model | model |
| UniConvNet-S | 384x384 | 85.7 | 50.0M | 24.9G | model | model |
| UniConvNet-B | 224x224 | 85.0 | 97.6M | 15.9G | model | model |
| UniConvNet-B | 384x384 | 85.9 | 97.6M | 46.6G | model | model |
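
As a quick sanity check, the #params column above can be reproduced from a downloaded checkpoint. The sketch below is not part of the repository; it reuses the UniConvNet-A Hugging Face URL from the evaluation example further down, and the 'model' key is only an assumption about how the checkpoint is nested.

```python
# Unofficial sketch: sanity-check a checkpoint's parameter count against the table.
import torch

URL = "https://huggingface.co/ai-modelwithcode/UniConvNet/resolve/main/uniconvnet_a_1k_224.pth"

# Download (and cache) the checkpoint; load onto CPU.
ckpt = torch.hub.load_state_dict_from_url(URL, map_location="cpu")

# Checkpoints are often nested under a 'model' key; fall back to the raw dict
# if this particular file is a plain state dict (assumption, not verified).
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

n_params = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"parameters: {n_params / 1e6:.2f}M")  # should be close to 3.4M for UniConvNet-A
```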

ImageNet-22K trained models

| name | resolution | acc@1 | #params | FLOPs | 22k model (Hugging Face) | 22k model (Baidu) | 1k model (Hugging Face) | 1k model (Baidu) |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| UniConvNet-L | 384x384 | 88.2 | 201.8M | 100.1G | model | model | model | model |
| UniConvNet-XL | 384x384 | 88.4 | 226.7M | 115.2G | model | model | model | model |

Installation

Please check INSTALL.md for installation instructions.

Evaluation

We give an example evaluation command for an ImageNet-1K pre-trained UniConvNet-A:

Single-GPU

python main.py --model UniConvNet_A --eval true \
--resume https://huggingface.co/ai-modelwithcode/UniConvNet/resolve/main/uniconvnet_a_1k_224.pth \
--input_size 224 --drop_path 0.05 \
--data_path /path/to/imagenet-1k

Multi-GPU

python -m torch.distributed.launch --nproc_per_node=8 main.py \
--model UniConvNet_A --eval true \
--resume https://huggingface.co/ai-modelwithcode/UniConvNet/resolve/main/uniconvnet_a_1k_224.pth \
--input_size 224 --drop_path 0.05 \
--data_path /path/to/imagenet-1k

This should give

* Acc@1 77.030 Acc@5 93.364 loss 0.983
  • To evaluate other model variants, change --model, --resume, and --input_size accordingly. The URLs of the pre-trained models are given in the tables above.
  • Setting a model-specific --drop_path is not strictly required for evaluation, since timm's DropPath module is inactive in eval mode, but it is required for training. See TRAINING.md or our paper for the values used for the different models.
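
If you prefer to evaluate programmatically rather than through main.py, here is a minimal sketch. It assumes the repository follows the ConvNeXt codebase convention of registering its models with timm when the model definitions are imported; the module name `models`, the checkpoint's 'model' key, and the crop ratio are assumptions, not part of the official code.

```python
# Unofficial sketch: load a pre-trained UniConvNet-A and classify one image.
import timm
import torch
from PIL import Image
from timm.data import create_transform

import models  # hypothetical import: assumed to register UniConvNet_* with timm

model = timm.create_model("UniConvNet_A", num_classes=1000)
ckpt = torch.hub.load_state_dict_from_url(
    "https://huggingface.co/ai-modelwithcode/UniConvNet/resolve/main/uniconvnet_a_1k_224.pth",
    map_location="cpu",
)
model.load_state_dict(ckpt.get("model", ckpt))  # 'model' key is an assumption
model.eval()

# Standard ImageNet eval preprocessing at 224x224; crop_pct=0.875 is a common
# default, not a value taken from the paper.
transform = create_transform(input_size=224, crop_pct=0.875)
x = transform(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(x).softmax(dim=-1)
top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())
```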

Training

See TRAINING.md for training and fine-tuning instructions.

Acknowledgement

This repository is built on the timm library and the ConvNeXt and InternImage repositories.

License

This project is released under the MIT license. Please see the LICENSE file for more information.

Citation

If you find this repository helpful, please consider citing:

@inproceedings{wang2025uniconvnet,
  author    = {Yuhao Wang and Wei Xi},
  title     = {UniConvNet: Expanding Effective Receptive Field while Maintaining Asymptotically Gaussian Distribution for ConvNets of Any Scale},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025},
}
