A Quantization Framework for PyTorch

A framework for fake quantization in PyTorch, implementing several quantization-aware and post-training quantization methods. The following methods are included (a sketch of the underlying fake-quantization mechanism follows the table):

| Method | Post-Training | Quantization-Aware | Related Code |
| --- | --- | --- | --- |
| Normalized | | | `normalized.py` |
| Moving Average | | | `movingaverage.py` |
| MinMax | | | `minmax.py` |
| MinMaxSTD | | | `minmaxstd.py` |
| SimplerMinMax | | | `minmax_simpler.py` |
| Mixed Precision | | | `gaussian_qscheduler.py` |
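For context: fake quantization keeps tensors in floating point but restricts them to the values representable at the target bit width, so sub-8-bit behaviour can be simulated during training. Below is a minimal sketch of this quantize-dequantize step, assuming a MinMax-style observer and a straight-through estimator; `FakeQuantize` and its parameters are illustrative names, not the repository's actual API.

```python
import torch
import torch.nn as nn


class FakeQuantize(nn.Module):
    """Quantize-dequantize a tensor to n_bits using an observed min/max range."""

    def __init__(self, n_bits: int = 4):
        super().__init__()
        self.n_levels = 2 ** n_bits - 1
        self.register_buffer("min_val", torch.tensor(float("inf")))
        self.register_buffer("max_val", torch.tensor(float("-inf")))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # MinMax-style observation: track the running extremes of the input.
        self.min_val = torch.minimum(self.min_val, x.detach().min())
        self.max_val = torch.maximum(self.max_val, x.detach().max())
        scale = (self.max_val - self.min_val).clamp(min=1e-8) / self.n_levels
        # Quantize-dequantize: x stays a float tensor, but only
        # 2**n_bits distinct values remain representable.
        q = torch.clamp(torch.round((x - self.min_val) / scale), 0, self.n_levels)
        x_q = q * scale + self.min_val
        # Straight-through estimator: the forward pass uses x_q, while the
        # backward pass treats the rounding as the identity function.
        return x + (x_q - x).detach()


fq = FakeQuantize(n_bits=4)
y = fq(torch.randn(8, 16))  # float tensor with at most 16 distinct values
```

The `x + (x_q - x).detach()` trick is what makes quantization-aware training possible: gradients flow through the non-differentiable rounding step as if it were the identity.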

Demonstrations of the methodologies can be found in:
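As a complementary illustration, the moving-average observer in the table above smooths the tracked range rather than keeping running extremes, which makes it less sensitive to outlier batches. This is a minimal sketch in the spirit of `movingaverage.py`, under assumed semantics; the actual implementation may differ.

```python
import torch


class MovingAverageObserver:
    """Track an exponential moving average of per-batch min/max."""

    def __init__(self, momentum: float = 0.9):
        self.momentum = momentum
        self.min_val = None
        self.max_val = None

    def update(self, x: torch.Tensor) -> None:
        batch_min, batch_max = x.detach().min(), x.detach().max()
        if self.min_val is None:
            # The first batch initialises the range directly.
            self.min_val, self.max_val = batch_min, batch_max
        else:
            # Exponential moving average: an outlier batch shifts the
            # range only slightly instead of expanding it permanently.
            m = self.momentum
            self.min_val = m * self.min_val + (1 - m) * batch_min
            self.max_val = m * self.max_val + (1 - m) * batch_max
```

The observed `min_val`/`max_val` can then be plugged into the same quantize-dequantize step shown above in place of the running extremes.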

Citations

```bibtex
@article{kirtas2022quantization,
  title={Quantization-aware training for low precision photonic neural networks},
  author={Kirtas, Manos and Oikonomou, Athina and Passalis, Nikolaos and Mourgias-Alexandris, George and Moralis-Pegios, Miltiadis and Pleros, Nikos and Tefas, Anastasios},
  journal={Neural Networks},
  volume={155},
  pages={561--573},
  year={2022},
  publisher={Elsevier}
}

@article{kirtas2023mixed,
  title={Mixed-precision quantization-aware training for photonic neural networks},
  author={Kirtas, Manos and Passalis, Nikolaos and Oikonomou, Athina and Moralis-Pegios, Miltos and Giamougiannis, George and Tsakyridis, Apostolos and Mourgias-Alexandris, George and Pleros, Nikolaos and Tefas, Anastasios},
  journal={Neural Computing and Applications},
  volume={35},
  number={29},
  pages={21361--21379},
  year={2023},
  publisher={Springer}
}
```

Acknowledgements

This project has received funding from the European Union’s Horizon 2020 research and innovation program under Grant Agreement No 871391 (PlasmoniAC). This publication reflects the authors’ views only. The European Commission is not responsible for any use that may be made of the information it contains.
