This repository contains the implementation of the following paper, accepted at NeurIPS 2019.
This code implements our algorithm on the CIFAR-10 and CIFAR-100 datasets, using MobileNet V1 as the backbone. To use the code, simply run

```bash
bash mbv1/train.sh
```

Please cite this paper if you use it in your work (a minimal sketch of the splitting step follows the citation):
```bibtex
@article{liu2019splitting,
  title={Splitting Steepest Descent for Growing Neural Architectures},
  author={Liu, Qiang and Wu, Lemeng and Wang, Dilin},
  journal={arXiv preprint arXiv:1910.02366},
  year={2019}
}
```
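For intuition, here is a minimal sketch of the splitting step described in the paper: each neuron's splitting matrix (derived in the paper from second-order information of the loss) is eigendecomposed, and the neurons whose minimum eigenvalue is most negative are split into two offspring displaced along the corresponding eigenvector. The helper name, its arguments, and the assumption that the splitting matrices are precomputed are ours for illustration; this is not the repository's API.

```python
# Illustrative sketch only: names and signature are assumptions,
# not this repository's API.
import numpy as np

def split_neurons(weights, splitting_matrices, grow_ratio=0.3, eps=1e-2):
    """weights: list of per-neuron weight vectors (1-D arrays);
    splitting_matrices: list of the matching (dim, dim) splitting matrices."""
    scores, directions = [], []
    for S in splitting_matrices:
        vals, vecs = np.linalg.eigh(S)   # eigenvalues in ascending order
        scores.append(vals[0])           # minimum eigenvalue = splitting index
        directions.append(vecs[:, 0])    # its eigenvector = split direction
    # Grow the most "splittable" neurons: most negative index first, and
    # only while the index is negative (otherwise splitting cannot help).
    n_split = max(1, int(grow_ratio * len(weights)))
    chosen = {int(i) for i in np.argsort(scores)[:n_split] if scores[i] < 0}
    new_weights = []
    for i, w in enumerate(weights):
        if i in chosen:
            # Two offspring offset by +/- eps along the split direction;
            # in the network, their outgoing weights would each be halved.
            new_weights += [w + eps * directions[i], w - eps * directions[i]]
        else:
            new_weights.append(w)
    return new_weights
```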
Here is our follow-up work, Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks, accepted at NeurIPS 2020. It supports more splitting schemes and is much faster, because it approximates the splitting metrics using first-order (gradient) information instead of second-order information. [Link]
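To convey the first-order idea, the sketch below optimizes a small perturbation direction for one neuron with plain gradient descent and reads the splitting gain off the perturbed losses. This is a toy illustration under our own assumptions (the names `firefly_gain` and `loss_fn` are hypothetical, and the Hessian block it probes is only an analogue of the papers' splitting matrices), not the Firefly implementation.

```python
# Toy illustration: NOT the Firefly implementation, only the
# gradient-based flavor of scoring a candidate split.
import torch

def firefly_gain(loss_fn, weight, i, eps=1e-2, steps=10, lr=0.5):
    """Estimate, with gradients only, how much splitting neuron i can lower
    the loss. `weight` is an (n_neurons, dim) layer weight and `loss_fn`
    maps such a tensor to a scalar loss (both are assumed interfaces)."""

    def split_loss(d):
        # Average loss of two copies of the network in which row i is
        # displaced to weight[i] + d and weight[i] - d. By a second-order
        # Taylor expansion this is ~ loss(weight) + 0.5 * d^T H_i d, so
        # minimizing over d hunts for negative-curvature split directions.
        w_plus = weight.clone(); w_plus[i] = weight[i] + d
        w_minus = weight.clone(); w_minus[i] = weight[i] - d
        return 0.5 * (loss_fn(w_plus) + loss_fn(w_minus))

    delta = torch.randn_like(weight[i])
    delta = eps * delta / delta.norm()
    for _ in range(steps):
        d = delta.detach().requires_grad_(True)
        grad, = torch.autograd.grad(split_loss(d), d)
        with torch.no_grad():                    # projected gradient step,
            delta = d - lr * grad                # renormalized to the
            delta = eps * delta / delta.norm()   # eps-sphere for comparability
    with torch.no_grad():
        return (split_loss(delta) - loss_fn(weight)).item()  # < 0: split helps
```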
There is also an energy-aware fast-splitting version with more benchmarks implemented. [Link]
MIT License