Pico-CNN is an (almost) library-free and lightweight neural network inference framework for embedded systems (Linux and bare-metal), implemented in C++. Neural networks can be trained with any framework that supports export to ONNX (Open Neural Network Exchange, onnx.ai) and can then be deployed using Pico-CNN's ONNX import.
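For example, a network trained in PyTorch can be exported to ONNX roughly like this (a minimal sketch; the model, input shape, and file name are placeholders, and any framework with ONNX export works just as well):

```python
import torch
import torchvision

# Placeholder model; any trained torch.nn.Module can be exported the same way.
model = torchvision.models.alexnet(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input that fixes the graph's input shape

torch.onnx.export(model, dummy, "alexnet.onnx")
```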
Please read the whole document carefully!
Pico-CNN needs libjpeg to build the image-based examples. On Ubuntu/Debian:

```bash
sudo apt install libjpeg-dev
```

If you want to use cppunit:

```bash
sudo apt install libcppunit-dev
```

On Fedora/CentOS:

```bash
sudo yum install libjpeg-devel
```

If you want to use cppunit, build it from source:

```bash
git clone --branch cppunit-1.14.0 git://anongit.freedesktop.org/git/libreoffice/cppunit/
cd cppunit
./autogen.sh
./configure
make
sudo make install
sudo ln -s /usr/local/include/cppunit /usr/local/include/CppUnit
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```

On macOS (with Homebrew):

```bash
brew install jpeg
```

If you want to use cppunit:

```bash
brew install cppunit
```

Depending on the (system) compiler you are using, you might have to add a specific C++ standard to the CFLAGS variable in the generated Makefile. If you are using an older version of g++, choosing -std=c++11 or -std=gnu++11 as the C++ standard should suffice.
Install Python 3.6.5, for example with pyenv (other Python 3.6 versions will probably work as well). Then install the required Python packages from the requirements.txt file located in pico-cnn/onnx_import:
```bash
cd onnx_import
pip install -r requirements.txt
```

Of course, you can also install the requirements into a virtual environment like this:
```bash
cd onnx_import
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

In the future you always have to activate the environment:
```bash
cd onnx_import
source venv/bin/activate
```

To build and run the unit tests:

```bash
cd test
make tests
./tests
```

If you want to use scripts from the pico-cnn/util folder, you should create a virtual environment for it and install the respective Python packages:
```bash
cd utils
python -m venv venv
source venv/bin/activate
pip install -r util_requirements.txt
```

In the future you always have to activate the environment:
```bash
cd utils
source venv/bin/activate
```

Pico-CNN was tested with the following neural networks:
- LeNet
- MNIST Multi-Layer-Perceptron (MLP)
- MNIST Perceptron
- AlexNet
- VGG-16
- VGG-19
- MobileNet-V2
- Inception-V3
- Inception-ResNet-V2
- TC-ResNet-8
For every imported ONNX model, a dummy_input.cpp will be generated, which uses random numbers as input and calls the network, so that no network-specific input data has to be downloaded to run inference:
```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/model.onnx
cd generated_code/model
make dummy_input
./dummy_input network.weights.bin NUM_RUNS GENERATE_ONCE
```

GENERATE_ONCE = 0 will lead to new random input for each inference run.
GENERATE_ONCE = 1 will lead to the same random input for each inference run.
You can monitor the overall progress of the current run by adding -DDEBUG to the generated Makefile.
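NUM_RUNS is the number of inference runs to perform. If you are unsure which input dimensions the generated code expects, you can inspect the ONNX graph in Python, for example with the onnx package (a sketch; model.onnx is a placeholder):

```python
import onnx

model = onnx.load("model.onnx")
for graph_input in model.graph.input:
    dims = [d.dim_value for d in graph_input.type.tensor_type.shape.dim]
    print(graph_input.name, dims)  # e.g. "data" [1, 3, 224, 224]
```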
A reference_input.cpp will also be generated; it can be used to validate the imported network against the reference input/output that is provided for ONNX models from the official ONNX model zoo. The data has to be preprocessed with the following script (it is assumed that the pico-cnn/utils specific virtual environment is activated):

```bash
python util/parse_onnx_reference_files.py --input PATH_TO_REFERENCE_DATA
```

The script will generate an input_X.data and an output_X.data file, which can then be used like this:
```bash
cd onnx_import/generated_code/model
make reference_input
./reference_input network.weights.bin PATH_TO_SAMPLE_DATA/input_X.data PATH_TO_SAMPLE_DATA/output_X.data
```
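For orientation: the ONNX model zoo ships this reference data as protobuf tensor files (e.g. test_data_set_0/input_0.pb). If you want to inspect such a tensor before preprocessing it, the onnx package can load it (a sketch, independent of what parse_onnx_reference_files.py does internally):

```python
import onnx
from onnx import numpy_helper

# Read one reference tensor from a model zoo archive.
tensor = onnx.TensorProto()
with open("test_data_set_0/input_0.pb", "rb") as f:
    tensor.ParseFromString(f.read())

array = numpy_helper.to_array(tensor)  # numpy array with the original shape/dtype
print(array.shape, array.dtype)
```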
If the model was acquired in some other way (self-trained or converted), you can create sample data with the following script (again, it is assumed that the pico-cnn/utils specific virtual environment is activated):

```bash
cd util
python3.6 generate_reference_data.py --model model.onnx --file PATH_TO_INPUT_DATA --shape 1 NUM_CHANNELS HEIGHT WIDTH
```

If --file is not given, the script will use random values instead. Supported file types are audio/x-wav, image/jpeg and image/x-portable-greymap.
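Conceptually, the script runs the model on the given (or random) input and stores the resulting input/output pair. A rough Python equivalent using onnxruntime (an assumption for illustration; the script itself may use a different runtime and writes Pico-CNN's own .data format):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Random input matching --shape 1 NUM_CHANNELS HEIGHT WIDTH (1x3x224x224 as an example).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
y = session.run(None, {input_name: x})[0]
print(y.shape)
```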
LeNet-5 implementation as proposed by Yann LeCun et al. [LeCun1998]. ONNX model at: ./data/lenet/lenet.onnx
Note: the MNIST dataset is required to run the LeNet-specific code.
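PATH_TO_MNIST should point to the raw MNIST files (an assumption: the common IDX format). If you want to sanity-check your copy of the dataset from Python first, a minimal IDX reader looks like this (a sketch; the generated C++ example code does its own parsing):

```python
import struct
import numpy as np

def read_idx(path):
    """Read an MNIST IDX file: 2 zero bytes, dtype byte, ndim byte, big-endian dims, raw data."""
    with open(path, "rb") as f:
        _zero, _dtype, ndim = struct.unpack(">HBB", f.read(4))
        dims = struct.unpack(">" + "I" * ndim, f.read(4 * ndim))
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(dims)

images = read_idx("t10k-images-idx3-ubyte")  # shape (10000, 28, 28)
labels = read_idx("t10k-labels-idx1-ubyte")  # shape (10000,)
```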
```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input ../data/lenet/lenet.onnx
```

Copy examples/lenet.cpp from the examples folder to onnx_import/generated_code/lenet.
```bash
cd generated_code/lenet
make lenet
./lenet network.weights.bin PATH_TO_MNIST
```

MNIST Multi-Layer-Perceptron (MLP). ONNX model at: ./data/mnist_mlp/mnist_mlp.onnx
```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input ../data/mnist_mlp/mnist_mlp.onnx
```

Copy examples/mnist_mlp.cpp to onnx_import/generated_code/mnist_mlp.
```bash
cd generated_code/mnist_mlp
make mnist_mlp
./mnist_mlp network.weights.bin PATH_TO_MNIST
```

MNIST perceptron: a single fully-connected layer. ONNX model at: ./data/mnist_simple_perceptron/mnist_simple_perceptron.onnx
```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input ../data/mnist_simple_perceptron/mnist_simple_perceptron.onnx
```

Copy examples/mnist_simple_perceptron.cpp to onnx_import/generated_code/mnist_simple_perceptron.
```bash
cd generated_code/mnist_simple_perceptron
make mnist_simple_perceptron
./mnist_simple_perceptron network.weights.bin PATH_TO_MNIST
```

AlexNet [Krizhevsky2017]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/alexnet
Note: the ImageNet dataset is required to run the AlexNet-specific code.
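The AlexNet example decodes the JPEG input itself (hence the -ljpeg below). For orientation, the preprocessing it corresponds to looks roughly like this in Python with Pillow (the 227x227 input size, the file name, and plain mean subtraction are assumptions):

```python
import numpy as np
from PIL import Image

# Decode and resize the image to the network's expected input size.
img = Image.open("cat.jpg").convert("RGB").resize((227, 227))
x = np.asarray(img, dtype=np.float32)  # HWC layout
x = x.transpose(2, 0, 1)               # to CHW, the layout CNNs typically expect
# Per-channel mean subtraction would use the values from PATH_TO_IMAGE_MEANS here.
```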
```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/alexnet.onnx
```

Copy examples/alexnet.cpp to onnx_import/generated_code/alexnet and add -ljpeg to LDFLAGS in the generated Makefile.

```bash
cd generated_code/alexnet
make alexnet
./alexnet network.weights.bin PATH_TO_IMAGE_MEANS PATH_TO_LABELS PATH_TO_IMAGE
```

VGG-16 [Simonyan2014]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/vgg
Note: the ImageNet dataset is required to run the VGG-16-specific code.

```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/vgg16.onnx
```

Copy examples/vgg.cpp to onnx_import/generated_code/vgg16/vgg16.cpp and add -ljpeg to LDFLAGS in the generated Makefile.

```bash
cd generated_code/vgg16
make vgg16
./vgg16 network.weights.bin PATH_TO_IMAGE_MEANS PATH_TO_LABELS PATH_TO_IMAGE
```

VGG-19 [Simonyan2014]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/vgg
Note: the ImageNet dataset is required to run the VGG-19-specific code.

```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/vgg19.onnx
```

Copy examples/vgg.cpp to onnx_import/generated_code/vgg19/vgg19.cpp and add -ljpeg to LDFLAGS in the generated Makefile.

```bash
cd generated_code/vgg19
make vgg19
./vgg19 network.weights.bin PATH_TO_IMAGE_MEANS PATH_TO_LABELS PATH_TO_IMAGE
```

MobileNet-V2 [Sandler2019]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/mobilenet
See the Reference Input section above for details on input and output data generation.

```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/mobilenetv2-1.0.onnx
cd generated_code/mobilenetv2-1
make reference_input
./reference_input network.weights.bin input.data output.data
```

Inception-V3 [Szegedy2014]. ONNX model retrieved from: http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz and converted using TensorFlow Slim.
See the Reference Input section above for details on input and output data generation.

```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/inceptionv3.onnx
cd generated_code/inceptionv3
make reference_input
./reference_input network.weights.bin input.data output.data
```

Inception-ResNet-V2 [Szegedy2016]. ONNX model retrieved from: http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz and converted using TensorFlow Slim.
See the Reference Input section above for details on input and output data generation.

```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/inception_resnet_v2.onnx
cd generated_code/inception_resnet_v2
make reference_input
./reference_input network.weights.bin input.data output.data
```

TC-ResNet-8 [Choi2019], trained on the Google Speech Commands dataset.
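Since generate_reference_data.py accepts audio/x-wav input, a Speech Commands clip can be loaded in Python with the standard library like this (a sketch; the file name is a placeholder and the script's own loading may differ):

```python
import wave
import numpy as np

# Read a 16-bit mono WAV clip (Speech Commands files are 16 kHz, about 1 s long).
with wave.open("on/0a2b400e_nohash_0.wav", "rb") as wav:
    rate = wav.getframerate()
    frames = wav.readframes(wav.getnframes())

samples = np.frombuffer(frames, dtype=np.int16)
x = samples.astype(np.float32) / 32768.0  # scale to [-1, 1]
print(rate, x.shape)
```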
```bash
cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/tc-res8-update.onnx
cd generated_code/tc-res8-update
make reference_input
./reference_input network.weights.bin input.data output.data
```

- Alexander Jung (Maintainer) @alexjung
- Konstantin Lübeck (Maintainer) @k0nze
- Nils Weinhardt (Developer) @sea-buckthorn
Please cite Pico-CNN in your publications if it helps your research:
```bibtex
@inproceedings{luebeck2019picocnn,
    author    = {Lübeck, Konstantin and Bringmann, Oliver},
    title     = {A Heterogeneous and Reconfigurable Embedded Architecture for Energy-Efficient Execution of Convolutional Neural Networks},
    booktitle = {Architecture of Computing Systems (ARCS 2019)},
    year      = {2019},
    month     = {May},
    pages     = {267--280},
    address   = {Copenhagen, Denmark},
    isbn      = {978-3-030-18656-2}
}
```
[LeCun1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
[Krizhevsky2017] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, May 2017.
[Simonyan2014] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556 [cs], Sep. 2014.
[Sandler2019] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” arXiv:1801.04381 [cs], Mar. 2019.
[Szegedy2014] C. Szegedy et al., “Going Deeper with Convolutions,” arXiv:1409.4842 [cs], Sep. 2014.
[Szegedy2016] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” arXiv:1602.07261 [cs], Aug. 2016.
[Choi2019] S. Choi et al., “Temporal Convolution for Real-time Keyword Spotting on Mobile Devices,” arXiv:1904.03814 [cs, eess], Nov. 2019.