This repository provides a guide to training and running a TensorFlow-based MNIST model locally and deploying it on the Hailo8L chip. It includes tools for training, testing, and generating sample images, as well as instructions for converting the trained model into a format that can be optimized and deployed on the Hailo8L.
- Train a Convolutional Neural Network (CNN) on the MNIST dataset using TensorFlow (a minimal sketch of such a model appears after this list).
- Export the trained model in several formats (.keras, .tflite, .onnx).
- Run inference locally on the trained model.
- A utility to generate images for the MNIST model, if needed.
- Instructions for converting and optimizing the model using the Hailo Dataflow Compiler (DFC); installation and usage instructions are in a separate document.
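For orientation, the block below is a minimal sketch of the kind of CNN such a training script might build; the actual architecture and options in src/model.py may differ.

```python
import tensorflow as tf

# Minimal MNIST CNN sketch; the real model in src/model.py may differ.
def build_model() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0  # add channel dim, scale to [0, 1]
    model = build_model()
    model.fit(x_train, y_train, epochs=1, batch_size=128)
    model.save("mnist.keras")
```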
- Install Dependencies
To get started, clone this repository:

```bash
git clone https://github.com/l-nmch/hailo-mnist.git
cd hailo-mnist
```

Create a virtual environment and activate it:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

Install the required Python libraries:

```bash
python3 -m pip install -r requirements.txt
```

- Train the Model
Once the dependencies are installed, you can train the MNIST model using the following command:
```bash
python3 src/model.py --save mnist.keras
```

This will train the model on the MNIST dataset and save it in the .keras format as mnist.keras.
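The feature list above also mentions .tflite and .onnx exports. If the training script does not already provide flags for these, a conversion along the following lines should work: the TFLite part uses TensorFlow's built-in converter, while the ONNX part assumes the separate tf2onnx package is installed and compatible with your TensorFlow/Keras versions.

```python
import tensorflow as tf

# Load the model saved by the training step
model = tf.keras.models.load_model("mnist.keras")

# .tflite: TensorFlow's built-in converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("mnist.tflite", "wb") as f:
    f.write(converter.convert())

# .onnx: requires tf2onnx (pip install tf2onnx)
import tf2onnx
spec = (tf.TensorSpec((None, 28, 28, 1), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path="mnist.onnx")
```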
- Test the Model Locally
To perform inference locally, use the predict.py script. For example, if you have an image (e.g., images/0.png), run:
```bash
python3 src/predict.py --model mnist.keras --input_file images/0.png
```

For further explanation of how to run the model on the Hailo NPU, head to the Documentation.
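If you just want to sanity-check the trained model without going through predict.py, the sketch below loads the saved model and classifies a single 28x28 grayscale image with Pillow; the preprocessing in the repository's script may differ.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the trained model saved by the training step
model = tf.keras.models.load_model("mnist.keras")

# Load a digit image, convert to 28x28 grayscale, scale pixels to [0, 1]
img = Image.open("images/0.png").convert("L").resize((28, 28))
x = np.asarray(img, dtype=np.float32)[None, ..., None] / 255.0

# Predict the digit class
probs = model.predict(x)
print("Predicted digit:", int(np.argmax(probs, axis=-1)[0]))
```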
This project is licensed under the MIT License - see the LICENSE file for details.