FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. The resulting FPGA accelerators are highly efficient and can yield high throughput and low latency.
FINN runs inside a Docker container. The container, together with its Jupyter notebook environment, is launched with: bash ./run-docker.sh notebook
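As a rough illustration of what happens inside the container, the sketch below points FINN's dataflow builder at a quantized network that has already been exported to ONNX/QONNX (e.g. from Brevitas). The model filename, output directory, clock period and throughput target are placeholders for illustration only, not values taken from this project.

# Minimal sketch of a FINN dataflow build (placeholder filenames/targets).
from finn.builder.build_dataflow import build_dataflow_cfg
from finn.builder.build_dataflow_config import (
    DataflowBuildConfig,
    DataflowOutputType,
    ShellFlowType,
)

cfg = DataflowBuildConfig(
    output_dir="build_output",           # reports, stitched IP and bitfile land here
    synth_clk_period_ns=10.0,            # 100 MHz target clock (example value)
    board="Pynq-Z2",                     # PYNQ-Z2 platform, as used in this work
    shell_flow_type=ShellFlowType.VIVADO_ZYNQ,
    target_fps=1000,                     # example throughput target used for folding
    generate_outputs=[
        DataflowOutputType.ESTIMATE_REPORTS,
        DataflowOutputType.BITFILE,
        DataflowOutputType.PYNQ_DRIVER,
        DataflowOutputType.DEPLOYMENT_PACKAGE,
    ],
)

# "model_quantized.onnx" is a placeholder for the exported quantized model.
build_dataflow_cfg("model_quantized.onnx", cfg)

The builder runs FINN's transformation steps (streamlining, conversion to hardware layers, folding, IP generation and bitfile synthesis) and writes the PYNQ driver and deployment package to the output directory.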
Versions used in this setup:
FINN: 0.10
Docker: 25.0.1
Vivado: 2022.1
Vitis HLS: 2022.1
Ubuntu: 20.04
Python: 3.11.5
Board: Xilinx PYNQ-Z2 (PYNQ image 2.7)
FINN repository: https://github.com/Xilinx/finn
FINN documentation: https://finn.readthedocs.io/en/latest/
For more detail, please see the FINN and Model Training sections of the thesis linked below.