GenBaB is a framework for neural network verification with branch-and-bound (BaB) that supports general nonlinearities. GenBaB is formulated generally so that branch-and-bound can be applied to verification on general computational graphs with general nonlinearities. GenBaB also leverages the new flexibility afforded by nonlinearities beyond piecewise-linear ReLU to make smarter branching decisions, with an improved branching heuristic named BBPS for choosing neurons to branch, and a mechanism for optimizing the branching points of the nonlinear functions being branched. GenBaB has been demonstrated on a wide range of NNs, including NNs with activation functions such as Sigmoid, Tanh, Sine, and GeLU, as well as NNs involving multi-dimensional nonlinear operations such as LSTMs and Vision Transformers. GenBaB has also enabled new applications beyond simple NNs, such as AC Optimal Power Flow (ACOPF).
The GenBaB paper has been accepted at the 31st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2025):
Zhouxing Shi*, Qirui Jin*, Zico Kolter, Suman Jana, Cho-Jui Hsieh, Huan Zhang. Neural Network Verification with Branch-and-Bound for General Nonlinearities. To appear in TACAS 2025. (*Equal contribution)
@inproceedings{shi2025genbab,
title={Neural Network Verification with Branch-and-Bound for General Nonlinearities},
author={Shi, Zhouxing and Jin, Qirui and Kolter, Zico and Jana, Suman and Hsieh, Cho-Jui and Zhang, Huan},
booktitle={International Conference on Tools and Algorithms for the Construction and Analysis of Systems},
year={2025}
}

The GenBaB algorithm is implemented in our comprehensive α,β-CROWN toolbox (our paper considered α,β-CROWN without GenBaB as a baseline, but GenBaB is integrated into the newer α,β-CROWN). α,β-CROWN is included as a submodule in this repository. Clone this repository along with the α,β-CROWN submodule by:
git clone --recursive https://github.com/shizhouxing/GenBaB.git
cd GenBaB

Benchmarks used in the GenBaB paper are hosted in a Hugging Face repository. Download them into a benchmarks folder by:
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/zhouxingshi/GenBaB benchmarks

Python 3.11+ and PyTorch 2.2+ compatible with CUDA are required. GenBaB has been tested with Python 3.11 and PyTorch 2.2. We recommend using Miniconda to set up a clean Python environment and install PyTorch 2.2.0.
If you are using a Linux x86 environment, you may install miniconda to ~/miniconda3 by:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -u -p ~/miniconda3
~/miniconda3/bin/conda init bash
bash

Create and activate a new environment with Python 3.11 for GenBaB:
conda create --name GenBaB -y python=3.11
conda activate GenBaB

Use conda to install PyTorch 2.2 compatible with your CUDA version:
# If you are using CUDA 11.8
conda install -y pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 -c pytorch -c nvidia
# If you are using CUDA 12.1 or above
conda install -y pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=12.1 -c pytorch -c nvidia
After setting up Python and PyTorch, install other dependencies by:
cd alpha-beta-CROWN
pip install -e .
cd complete_verifier
pip install -r requirements.txt

We have also attached the conda environment we used as environment.yml, with Python 3.11, PyTorch 2.2, and CUDA 12.1.
If you have CUDA 12.1+, you may directly create the conda environment from the environment.yml file, with exactly the same dependency versions, by:
conda env create -f environment.yml

Run code from the alpha-beta-CROWN/complete_verifier directory.
The basic usage for running GenBaB on a model is to run abcrown.py with a YAML configuration file:
python abcrown.py --config CONFIG_FILE

For the benchmarks we used, a configuration file has already been included in each benchmark folder. For example, to run GenBaB on the Sigmoid 4x100 model:
python abcrown.py --config ../../benchmarks/cifar/sigmoid_4fc_100/config.yaml

A list of commands for running GenBaB on all the benchmarks from our experiments is in run.sh.
Before running the commands in run.sh, it is recommended to run warmup.sh first. The warmup script runs each model architecture on a single instance with a short timeout to build the lookup table of pre-optimized branching points. Since this lookup table can be shared by all instances with the same model architecture, the warmup step separates the time cost of building the lookup table from the main experiments. Otherwise, the cost of pre-optimizing branching points would be counted toward the first instance of each new model architecture.
Options to run variants of GenBaB or the baseline without branch-and-bound:
- --complete_verifier skip: Disable branch-and-bound.
- --nonlinear_split_method babsr-like: Use a BaBSR-like branching heuristic instead of the BBPS heuristic proposed in the paper.
- --branching_point_method uniform: Disable optimized branching points.
- --nonlinear_split_relu_only: For models with a mix of ReLU and other nonlinearities, only branch on ReLU neurons, not other nonlinearities.
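For instance, the incomplete-verification baseline without branch-and-bound can be run on the Sigmoid 4x100 example used earlier (the config path below is the same one from the usage example above):

```shell
# Run the baseline without branch-and-bound on the Sigmoid 4x100 example
# (the config path matches the earlier usage example; adjust it for other benchmarks).
python abcrown.py --config ../../benchmarks/cifar/sigmoid_4fc_100/config.yaml --complete_verifier skip
```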
The design of GenBaB is intended to be general for models containing various nonlinearities.
To run GenBaB on new models, it is recommended to prepare the model and specifications for verification following the general VNN-COMP format.
Specifically, in a folder, models can be provided as ONNX files, and specifications should be provided using the VNN-LIB format.
There should also be a CSV file, instances.csv, listing the instances, where each row contains the path to the ONNX file, the path to the VNN-LIB file, and the timeout (in seconds) for that instance.
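As a hypothetical illustration (the file names and timeouts below are made up; only the three-column format follows the description above), an instances.csv could be created as:

```shell
# Hypothetical instances.csv: each row is "ONNX path, VNN-LIB path, timeout in seconds".
cat > instances.csv <<'EOF'
onnx/model_1.onnx,vnnlib/prop_1.vnnlib,300
onnx/model_1.onnx,vnnlib/prop_2.vnnlib,300
onnx/model_2.onnx,vnnlib/prop_1.vnnlib,600
EOF
cat instances.csv
```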
See the example of ml4acopf, as well as all the benchmarks used in VNN-COMP 2024.
Then, add a configuration file to the folder; by default, you may use default_config.yaml. Finally, follow the usage above to run GenBaB.
We also provide a Dockerfile for automatically setting up the environment.
The following assumes your system has a GPU.
Build the image:
docker build -t genbab .

Start and enter a container:
docker run -di --shm-size=16g --gpus all --name genbab-instance genbab
docker exec -ti genbab-instance /bin/bash
conda activate GenBaB
cd GenBaB/alpha-beta-CROWN/complete_verifier

You may now try running GenBaB on an example benchmark. Run:
python abcrown.py --config ../../benchmarks/cifar/sigmoid_4fc_100/config.yaml

Alternatively, for a CPU-only system, build the image:
docker build -t genbab --build-arg CPU=1 .

Start and enter a container:
docker run -di --shm-size=16g --name genbab-instance genbab
docker exec -ti genbab-instance /bin/bash
conda activate GenBaB
cd GenBaB/alpha-beta-CROWN/complete_verifier

You may now try running GenBaB on an example benchmark. Run:
python abcrown.py --config ../../benchmarks/cifar/sigmoid_4fc_100/config.yaml --device cpu