WraAct constructs tight convex hull approximations of activation functions for sound and efficient neural network verification.
📖 See also:
This repo is based on the following papers and provides implementations of the algorithms described therein. It is regularly maintained and updated for the algorithm part.
- ReLU Hull Approximation (POPL'24) (Ma et al., 2024)
- Convex Hull Approximation for Activation Functions (OOPSLA'25) (Ma et al., 2025)
- 🔍 Precise Approximations - Generates mathematically sound convex hull constraints for various activation functions
- 🚀 Performance Optimized - Uses Numba JIT compilation for fast constraint generation
- 🧮 Multiple Activation Types - Supports ReLU, Sigmoid, Tanh, LeakyReLU, ELU, and many more
- 🔄 V-H Representation - Efficiently converts between vertex and halfspace representations with pycddlib
- 🌐 Multi-Dimensional Support - Handles both unary and multi-variable activation functions
```bash
# Clone the repository
git clone https://github.com/ZhongkuiMa/rover_alpha.git
cd rover_alpha/wraact

# Install dependencies
pip install pycddlib==2.1.8.post1 numpy==2.2.4 numba==0.61.2
```

This tutorial introduces the concept of the function hull and the algorithm to calculate the function hull of an activation function. The function hull, represented by a set of linear constraints, provides sound constraints for neural network verification.
A polytope (high-dimensional polyhedron) can be defined by:
- A set of halfspaces (linear constraints), called H-representation.
- A set of vertices, called V-representation.
ℹ️ Note:
These are basic concepts in computational geometry. See any computational geometry textbook for more.
💡 Tip:
Here, we only discuss bounded convex polytopes (no unbounded ones).
Formal definitions are available in computational geometry literature.
A halfspace is the set of points satisfying a linear inequality. A polytope bounded by $m$ such halfspaces is written as

$$\boldsymbol{b} + \boldsymbol{A}\boldsymbol{x} \ge \boldsymbol{0},$$

where:

- $\boldsymbol{A} \in \mathbb{R}^{m \times d}$ is the matrix of coefficients.
- $\boldsymbol{b} \in \mathbb{R}^m$ is the bias term.

All constraints are stored together as the matrix $[\boldsymbol{b} \mid \boldsymbol{A}]$.

Shape: $m \times (d + 1)$.

- First column: bias terms $\boldsymbol{b}$.
- Remaining columns: variable coefficients.
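For instance, simple box bounds $\boldsymbol{l} \le \boldsymbol{x} \le \boldsymbol{u}$ fit this format with one row per bound. The sketch below assumes the $[\boldsymbol{b} \mid \boldsymbol{A}]$ layout described above (each row $(b, \boldsymbol{a})$ encodes $b + \boldsymbol{a}^\top \boldsymbol{x} \ge 0$); `box_to_hrep` is a hypothetical helper for illustration, not a WraAct API.

```python
import numpy as np

def box_to_hrep(l: np.ndarray, u: np.ndarray) -> np.ndarray:
    """Box bounds l <= x <= u as an H-representation in the [b | A] format.

    Each row (b, a) encodes the halfspace b + a @ x >= 0.
    (Hypothetical helper for illustration; not part of WraAct.)
    """
    d = len(l)
    eye = np.eye(d)
    lower = np.hstack([-l[:, None], eye])   # x_i - l_i >= 0
    upper = np.hstack([u[:, None], -eye])   # u_i - x_i >= 0
    return np.vstack([lower, upper])        # shape (2d, d + 1)

print(box_to_hrep(np.array([-1.0, 0.0]), np.array([1.0, 2.0])))
```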
Vertices directly define the polytope.

Vertices are stored as the matrix $[\boldsymbol{1} \mid \boldsymbol{V}]$, one row per vertex.

Shape: $k \times (d + 1)$ for $k$ vertices.

- First column: all ones (to indicate vertices).
- Rest: vertex coordinates.
🔗 Reference:
We use the same format as pycddlib.
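The sketch below converts a small H-representation into its V-representation using the pinned pycddlib 2.1 API; the box constraints are made up for illustration. Note that the first column of the resulting generator matrix is all ones, marking vertices.

```python
import cdd  # pycddlib

# H-representation of the box [-1, 1]^2 in the [b | A] format (b + A x >= 0):
#   1 + x1 >= 0, 1 - x1 >= 0, 1 + x2 >= 0, 1 - x2 >= 0
h_rows = [
    [1.0,  1.0,  0.0],
    [1.0, -1.0,  0.0],
    [1.0,  0.0,  1.0],
    [1.0,  0.0, -1.0],
]
mat = cdd.Matrix(h_rows, number_type="float")
mat.rep_type = cdd.RepType.INEQUALITY

# Convert to the V-representation: each row is [1, v1, v2] for a vertex.
poly = cdd.Polyhedron(mat)
print(poly.get_generators())  # the four corners (+-1, +-1)
```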
Activation functions are non-linear functions applied to neuron outputs.
We classify them by:
- Input dimension: Unary vs Multi-variable.
- Shape: ReLU-like vs S-shaped.
- Unary Activation Functions:
  - Form: $f: \mathbb{R} \rightarrow \mathbb{R}$ (scalar to scalar).
  - Examples: ReLU, LeakyReLU, ELU, SiLU, Sigmoid, Tanh.
  - Function hull calculated jointly for multiple neurons, i.e., for the element-wise map $\mathbb{R}^n \rightarrow \mathbb{R}^n$.
- Multi-Variable Activation Functions:
  - Form: $f: \mathbb{R}^n \rightarrow \mathbb{R}$ (vector to scalar).
  - Examples: MaxPool.
  - Function hull calculated for a single neuron, i.e., for one map $\mathbb{R}^n \rightarrow \mathbb{R}$ (a short sketch of both classes follows the note below).
💬 Note:
Strictly speaking, we are computing function hull over-approximations, but we simply call them function hulls.
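As a concrete illustration of the two classes above, here is a minimal numpy sketch; the function names are generic examples, not WraAct APIs.

```python
import numpy as np

# Unary: f maps a scalar to a scalar and is applied element-wise,
# so one hull can cover several neurons jointly.
def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

# Multi-variable: f maps a vector to a single scalar (one neuron per hull),
# e.g. a MaxPool window flattened into a vector.
def maxpool(x: np.ndarray) -> float:
    return float(np.max(x))

x = np.array([-1.0, 0.5, 2.0])
print(sigmoid(x))   # three outputs, one per input neuron
print(maxpool(x))   # a single output for the whole window
```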
- ReLU-like Activation Functions:
  - Examples: ReLU, LeakyReLU, ELU.
  - Construct a DLP (double linear piece) upper bound (a sketch follows the tip below).
  - Behave like:
    - Negative region: almost constant (e.g., 0).
    - Positive region: close to identity ($y = x$).
- S-shaped Activation Functions:
  - Examples: Sigmoid, Tanh.
  - Construct two DLPs (upper and lower bounds).
  - Behave like:
    - Negative region: close to constant (e.g., 0).
    - Positive region: close to constant (e.g., 1).
  - Monotonically increasing or decreasing.
💡 Tip:
No strict mathematical definition for "ReLU-like" or "S-shaped"; it's based on behavior.
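To make "DLP" concrete, here is a minimal sketch of one sound DLP upper bound for ELU over an interval $[l, u]$ with $l < 0 < u$, taking a DLP to be the pointwise maximum of two linear pieces. This chord-plus-identity construction is only an illustration, not necessarily the construction used in WraAct.

```python
import numpy as np

def elu(x):
    return np.where(x >= 0, x, np.exp(x) - 1.0)

def dlp_upper_elu(l: float, u: float):
    """A DLP (max of two linear pieces) that upper-bounds ELU on [l, u], l < 0 < u.

    Piece 1: the chord of ELU over [l, 0] (ELU is convex, so the chord lies above it).
    Piece 2: the identity y = x, which equals ELU on [0, u].
    (Illustrative construction only; not necessarily the one used in WraAct.)
    """
    assert l < 0 < u
    k1 = (0.0 - elu(l)) / (0.0 - l)           # chord slope on [l, 0]
    piece1 = lambda x: k1 * x                  # chord passes through (0, 0)
    piece2 = lambda x: x                       # identity
    return lambda x: np.maximum(piece1(x), piece2(x))

g = dlp_upper_elu(-2.0, 2.0)
xs = np.linspace(-2.0, 2.0, 5)
print(np.all(g(xs) >= elu(xs)))  # True: the DLP stays above ELU on [l, u]
```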
The Function Hull is a polytope in the input-output space that encloses the graph of an activation function over a given input domain.
We only consider bounded convex polytopes and focus on their H-representation.
The goal is to construct a polytope that wraps the graph of the activation function.
Inputs:

- $\boldsymbol{C} \in \mathbb{R}^{m \times (n+1)}$: input polytope constraints.
- $\boldsymbol{l}, \boldsymbol{u} \in \mathbb{R}^n$: (optional) lower and upper bounds of the input variables.

Output:

- Constraints defining the function hull of the activation function.

Approach:

- Extend output dimensions one by one.
- Construct convex/concave piecewise-linear bounds.
- Use DLP (double linear piece) functions where needed.
For each output dimension, new constraints are built from two ingredients:

- Input constraints: the rows of $\boldsymbol{C}$, each a linear constraint $c(\boldsymbol{x}) \ge 0$ over the input variables.
- Constructed linear pieces: the linear pieces $g(\boldsymbol{x})$ of the DLP bounds on the output.
- Compute the quotient, at the vertices of the input polytope, between the value of the input constraint and the gap between the linear piece and the function.

🔍 Efficient Calculation:
Enumerate all vertices — efficient for low dimensions (2–10).

- Then take the extreme (minimum or maximum) quotient as the coefficient of the output term.
- And add the resulting constraint, which, depending on the convexity of the bound, constrains the output from above or from below.
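As a worked 1-D example of what the output looks like: for a single ReLU neuron $y = \max(0, x)$ with $l < 0 < u$, the function hull is the well-known triangle $y \ge 0$, $y \ge x$, $y \le \frac{u(x - l)}{u - l}$. The sketch below writes these constraints in the $[\boldsymbol{b} \mid \boldsymbol{A}]$ format used throughout; it only illustrates the shape of the result, not the multi-neuron WraAct algorithm.

```python
import numpy as np

def relu_hull_1d(l: float, u: float) -> np.ndarray:
    """Exact hull of {(x, y) : y = max(0, x), l <= x <= u} for l < 0 < u.

    Each row (b, a_x, a_y) encodes b + a_x * x + a_y * y >= 0.
    (Illustrative 1-D case only; not the WraAct multi-neuron algorithm.)
    """
    assert l < 0 < u
    return np.array([
        [0.0, 0.0, 1.0],                         # y >= 0
        [0.0, -1.0, 1.0],                        # y >= x
        [-u * l / (u - l), u / (u - l), -1.0],   # y <= u * (x - l) / (u - l)
    ])

print(relu_hull_1d(-1.0, 2.0))
```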
We warmly welcome contributions from everyone! Whether it's fixing bugs 🐞, adding features ✨, improving documentation 📚, or just sharing ideas 💡—your input is appreciated!
📌 NOTE: Direct pushes to the main branch are restricted. Make sure to fork the repository and submit a Pull Request for any changes!