GFORS is a GPU-accelerated inexact solution framework for binary integer programs (BIPs): problems that minimize or maximize an objective of the form $x^\top Q x + c^\top x + c_0$ over binary variables $x \in \{0,1\}^n$, subject to linear inequality constraints (matrix $A$) and linear equality constraints (matrix $B$) with right-hand-side vectors $b$ and $d$.
- Download or clone this repository.
- Install Conda.
- (GPU only) Ensure CUDA is available (e.g., run `nvidia-smi`).
- Create the environment from `conda_envs/conda_env.yml`.
`main.py` provides the basic workflow (a minimal sketch follows the list):

- Instance construction: build an instance via the `Instance` class.
- Model parameters: set parameters as documented in `main.py`.
- Run: instantiate and run `GPUModel`.
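The sketch below is illustrative only: the module paths, the keyword-argument names, the `GPUModel` constructor signature, the `run()` method name, and the parameter dictionary are assumptions; only `Instance.from_matrix_vector`, `sense`, and the rule that all inputs except `c` may be `None` come from this README.

```python
import numpy as np

# Module paths are assumptions; adjust to this repository's actual layout.
from instance import Instance
from gpu_model import GPUModel

# Toy objective: maximize c^T x over x in {0, 1}^3 (illustrative only).
c = np.array([3.0, -5.0, 4.0])

# All inputs except c may be None (see the Instance constructors below).
inst = Instance.from_matrix_vector(Q=None, A=None, B=None, b=None, d=None,
                                   c=c, c_0=0.0, sense="max")

# Parameter names and values are placeholders; see main.py for the documented options.
params = {}

model = GPUModel(inst, **params)  # constructor signature is an assumption
model.run()                       # method name is an assumption
```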
Note: GPU acceleration typically becomes beneficial for large instances.
The `Instance` class supports:

- `from_matrix_vector`: directly supply $(Q, A, B, b, d, c, c_0)$ and `sense="min"` or `"max"`; all inputs may be `None` except $c$.
- `from_data_file`: load parameters from files saved previously via `save()` (see the round-trip sketch after this list).
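A possible round trip; the path argument and the exact signatures of `save()` and `from_data_file` are assumptions, only the method names appear above:

```python
# Persist an instance built earlier and reload it later (paths are illustrative).
inst.save("insts/my_instance")
inst2 = Instance.from_data_file("insts/my_instance")
```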
Equality constraints can reduce sampling efficiency. We currently offer three options: totally unimodular (TU) reformulation, customized sampling, and monotone relaxation. Usage notes are below; details appear in the preprint.
TU reformulation: most useful when the equality constraint matrix $B$ contains an invertible, totally unimodular submatrix. To use it (see the sketch after the list):
- Pass `TU_index=(I, J)` to `from_matrix_vector`, where `I` are the row indices of $B$ (e.g., `I = list(range(B.shape[0]))` to select all rows) and `J` are the column indices of $B_I$ that form an invertible submatrix.
- Set `TU_reform = 2` in the model parameters.
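A sketch of these two steps. The keyword-argument names, which right-hand-side vector pairs with $B$, and the layout of the parameter dictionary are assumptions; `TU_index=(I, J)` and `TU_reform = 2` come from the notes above.

```python
import numpy as np
from instance import Instance  # module path is an assumption

# Toy equality system: columns 0 and 1 of B form an invertible (identity) submatrix.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
d = np.array([1.0, 1.0])       # pairing d with B is an assumption
c = np.array([1.0, 2.0, 3.0])

I = list(range(B.shape[0]))    # select all equality rows
J = [0, 1]                     # columns of B_I forming an invertible submatrix

inst = Instance.from_matrix_vector(Q=None, A=None, B=B, b=None, d=d,
                                   c=c, c_0=0.0, sense="min",
                                   TU_index=(I, J))

params = {"TU_reform": 2}      # other model parameters omitted; see main.py
```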
Customized sampling: provide a function

`sample_func(self, x: torch.Tensor, best_val: torch.Tensor) -> (int, torch.Tensor, torch.Tensor)`

that consumes the current fractional solution `x` and the incumbent value `best_val`, generates candidate solutions, and returns the best one. See `sampling_algos.py` for documentation and examples; a minimal sketch follows.
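The sketch below assumes the returned tuple is (index of the best candidate, candidate batch, candidate scores) and that `x` is a 1-D fractional solution; both the return semantics and the scoring logic are guesses, and `sampling_algos.py` remains the authoritative reference.

```python
import torch

def sample_func(self, x: torch.Tensor, best_val: torch.Tensor):
    """Round the fractional solution x into binary candidates and pick the best (illustrative)."""
    n_samples = 128
    # Bernoulli rounding: each candidate sets coordinate i to 1 with probability x[i].
    probs = x.clamp(0.0, 1.0).expand(n_samples, -1)
    candidates = torch.bernoulli(probs)
    # Placeholder scoring; a real sampler would evaluate the BIP objective
    # (and feasibility) on the GPU, e.g., using the instance data stored on self.
    values = candidates.sum(dim=1)
    best_idx = int(torch.argmin(values))
    return best_idx, candidates, values
```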
Monotone relaxation: when applicable, replace equalities by inequalities so that the relaxed feasible set is a superset or subset of the original one. Repair the best incumbent post hoc, or incorporate repairs during the GPU iterations (try this only if the repair is GPU-efficient); a hypothetical repair sketch follows.
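As a concrete illustration (not taken from this repository), suppose a partitioning equality $\sum_{i \in G} x_i = 1$ is relaxed to $\sum_{i \in G} x_i \ge 1$; a simple post-hoc repair keeps a single selected variable per group. The function below is hypothetical.

```python
import torch

def repair_partition(x: torch.Tensor, groups: list) -> torch.Tensor:
    """Hypothetical repair for a relaxed partitioning constraint (illustrative only)."""
    x = x.clone()
    for idx in groups:              # idx: LongTensor of variable indices in one group
        selected = idx[x[idx] > 0.5]
        if len(selected) == 0:
            x[idx[0]] = 1.0         # no item chosen: pick one to restore feasibility
        elif len(selected) > 1:
            x[idx] = 0.0            # clear the group, then keep a single selection
            x[selected[0]] = 1.0
    return x
```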
Six small test instances are provided in the insts folder. Larger test instances can be downloaded from this Hugging Face repository. Note that the largest instances may require substantial GPU memory to solve.