GFORS: GPU-Accelerated First-Order Method with Randomized Sampling for Binary Integer Programs

GFORS is a GPU-accelerated inexact solution framework for binary integer programs (BIPs) of the following form:

$$ \begin{aligned} \min_{x\in\{0,1\}^n} &~ \langle x, Qx \rangle + \langle c, x \rangle + c_0\\ \text{s.t.} &~ Ax \ge b,\\ &~ Bx = d, \end{aligned} $$

where $(Q, A, B, b, d, c, c_0)$ are model parameters. While it provides probabilistic bounds on feasibility/optimality and near-stationarity guarantees on its relaxation, it does not yet provide formal feasibility or optimality certificates. The primary design goal is to efficiently generate solutions for large-scale BIPs, with an emphasis on achieving high practical quality. Design details and some computational results can be found in this preprint.

Installation

  1. Download or clone this repository.
  2. Install Conda.
  3. (GPU only) Ensure CUDA is available (e.g., run nvidia-smi).
  4. Create the environment from conda_envs/conda_env.yml.
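For example, assuming a standard Conda setup, step 4 amounts to running `conda env create -f conda_envs/conda_env.yml` and then activating the resulting environment.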

Basic Usage

main.py provides the basic workflow:

  1. Instance construction: build an instance via the Instance class.
  2. Model parameters: set parameters as documented in main.py.
  3. Run: instantiate and run GPUModel.

Note: GPU acceleration typically becomes beneficial for large instances.
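Putting the three steps together, a minimal end-to-end sketch is shown below. The module paths, keyword names, parameter keys, and the GPUModel constructor signature are assumptions; main.py is the authoritative reference.

```python
# Hypothetical sketch of the main.py workflow; module paths and the GPUModel
# call signature are assumptions.
import numpy as np
from instance import Instance   # assumed module path
from gpu_model import GPUModel  # assumed module path

# Tiny illustrative BIP: min <c, x> subject to Ax >= b over x in {0,1}^3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([2.0, 1.0, 3.0])

# Step 1: instance construction (keyword names are assumptions).
inst = Instance.from_matrix_vector(Q=None, A=A, B=None, b=b, d=None,
                                   c=c, c0=0.0, sense="min")

# Step 2: model parameters, as documented in main.py (keys are placeholders).
params = {"max_iters": 10_000, "seed": 0}

# Step 3: instantiate and run the GPU solver.
model = GPUModel(inst, params)
model.run()
```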

Instance Construction Methods

The Instance class supports:

  1. from_matrix_vector: directly supply $(Q, A, B, b, d, c, c_0)$ together with sense="min" or sense="max"; all inputs may be None except c.
  2. from_data_file: load parameters from files saved previously via save().
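For illustration, a save/load round trip might look as follows; the path argument and keyword names are assumptions.

```python
import numpy as np
from instance import Instance  # assumed module path

# Only c is required; every other parameter may be None.
c = np.array([2.0, 1.0, 3.0])
inst = Instance.from_matrix_vector(Q=None, A=None, B=None, b=None, d=None,
                                   c=c, c0=0.0, sense="max")

# Persist the parameters, then reload them later via from_data_file.
inst.save("insts/my_instance")                        # path is an assumption
inst2 = Instance.from_data_file("insts/my_instance")
```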

Handling Equality Constraints

Equality constraints can reduce sampling efficiency. We currently offer three options: totally unimodular (TU) reformulation, customized sampling, and monotone relaxation. Usage notes are below; details appear in the preprint.

TU Reformulation

Most useful when $B$ is TU.

  1. Pass TU_index=(I, J) to from_matrix_vector, where I lists the row indices of $B$ (e.g., I = list(range(B.shape[0])) to select all rows) and J lists the column indices of $B_I$ that form an invertible submatrix.
  2. Set TU_reform = 2 in the model parameters.
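A hedged sketch of this setup follows; TU_index and TU_reform are taken from the steps above, while the data, keyword names, and the parameter-dictionary style are assumptions.

```python
import numpy as np
from instance import Instance  # assumed module path

# B is totally unimodular; columns [0, 1] of B_I form the invertible
# submatrix [[1, 1], [0, 1]] (determinant 1).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([1.0, 1.0])
c = np.array([2.0, 1.0, 3.0])

I = list(range(B.shape[0]))  # select all rows of B
J = [0, 1]                   # columns of B_I forming an invertible submatrix

inst = Instance.from_matrix_vector(Q=None, A=None, B=B, b=None, d=d,
                                   c=c, c0=0.0, sense="min",
                                   TU_index=(I, J))
params = {"TU_reform": 2}  # plus the remaining model parameters from main.py
```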

Customized Sampling

Provide a function sample_func(self, x: torch.Tensor, best_val: torch.Tensor) -> (int, torch.Tensor, torch.Tensor) that consumes the current fractional solution $x \in [0,1]^n$ and incumbent value best_val, generates candidates, and returns the best. See sampling_algos.py for documentation and examples.
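For illustration, a minimal sampler matching the documented signature might look like the following. The random-rounding scheme, the self.objective helper, and the reading of the returned int as a sample count are all assumptions; sampling_algos.py is the authoritative reference.

```python
import torch

def sample_func(self, x: torch.Tensor, best_val: torch.Tensor):
    # Draw K independent random roundings of the fractional point x in [0,1]^n.
    K = 128
    cand = (torch.rand(K, x.shape[-1], device=x.device) < x).to(x.dtype)
    # Evaluate each binary candidate; self.objective is a hypothetical helper.
    vals = self.objective(cand)
    best = torch.argmin(vals)
    # Return the sample count, the best candidate, and its objective value.
    return K, cand[best], vals[best]
```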

Monotone Relaxation

When applicable, replace equalities by inequalities to admit supersets/subsets of the original feasible set. Repair the best incumbent post hoc or incorporate repairs during GPU iterations (try this only if the repair is GPU-efficient).
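As a sketch, relaxing $Bx = d$ to $Bx \ge d$ moves the equality rows into the inequality block and admits a superset of the original feasible set; the data and keyword names below are illustrative.

```python
import numpy as np
from instance import Instance  # assumed module path

A = np.array([[1.0, 0.0, 1.0]]); b = np.array([1.0])  # inequality block
B = np.array([[1.0, 1.0, 0.0]]); d = np.array([1.0])  # equality block
c = np.array([2.0, 1.0, 3.0])

# Relax Bx = d to Bx >= d by appending the equality rows to (A, b).
A_relax = np.vstack([A, B])
b_relax = np.concatenate([b, d])
inst = Instance.from_matrix_vector(Q=None, A=A_relax, B=None, b=b_relax,
                                   d=None, c=c, c0=0.0, sense="min")
# After the run, repair the best incumbent so that Bx = d holds exactly
# (the repair step is problem-specific).
```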

Testing Instances

Six small test instances are provided in the insts folder. Larger test instances can be downloaded from this Hugging Face repository. Note that the largest instances may require substantial GPU memory to solve.
