A domain-specific ML library, generally used for training the NNUE-style networks found in many of the strongest chess engines in the world, thanks to its best-in-class performance, chess-specific tooling, and ease of use.
Before attempting to use bullet, check out the docs. They contain all the main information about building bullet, managing training data, and the network output format.
Most people simply clone the repo and edit one of the examples to their taste.
If you want to create your own example file (to ease pulling from upstream), you need to register it in `bullet_lib`'s Cargo.toml, as in the sketch below.
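For instance, a hypothetical example named `my_net` living at `examples/my_net.rs` (both names chosen purely for illustration) could be registered with a standard Cargo `[[example]]` entry, roughly:

```toml
# Hypothetical entry in bullet_lib's Cargo.toml; adjust name/path to your file.
[[example]]
name = "my_net"
path = "examples/my_net.rs"
```

The existing example declarations in that Cargo.toml are the safest guide for the exact fields required.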
Alternatively, import the `bullet_lib` crate with

```toml
bullet = { git = "https://github.com/jw1912/bullet", package = "bullet_lib" }
```

Specific API documentation is covered by Rust's docstrings. You can generate local documentation with `cargo doc`.
- `bullet_core`
    - An ML framework that is generic over backends
    - A network graph is constructed using `GraphBuilder`, which internally generates a `GraphIR`
    - Optimisation passes are performed on the `GraphIR`
    - The `GraphIR` is then compiled into a `Graph<D: Device>`, for a specific backend device
        - Upon which forwards and backwards passes, editing weights/inputs, etc. may be performed
    - A small set of (composable) optimisers are included that ingest a graph and provide update methods for it
    - A token single-threaded CPU backend is included for verifying correctness of the crate and other backend implementations
    - See the MNIST example for using `bullet_core` as a general-purpose ML framework
- `bullet_cuda_backend`
    - A working but incomplete CUDA backend rewrite, not currently suitable for serious use
- `bullet_hip_backend`
    - Currently contains both the HIP (for AMD GPUs) and CUDA backends. Enable the `hip` feature to use the HIP backend
- `bullet_lib`
    - Provides a high-level wrapper around the above crates, specifically for easily training networks for chess (and other games, e.g. Ataxx)
    - Which backend is used is dictated by the feature flags you pass (see the sketch after this list):
        - By default the CUDA backend from `bullet_hip_backend` is used; do not pass any feature flags if you want to use the CUDA backend
        - Enable the `hip` feature to use the HIP backend (only if you have an AMD card)
        - Read the documentation for more specific instructions
    - Value network training for games with `Trainer`
        - The simple example demonstrates the ease of training the simplest NNUE architectures
        - The progression examples show how to incrementally improve your NNUE architecture
- `bullet-utils`
    - Various utilities, mostly to do with handling data
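As a rough sketch of the backend selection described under `bullet_lib` above: when importing the crate via Cargo as shown earlier, the HIP backend would be requested by enabling the `hip` feature on the dependency, using standard Cargo feature syntax. This assumes you are consuming `bullet_lib` as a dependency rather than running the bundled examples; consult the docs for the exact build commands in that case.

```toml
# Sketch: requesting the HIP backend (AMD GPUs) when depending on bullet_lib.
# Omit `features` entirely to stay on the default CUDA backend.
bullet = { git = "https://github.com/jw1912/bullet", package = "bullet_lib", features = ["hip"] }
```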
- Please open an issue to file any bug reports/feature requests.
- Feel free to use the dedicated `#bullet` channel in the Engine Programming discord server if you run into any issues.
- For general training discussion, the Engine Programming non-`#bullet` channels are appropriate, or `#engines-dev` in the Stockfish discord.