A Rust toolkit for game-theoretically optimal poker strategies, implementing state-of-the-art algorithms for No-Limit Texas Hold'em with functional parity to Pluribus [1].
- **Fastest open-source hand evaluator** - nanosecond evaluation outperforming Cactus Kev
- **Strategic abstraction** - hierarchical k-means clustering of 3.1T poker situations
- **Optimal transport** - Earth Mover's Distance via the Sinkhorn algorithm
- **MCCFR solver** - external sampling with dynamic tree construction
- **PostgreSQL persistence** - binary format serialization for efficiency
- **Short deck support** - 36-card variant with adjusted rankings
Add robopoker to your `Cargo.toml`:

```toml
[dependencies]
rbp = "1.0"

# Or individual crates:
rbp-cards = "1.0"
rbp-gameplay = "1.0"
rbp-mccfr = "1.0"
```

```rust
use rbp::cards::*;
use rbp::gameplay::*;

// Create a hand and evaluate it
let hand = Hand::from("AcKsQhJdTc9h8s");
let strength = hand.evaluate();

// Work with observations
let obs = Observation::from(Street::Flop);
let equity = obs.equity();
```

| Crate | Description |
|---|---|
| `rbp` | Facade re-exporting all public crates |
| `rbp-core` | Type aliases, constants, DTOs, shared traits |
| `rbp-cards` | Card primitives, hand evaluation, equity |
| `rbp-transport` | Optimal transport (Sinkhorn, EMD) |
| `rbp-mccfr` | Game-agnostic CFR framework |
| `rbp-gameplay` | Poker game engine |
| `rbp-clustering` | K-means abstraction |
| `rbp-nlhe` | No-Limit Hold'em solver |
| `rbp-database` | PostgreSQL persistence layer |
| `rbp-auth` | JWT + Argon2 authentication |
| `rbp-gameroom` | Async game coordinator, players, hand history |
| `rbp-server` | Unified HTTP server (analysis API + game hosting) |
| `rbp-autotrain` | Training orchestration with distributed workers |
`rbp-cards` — Card representation, hand evaluation, and strategic primitives:
- Bijective card representations (`u8`/`u16`/`u32`/`u64`) for efficient operations
- Lazy hand strength evaluation in nanoseconds
- Equity calculation via enumeration and Monte Carlo
- Exhaustive iteration over cards, hands, decks, and observations
- Short deck (36-card) variant support
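As an illustration of what a bijective low-bit representation buys, here is a sketch under hypothetical names (not the crate's actual API): a card as a `u8` index and a hand as a `u64` bitset, so membership, union, and counting are single instructions.

```rust
/// Hypothetical card index: rank 0..13 (deuce..ace), suit 0..4.
fn card(rank: u8, suit: u8) -> u8 {
    rank * 4 + suit
}

/// A hand as a 64-bit set with one bit per card index.
fn hand_from_cards(cards: &[u8]) -> u64 {
    cards.iter().fold(0u64, |hand, &c| hand | (1u64 << c))
}

/// Counting cards in a hand is a single popcount instruction.
fn card_count(hand: u64) -> u32 {
    hand.count_ones()
}
```

Because the mapping is bijective, no two cards share a bit, which is what makes precomputed lookup tables in the style of Cactus Kev practical.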
`rbp-transport` — Optimal transport algorithms:
- Sinkhorn iteration for near-linear Wasserstein approximation [5]
- Greenhorn optimization for sparse distributions
- Generic `Measure` abstraction for arbitrary metric spaces
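For intuition, a minimal Sinkhorn iteration might look like the sketch below (assumed function names, not `rbp-transport`'s API): alternately rescale a Gibbs kernel until the transport plan's marginals match the two histograms.

```rust
/// Entropy-regularized optimal transport between histograms `a` and `b`
/// under cost matrix `cost`, via alternating Sinkhorn scaling.
fn sinkhorn(a: &[f64], b: &[f64], cost: &[Vec<f64>], eps: f64, iters: usize) -> Vec<Vec<f64>> {
    let (n, m) = (a.len(), b.len());
    // Gibbs kernel K = exp(-cost / eps)
    let k: Vec<Vec<f64>> = cost
        .iter()
        .map(|row| row.iter().map(|&c| (-c / eps).exp()).collect())
        .collect();
    let mut u = vec![1.0; n];
    let mut v = vec![1.0; m];
    for _ in 0..iters {
        // Match row marginals: u = a ./ (K v)
        for i in 0..n {
            let kv: f64 = (0..m).map(|j| k[i][j] * v[j]).sum();
            u[i] = a[i] / kv;
        }
        // Match column marginals: v = b ./ (Kᵀ u)
        for j in 0..m {
            let ktu: f64 = (0..n).map(|i| k[i][j] * u[i]).sum();
            v[j] = b[j] / ktu;
        }
    }
    // Transport plan P = diag(u) K diag(v)
    (0..n)
        .map(|i| (0..m).map(|j| u[i] * k[i][j] * v[j]).collect())
        .collect()
}
```

Each sweep is O(nm), which is what gives Sinkhorn its near-linear flavor compared to exact linear-programming solvers.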
`rbp-mccfr` — Game-agnostic CFR framework:
- State primitives: `Turn`, `Edge`, `Game`, `Info`, `Tree`
- Strategy representation: `Encoder`, `Profile`, `InfoSet`
- Training: `Solver` trait with pluggable algorithms
- Schemes: `RegretSchedule`, `PolicySchedule`, `SamplingScheme`
- Subgame solving with safe search
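The regret-matching rule at the core of CFR converts an infoset's cumulative regrets into a strategy; a self-contained sketch (a standalone function, not the crate's `Profile` API):

```rust
/// Regret matching: play each action in proportion to its positive
/// cumulative regret; fall back to uniform when no regret is positive.
fn regret_matching(regrets: &[f64]) -> Vec<f64> {
    let positive: Vec<f64> = regrets.iter().map(|&r| r.max(0.0)).collect();
    let total: f64 = positive.iter().sum();
    if total > 0.0 {
        positive.iter().map(|&r| r / total).collect()
    } else {
        vec![1.0 / regrets.len() as f64; regrets.len()]
    }
}
```

The schemes listed above decide how those regrets are discounted, how the average policy is weighted, and which trajectories get sampled; the matching step itself stays the same.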
`rbp-gameplay` — Complete poker game engine:
- Full No-Limit Texas Hold'em rules
- Complex showdown handling (side pots, all-ins, ties)
- Bet sizing abstraction via the `Size` enum (`SPR(n,d)` / `BBs(n)`)
- Clean Node/Edge/Tree game state representation
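Side pots are the trickiest part of showdown handling; a sketch of pot layering from per-player contributions (hypothetical names, not `rbp-gameplay`'s types):

```rust
/// Build one pot per distinct all-in level, each paired with the indices
/// of the players eligible to win it.
fn side_pots(contributions: &[u64]) -> Vec<(u64, Vec<usize>)> {
    let mut levels: Vec<u64> = contributions.iter().copied().filter(|&c| c > 0).collect();
    levels.sort_unstable();
    levels.dedup();
    let mut pots = Vec::new();
    let mut prev: u64 = 0;
    for level in levels {
        // Players who contributed at least this level can win this pot.
        let eligible: Vec<usize> = contributions
            .iter()
            .enumerate()
            .filter(|&(_, &c)| c >= level)
            .map(|(i, _)| i)
            .collect();
        // Each player funds this layer with their chips between levels.
        let amount: u64 = contributions
            .iter()
            .map(|&c| c.min(level).saturating_sub(prev))
            .sum();
        pots.push((amount, eligible));
        prev = level;
    }
    pots
}
```

For contributions `[100, 50, 100]` this yields a 150-chip main pot everyone can win and a 100-chip side pot contested only by players 0 and 2.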
`rbp-clustering` — Hand abstraction via clustering:
- Hierarchical k-means with Elkan acceleration
- Earth Mover's Distance between distributions
- Isomorphic exhaustion of 3.1T situations [4]
- PostgreSQL binary persistence
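Over ordered one-dimensional bins with unit ground distance, EMD reduces to the sum of absolute cumulative differences; a sketch of that special case (general metrics go through `rbp-transport`):

```rust
/// 1-D Earth Mover's Distance between two equal-mass histograms over
/// ordered bins with unit spacing.
fn emd_1d(p: &[f64], q: &[f64]) -> f64 {
    let mut cum = 0.0; // running surplus of p over q
    let mut dist = 0.0; // mass × distance moved so far
    for (a, b) in p.iter().zip(q) {
        cum += a - b;
        dist += cum.abs();
    }
    dist
}
```

Intuitively, the surplus at each bin is exactly the mass that must be carried one step to the right (or left), which is why equity histograms cluster well under this metric.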
`rbp-nlhe` — Concrete NLHE solver:
- `NlheSolver<R, W, S>` with pluggable regret/policy/sampling
- `NlheEncoder` for state→info mapping
- `NlheProfile` for regret/strategy storage
- Production config: `Flagship` type alias
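The pluggable design can be pictured as generics over scheme traits; the trait and struct names below are hypothetical stand-ins for the real `NlheSolver<R, W, S>` signature, with the `Linear` discount following linear CFR's t/(t+1) regret weighting.

```rust
/// Hypothetical scheme traits standing in for the solver's type parameters.
trait RegretSchedule {
    fn discount(&self, iteration: usize) -> f64;
}
trait PolicySchedule {
    fn weight(&self, iteration: usize) -> f64;
}
trait SamplingScheme {
    fn label(&self) -> &'static str;
}

/// Linear weighting of regrets and average policy.
struct Linear;
impl RegretSchedule for Linear {
    fn discount(&self, t: usize) -> f64 {
        t as f64 / (t as f64 + 1.0)
    }
}
impl PolicySchedule for Linear {
    fn weight(&self, t: usize) -> f64 {
        t as f64
    }
}

/// External sampling, the MCCFR variant named above.
struct External;
impl SamplingScheme for External {
    fn label(&self) -> &'static str {
        "external"
    }
}

/// Swapping any scheme is a type-level change, not a code change.
struct Solver<R: RegretSchedule, W: PolicySchedule, S: SamplingScheme> {
    regret: R,
    policy: W,
    sampling: S,
}
```

A production alias like `Flagship` would then pin one concrete combination of the three parameters.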
`rbp-database` — PostgreSQL persistence:
- Binary format serialization for efficient storage
- Schema definitions and streaming I/O via the `COPY IN` protocol
- `Source` trait for SELECT, `Sink` trait for INSERT/UPDATE
- Training stage tracking and validation
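One plausible shape for the read/write split (hypothetical trait definitions, not the crate's): `Source` yields rows, `Sink` absorbs them, and an in-memory implementation doubles as a test stand-in.

```rust
/// Reads rows out of storage (SELECT / COPY TO in PostgreSQL terms).
trait Source {
    type Row;
    fn select(&mut self) -> Vec<Self::Row>;
}

/// Writes rows into storage (INSERT / COPY IN in PostgreSQL terms).
trait Sink {
    type Row;
    fn insert(&mut self, rows: Vec<Self::Row>);
}

/// In-memory stand-in holding e.g. (infoset id, regret) pairs.
struct Memory {
    rows: Vec<(u64, f32)>,
}

impl Source for Memory {
    type Row = (u64, f32);
    fn select(&mut self) -> Vec<Self::Row> {
        self.rows.clone()
    }
}

impl Sink for Memory {
    type Row = (u64, f32);
    fn insert(&mut self, mut rows: Vec<Self::Row>) {
        self.rows.append(&mut rows);
    }
}
```

Keeping the two directions as separate traits lets training code stream checkpoints through a `Sink` while analysis code only ever needs a `Source`.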
`rbp-gameroom` — Async game coordination:
- Room-based multiplayer game management
- Pluggable player implementations (AI, human, network)
- Hand history recording and replay
`rbp-server` — Unified HTTP server:
- Analysis API for querying training results
- Game hosting with WebSocket support
- Authentication integration
`rbp-autotrain` — Training orchestration:
- Two-phase pipeline: clustering, then MCCFR
- Fast (in-memory) and slow (distributed) modes
- Graceful interrupts and resumable state
- Timed training via `TRAIN_DURATION`
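A time-bounded loop with a cooperative interrupt flag might look like this sketch (assumed semantics: callers parse `TRAIN_DURATION` into a `Duration` and install a signal handler that flips the flag):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::{Duration, Instant};

/// Flipped by a SIGINT handler in a real deployment.
static INTERRUPTED: AtomicBool = AtomicBool::new(false);

/// Run up to `max_iters` iterations until the time budget elapses or an
/// interrupt is requested; returns the number of iterations completed.
fn train(budget: Duration, max_iters: usize) -> usize {
    let start = Instant::now();
    let mut done = 0;
    while done < max_iters
        && start.elapsed() < budget
        && !INTERRUPTED.load(Ordering::Relaxed)
    {
        done += 1; // one MCCFR iteration would run here
    }
    done // caller checkpoints state here, making training resumable
}
```

Checking the flag between iterations, rather than killing the process, is what keeps the checkpointed state consistent and the run resumable.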
- Hierarchical Abstraction (per street: river → turn → flop → preflop):
  - Generate isomorphic hand clusters
  - Initialize k-means centroids via k-means++ [2]
  - Run clustering to group strategically similar hands
  - Calculate EMD metrics via optimal transport [5]
  - Save abstractions to PostgreSQL
- MCCFR Training [3]:
  - Sample game trajectories via external sampling
  - Update regret values and counterfactual values
  - Accumulate strategy with linear weighting
  - Checkpoint blueprint strategy to database
- Real-time Search (in progress):
  - Depth-limited subgame solving [10]
  - Blueprint strategy as prior
  - Targeted Monte Carlo rollouts
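The k-means++ seeding used in the abstraction phase can be sketched in one dimension with a toy PRNG; illustrative only, since the real pipeline seeds centroids over equity distributions under the EMD metric.

```rust
/// k-means++ seeding on 1-D points: first centroid uniform, each later
/// centroid sampled with probability proportional to its squared distance
/// from the nearest centroid chosen so far.
fn kmeanspp(points: &[f64], k: usize, mut seed: u64) -> Vec<f64> {
    // Toy linear congruential generator yielding uniform values in [0, max).
    let mut next = |max: f64| {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (seed >> 11) as f64 / (1u64 << 53) as f64 * max
    };
    let mut centroids = vec![points[next(points.len() as f64) as usize]];
    while centroids.len() < k {
        // Squared distance from each point to its nearest chosen centroid.
        let d2: Vec<f64> = points
            .iter()
            .map(|&p| {
                centroids
                    .iter()
                    .map(|&c| (p - c) * (p - c))
                    .fold(f64::INFINITY, f64::min)
            })
            .collect();
        // Sample the next centroid index proportionally to d².
        let mut r = next(d2.iter().sum());
        let mut idx = 0;
        for (i, &d) in d2.iter().enumerate() {
            if r <= d {
                idx = i;
                break;
            }
            r -= d;
        }
        centroids.push(points[idx]);
    }
    centroids
}
```

The d²-weighted sampling is what spreads initial centroids across well-separated groups of hands before Lloyd/Elkan iterations refine them.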
| Street | Abstraction Size | Metric Size |
|---|---|---|
| Preflop | 4 KB | 301 KB |
| Flop | 32 MB | 175 KB |
| Turn | 347 MB | 175 KB |
| River | 3.02 GB | - |
Recommended:
- Training: 16 vCPU, 120 GB RAM
- Database: PostgreSQL 14+ with 8 vCPU, 64 GB RAM
- Analysis: 1 vCPU, 4 GB RAM
| Feature | Description |
|---|---|
| `database` | PostgreSQL integration |
| `server` | Server dependencies (Actix, Tokio, Rayon) |
| `shortdeck` | 36-card short deck variant |
```sh
# Build all crates
cargo build --workspace

# Build with database features
cargo build --workspace --features database

# Run tests
cargo test --workspace

# Generate documentation
cargo doc --workspace --no-deps --open
```

1. (2019). Superhuman AI for multiplayer poker. (Science)
2. (2014). Potential-Aware Imperfect-Recall Abstraction with Earth Mover's Distance in Imperfect-Information Games. (AAAI)
3. (2007). Regret Minimization in Games with Incomplete Information. (NIPS)
4. (2013). A Fast and Optimal Hand Isomorphism Algorithm. (AAAI)
5. (2018). Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. (NIPS)
6. (2019). Solving Imperfect-Information Games via Discounted Regret Minimization. (AAAI)
7. (2013). Action Translation in Extensive-Form Games with Large Action Spaces. (IJCAI)
8. (2015). Discretization of Continuous Action Spaces in Extensive-Form Games. (AAMAS)
9. (2015). Regret-Based Pruning in Extensive-Form Games. (NIPS)
10. (2018). Depth-Limited Solving for Imperfect-Information Games. (NeurIPS)
11. (2017). Reduced Space and Faster Convergence in Imperfect-Information Games via Pruning. (ICML)
12. (2017). Safe and Nested Subgame Solving for Imperfect-Information Games. (NIPS)
MIT License - see LICENSE for details.