Everything you could wish for in a library called RoboPoker. Full suite of data structures, algorithms, solvers, ML models, and more.


# robopoker

A Rust toolkit for game-theoretically optimal poker strategies, implementing state-of-the-art algorithms for No-Limit Texas Hold'em with functional parity to Pluribus [1].

## Visual Tour

Training progress, Monte Carlo tree search, strategy growth, and equity distributions (image gallery in the repository).

## Features

  • Fastest open-source hand evaluator - nanosecond-scale evaluation outperforming the Cactus Kev evaluator
  • Strategic abstraction - Hierarchical k-means clustering of 3.1T poker situations
  • Optimal transport - Earth Mover's Distance via Sinkhorn algorithm
  • MCCFR solver - External sampling with dynamic tree construction
  • PostgreSQL persistence - Binary format serialization for efficiency
  • Short deck support - 36-card variant with adjusted rankings

## Quick Start

Add robopoker to your Cargo.toml:

```toml
[dependencies]
rbp = "1.0"

# Or individual crates:
rbp-cards = "1.0"
rbp-gameplay = "1.0"
rbp-mccfr = "1.0"
```

### Basic Usage

```rust
use rbp::cards::*;
use rbp::gameplay::*;

// Create a hand and evaluate it
let hand = Hand::from("AcKsQhJdTc9h8s");
let strength = hand.evaluate();

// Work with observations
let obs = Observation::from(Street::Flop);
let equity = obs.equity();
```

## Crate Overview

| Crate | Description |
|---|---|
| rbp | Facade re-exporting all public crates |
| rbp-core | Type aliases, constants, DTOs, shared traits |
| rbp-cards | Card primitives, hand evaluation, equity |
| rbp-transport | Optimal transport (Sinkhorn, EMD) |
| rbp-mccfr | Game-agnostic CFR framework |
| rbp-gameplay | Poker game engine |
| rbp-clustering | K-means abstraction |
| rbp-nlhe | No-Limit Hold'em solver |
| rbp-database | PostgreSQL persistence layer |
| rbp-auth | JWT + Argon2 authentication |
| rbp-gameroom | Async game coordinator, players, hand history |
| rbp-server | Unified HTTP server (analysis API + game hosting) |
| rbp-autotrain | Training orchestration with distributed workers |

## Architecture

### Core Layer

rbp-cards — Card representation, hand evaluation, and strategic primitives:

  • Bijective card representations (u8/u16/u32/u64) for efficient operations
  • Lazy hand strength evaluation in nanoseconds
  • Equity calculation via enumeration and Monte Carlo
  • Exhaustive iteration over cards, hands, decks, and observations
  • Short deck (36-card) variant support
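
The bullet on bijective representations can be sketched with a one-bit-per-card `u64` layout. This is an illustrative sketch only: the crate's actual bit layout and function names are not shown here, and the `rank * 4 + suit` indexing is an assumption for demonstration.

```rust
// Illustrative one-bit-per-card u64 hand representation.
// Card index = rank * 4 + suit, with ranks 0..13 and suits 0..4;
// a hand is then the bitwise union of its card bits.

fn card_bit(rank: u8, suit: u8) -> u64 {
    1u64 << (rank * 4 + suit)
}

fn add_card(hand: u64, rank: u8, suit: u8) -> u64 {
    hand | card_bit(rank, suit)
}

/// Cards not yet dealt: the full 52-card mask minus the hand.
fn remaining(hand: u64) -> u64 {
    ((1u64 << 52) - 1) & !hand
}
```

With this encoding, set union, intersection, and complement are single bitwise operations, and `count_ones()` gives hand size in one instruction, which is what makes nanosecond-scale evaluation plausible.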

rbp-transport — Optimal transport algorithms:

  • Sinkhorn iteration for near-linear Wasserstein approximation [5]
  • Greenhorn optimization for sparse distributions
  • Generic Measure abstraction for arbitrary metric spaces
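
The Sinkhorn iteration above can be sketched on dense histograms. This is a minimal standalone sketch of entropy-regularized optimal transport, not the crate's `Measure` API: scale vectors `u`, `v` are alternately updated so the coupling `diag(u) * K * diag(v)` matches the source and target marginals.

```rust
// Sinkhorn iteration for entropy-regularized optimal transport.
// Given source weights a, target weights b, and a cost matrix,
// build the Gibbs kernel K = exp(-cost / eps) and alternately
// rescale rows and columns until the coupling's marginals match.
fn sinkhorn(a: &[f64], b: &[f64], cost: &[Vec<f64>], eps: f64, iters: usize) -> Vec<Vec<f64>> {
    let (n, m) = (a.len(), b.len());
    let k: Vec<Vec<f64>> = cost
        .iter()
        .map(|row| row.iter().map(|&c| (-c / eps).exp()).collect())
        .collect();
    let (mut u, mut v) = (vec![1.0; n], vec![1.0; m]);
    for _ in 0..iters {
        // u <- a ./ (K v), then v <- b ./ (K^T u)
        for i in 0..n {
            let kv: f64 = (0..m).map(|j| k[i][j] * v[j]).sum();
            u[i] = a[i] / kv;
        }
        for j in 0..m {
            let ktu: f64 = (0..n).map(|i| k[i][j] * u[i]).sum();
            v[j] = b[j] / ktu;
        }
    }
    // Coupling P[i][j] = u[i] * K[i][j] * v[j]
    (0..n)
        .map(|i| (0..m).map(|j| u[i] * k[i][j] * v[j]).collect())
        .collect()
}
```

Summing `P[i][j] * cost[i][j]` over the returned coupling then approximates the Earth Mover's Distance, with the approximation sharpening as `eps` shrinks.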

rbp-mccfr — Game-agnostic CFR framework:

  • State primitives: Turn, Edge, Game, Info, Tree
  • Strategy representation: Encoder, Profile, InfoSet
  • Training: Solver trait with pluggable algorithms
  • Schemes: RegretSchedule, PolicySchedule, SamplingScheme
  • Subgame solving with safe search
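
The core update that every CFR variant builds on is regret matching: play each action in proportion to its positive cumulative regret. A minimal sketch (illustrative, not the crate's `Profile` implementation):

```rust
// Regret matching: the current strategy is proportional to the
// positive part of cumulative regrets; when no action has positive
// regret, fall back to the uniform strategy.
fn regret_matching(regrets: &[f64]) -> Vec<f64> {
    let positive: Vec<f64> = regrets.iter().map(|&r| r.max(0.0)).collect();
    let total: f64 = positive.iter().sum();
    if total > 0.0 {
        positive.iter().map(|&r| r / total).collect()
    } else {
        vec![1.0 / regrets.len() as f64; regrets.len()]
    }
}
```

The pluggable `RegretSchedule` and `PolicySchedule` schemes listed above determine how those cumulative regrets and the averaged strategy are weighted across iterations.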

### Domain Layer

rbp-gameplay — Complete poker game engine:

  • Full No-Limit Texas Hold'em rules
  • Complex showdown handling (side pots, all-ins, ties)
  • Bet sizing abstraction via Size enum (SPR(n,d) / BBs(n))
  • Clean Node/Edge/Tree game state representation
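
The `Size` enum's variants are named above; how they map to chips is an assumption in this sketch (here `SPR(n, d)` is read as a rational fraction n/d of the current pot, and `BBs(n)` as a count of big blinds):

```rust
// Hypothetical reading of the Size bet-sizing abstraction.
// The variant names come from the README; the chip semantics
// below are illustrative assumptions, not the crate's definition.
enum Size {
    SPR(u32, u32), // fraction n/d of the current pot (assumed)
    BBs(u32),      // n big blinds (assumed)
}

fn to_chips(size: &Size, pot: u32, big_blind: u32) -> u32 {
    match size {
        Size::SPR(n, d) => pot * n / d,
        Size::BBs(n) => n * big_blind,
    }
}
```

Encoding bet sizes as small enum values rather than raw chip counts is what keeps the action space tractable for tree construction.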

rbp-clustering — Hand abstraction via clustering:

  • Hierarchical k-means with Elkan acceleration
  • Earth Mover's Distance between distributions
  • Isomorphic exhaustion of 3.1T situations [4]
  • PostgreSQL binary persistence
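
For one-dimensional histograms over equally spaced bins (such as equity distributions), Earth Mover's Distance reduces to the L1 distance between cumulative distributions. A standalone sketch of that special case (the crate computes EMD via optimal transport in general):

```rust
// 1-D EMD between two histograms over the same equally spaced bins:
// accumulate both CDFs and sum the absolute differences.
fn emd_1d(p: &[f64], q: &[f64]) -> f64 {
    let (mut cdf_p, mut cdf_q, mut dist) = (0.0, 0.0, 0.0);
    for (&a, &b) in p.iter().zip(q) {
        cdf_p += a;
        cdf_q += b;
        dist += (cdf_p - cdf_q).abs();
    }
    dist
}
```

This is the distance k-means minimizes against when grouping strategically similar hands: two hands are close when their equity mass sits in nearby bins, not merely in identical bins.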

rbp-nlhe — Concrete NLHE solver:

  • NlheSolver<R, W, S> with pluggable regret/policy/sampling
  • NlheEncoder for state→info mapping
  • NlheProfile for regret/strategy storage
  • Production config: Flagship type alias

### Infrastructure Layer

rbp-database — PostgreSQL persistence:

  • Binary format serialization for efficient storage
  • Schema definitions and streaming I/O via COPY IN protocol
  • Source trait for SELECT, Sink trait for INSERT/UPDATE
  • Training stage tracking and validation

rbp-gameroom — Async game coordination:

  • Room-based multiplayer game management
  • Pluggable player implementations (AI, human, network)
  • Hand history recording and replay

rbp-server — Unified HTTP server:

  • Analysis API for querying training results
  • Game hosting with WebSocket support
  • Authentication integration

rbp-autotrain — Training orchestration:

  • Two-phase: clustering then MCCFR
  • Fast (in-memory) and slow (distributed) modes
  • Graceful interrupts and resumable state
  • Timed training via TRAIN_DURATION

## Training Pipeline

  1. Hierarchical Abstraction (per street: river → turn → flop → preflop):

    • Generate isomorphic hand clusters
    • Initialize k-means centroids via k-means++ [2]
    • Run clustering to group strategically similar hands
    • Calculate EMD metrics via optimal transport [5]
    • Save abstractions to PostgreSQL
  2. MCCFR Training [3]:

    • Sample game trajectories via external sampling
    • Update regret values and counterfactual values
    • Accumulate strategy with linear weighting
    • Checkpoint blueprint strategy to database
  3. Real-time Search (in progress):

    • Depth-limited subgame solving [10]
    • Blueprint strategy as prior
    • Targeted Monte Carlo rollouts
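
The "linear weighting" in step 2 means iteration t contributes to the average strategy with weight t, so later (better) strategies dominate the blueprint. A sketch of just that accumulation step, with hypothetical names:

```rust
// Linear averaging of per-iteration strategies: iteration t (1-based)
// contributes with weight t, so the average is
// sum(t * sigma_t) / sum(t). Illustrative sketch only.
fn average_strategy(per_iteration: &[Vec<f64>]) -> Vec<f64> {
    let actions = per_iteration[0].len();
    let mut acc = vec![0.0; actions];
    let mut weight_sum = 0.0;
    for (t, sigma) in per_iteration.iter().enumerate() {
        let w = (t + 1) as f64; // linear weight
        weight_sum += w;
        for a in 0..actions {
            acc[a] += w * sigma[a];
        }
    }
    acc.iter().map(|x| x / weight_sum).collect()
}
```

In practice the solver accumulates the weighted sum incrementally during training and normalizes only when the blueprint is checkpointed, rather than storing every iteration's strategy.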

## System Requirements

| Street | Abstraction Size | Metric Size |
|---|---|---|
| Preflop | 4 KB | 301 KB |
| Flop | 32 MB | 175 KB |
| Turn | 347 MB | 175 KB |
| River | 3.02 GB | - |

Recommended:

  • Training: 16 vCPU, 120GB RAM
  • Database: PostgreSQL 14+ with 8 vCPU, 64GB RAM
  • Analysis: 1 vCPU, 4GB RAM

## Feature Flags

| Feature | Description |
|---|---|
| database | PostgreSQL integration |
| server | Server dependencies (Actix, Tokio, Rayon) |
| shortdeck | 36-card short deck variant |
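
Features are enabled with the standard Cargo syntax; whether the facade crate forwards each flag to the relevant sub-crates is an assumption here:

```toml
[dependencies]
rbp = { version = "1.0", features = ["database", "shortdeck"] }
```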

## Building

```sh
# Build all crates
cargo build --workspace

# Build with database features
cargo build --workspace --features database

# Run tests
cargo test --workspace

# Generate documentation
cargo doc --workspace --no-deps --open
```

## References

  1. Superhuman AI for multiplayer poker. Science, 2019.
  2. Potential-Aware Imperfect-Recall Abstraction with Earth Mover's Distance in Imperfect-Information Games. AAAI, 2014.
  3. Regret Minimization in Games with Incomplete Information. NIPS, 2007.
  4. A Fast and Optimal Hand Isomorphism Algorithm. AAAI, 2013.
  5. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. NIPS, 2018.
  6. Solving Imperfect-Information Games via Discounted Regret Minimization. AAAI, 2019.
  7. Action Translation in Extensive-Form Games with Large Action Spaces. IJCAI, 2013.
  8. Discretization of Continuous Action Spaces in Extensive-Form Games. AAMAS, 2015.
  9. Regret-Based Pruning in Extensive-Form Games. NIPS, 2015.
  10. Depth-Limited Solving for Imperfect-Information Games. NeurIPS, 2018.
  11. Reduced Space and Faster Convergence in Imperfect-Information Games via Pruning. ICML, 2017.
  12. Safe and Nested Subgame Solving for Imperfect-Information Games. NIPS, 2017.

## License

MIT License - see LICENSE for details.
