A PyTorch implementation of DeepMind's MuZero agent.
- Environment and Requirements
- Code Structure
- Author's Notes
- Quick Start
- Start Training
- Monitoring with Tensorboard
- Evaluate Agents
- Reference Papers
- Reference Code
- License
- Citing our work
- Python 3.9.12
- pip 22.0.3
- PyTorch 1.11.0
- gym 0.23.1
- numpy 1.21.6
- Each of the `atari`, `classic`, `gomoku`, and `tictactoe` directories contains the following modules:
  - `run_training.py` trains the agent for a specific game/control problem.
  - `eval_agent.py` evaluates the trained agent by loading it from a checkpoint.
- `config.py` contains the MuZero configurations for the different games/control problems.
- The `games` directory contains the custom Gomoku and Tic-Tac-Toe board game environments implemented with OpenAI Gym.
- `gym_env.py` contains OpenAI Gym wrappers for both Atari and classic control problems.
- `mcts.py` contains the MCTS node and the UCT tree-search algorithm.
- `pipeline.py` contains the functions that run the self-play, training, and evaluation loops.
- `util.py` contains the functions for value and reward target transform and rescaling (a short sketch of this transform follows the list).
- `replay.py` contains the experience replay class.
- `trackers.py` contains the functions for monitoring training progress with TensorBoard.
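For reference, MuZero squashes value and reward targets with an invertible scaling transform before converting them to a categorical representation. Below is a minimal sketch of that transform and its inverse, following the formula from the MuZero paper's appendix with ε = 0.001; the function names are illustrative and may not match what `util.py` actually defines.

```python
import torch

def signed_hyperbolic(x: torch.Tensor, eps: float = 0.001) -> torch.Tensor:
    """Squash a raw scalar target: h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x."""
    return torch.sign(x) * (torch.sqrt(torch.abs(x) + 1.0) - 1.0) + eps * x

def signed_parabolic(x: torch.Tensor, eps: float = 0.001) -> torch.Tensor:
    """Invert signed_hyperbolic, mapping a transformed value back to the raw scale."""
    z = torch.sqrt(1.0 + 4.0 * eps * (torch.abs(x) + 1.0 + eps)) - 1.0
    return torch.sign(x) * ((z / (2.0 * eps)) ** 2 - 1.0)
```

Squashing the targets this way keeps value and reward magnitudes in a similar range across environments with very different reward scales, which helps the categorical value and reward heads train stably.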
- Only tested on classic control tasks and Tic-Tac-Toe.
- Hyper-parameters are not fine-tuned.
- We use uniform random experience replay, as it appears to work better than prioritized replay in our experiments (see the sketch below).
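For illustration, a uniform random replay can be as simple as a fixed-capacity circular buffer with uniform sampling. The class and method names below are assumptions and do not necessarily mirror `replay.py`.

```python
import random
from typing import Any, List

class UniformReplay:
    """Minimal uniform random experience replay (illustrative sketch only)."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._storage: List[Any] = []
        self._index = 0  # next slot to overwrite once the buffer is full

    def add(self, item: Any) -> None:
        if len(self._storage) < self.capacity:
            self._storage.append(item)
        else:
            # Overwrite the oldest entry in circular order.
            self._storage[self._index] = item
        self._index = (self._index + 1) % self.capacity

    def sample(self, batch_size: int) -> List[Any]:
        # Every stored item is equally likely to be drawn.
        return random.sample(self._storage, batch_size)

    def __len__(self) -> int:
        return len(self._storage)
```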
# install homebrew, skip this step if already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# upgrade pip
python3 -m pip install --upgrade pip setuptools
# install swig, which is required for Box2D
brew install swig
# install ffmpeg for recording agent self-play
brew install ffmpeg
# install snappy for compressing numpy arrays on M1 Macs
brew install snappy
CPPFLAGS="-I/opt/homebrew/include -L/opt/homebrew/lib" pip3 install python-snappy
pip3 install -r requirements.txt
# install swig, which is required for Box2D
sudo apt install swig
# install ffmpeg for recording agent self-play
sudo apt-get install ffmpeg
# upgrade pip
python3 -m pip install --upgrade pip setuptools
pip3 install -r requirements.txt
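After installation you can quickly check that the core dependencies import correctly from a Python shell (this snippet is just a sanity check, not part of the repository):

```python
# Optional sanity check for the main dependencies listed above.
import gym
import torch

print("torch", torch.__version__)
print("gym", gym.__version__)
```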
# Training on classic control problems
python3 -m muzero.classic.run_training
python3 -m muzero.classic.run_training --environment_name=LunarLander-v2
# Training on Atari
python3 -m muzero.atari.run_training
# Training on Tic-Tac-Toe
python3 -m muzero.tictactoe.run_training
# Training on Gomoku
python3 -m muzero.gomoku.run_training
tensorboard --logdir=runs
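The training pipeline writes its statistics under the `runs` directory, which is what the command above points TensorBoard at. As a rough illustration of how scalars can be logged with PyTorch's `SummaryWriter` (the tags and log directory here are assumptions, not necessarily what `trackers.py` uses):

```python
from torch.utils.tensorboard import SummaryWriter

# Illustrative only: tag names and directory layout are placeholders.
writer = SummaryWriter(log_dir="runs/example_run")
for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder standing in for the real training loss
    writer.add_scalar("train/loss", loss, global_step=step)
writer.close()
```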
Note: for two-player board games, evaluation runs in MuZero vs. MuZero mode.
To start playing a game, make sure you have a valid checkpoint file, then run one of the following commands:
python3 -m muzero.tictactoe.eval_agent
python3 -m muzero.classic.eval_agent
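Under the hood, evaluation restores the trained networks from a checkpoint before running episodes. The sketch below shows the general PyTorch pattern; the checkpoint path, checkpoint keys, and the tiny placeholder network are all hypothetical and do not mirror `eval_agent.py`.

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for the real MuZero networks."""

    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Linear(4, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

network = TinyNet()
# Save a checkpoint so this example is self-contained, then restore it.
torch.save({"network": network.state_dict()}, "/tmp/example.ckpt")

checkpoint = torch.load("/tmp/example.ckpt", map_location="cpu")
network.load_state_dict(checkpoint["network"])
network.eval()  # switch to inference mode before running evaluation episodes

with torch.no_grad():
    print(network(torch.zeros(1, 4)))
```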
This project is licensed under the Apache License, Version 2.0 (the "License"); see the LICENSE file for details.
If you reference or use our project in your research, please cite:
@software{muzero2022github,
title = {{MuZero}: A PyTorch implementation of DeepMind's MuZero agent},
author = {Michael Hu},
url = {https://github.com/michaelnny/muzero},
version = {1.0.0},
year = {2022},
}