TorchANI 2.0 is an open-source library that supports training, development, and research of ANI-style neural network interatomic potentials. It was originally developed and is currently maintained by the Roitberg group. For information and examples, please see the comprehensive documentation.
If you were using a previous version of TorchANI and your code does not work with TorchANI 2.0, check out the migration guide; there are very few breaking changes, and most code should work with minimal modifications. If you can't figure something out, please open a GitHub issue; we are here to help!
If you find a bug in TorchANI, or have a feature request, also feel free to open a GitHub issue. TorchANI 2.0 is currently tested against PyTorch 2.8 and CUDA 12.8.
If you find this work useful please cite the following articles:
- TorchANI 2.0: An extensible, high performance library for the design, training, and use of NN-IPs
Preprint, re-implementation of TorchANI enabling this interface: https://chemrxiv.org/engage/chemrxiv/article-details/6890d92523be8e43d6b9bbba
- TorchANI: A Free and Open Source PyTorch-Based Deep Learning Implementation of the ANI Neural Network Potentials
Original TorchANI implementation: https://pubs.acs.org/doi/10.1021/acs.jcim.0c00451
To run molecular dynamics (full ML or ML/MM) with Amber (sander or pmemd) check out the TorchANI-Amber interface, and the relevant publications:
- TorchANI-Amber: Bridging neural network potentials and classical biomolecular simulations
Preprint, main TorchANI-Amber article: https://chemrxiv.org/engage/chemrxiv/article-details/68a63e8b728bf9025e64ee01
- Advancing Multiscale Molecular Modeling with Machine Learning-Derived Electrostatics
For the ML/MM capabilities: https://pubs.acs.org/doi/10.1021/acs.jctc.4c01792
Coming Soon!
To build and install TorchANI directly from the GitHub repo do the following:
# Clone the repo and cd to the directory
git clone https://github.com/aiqm/torchani.git
cd ./torchani
# Create a conda (or mamba) environment
# Note that environment.yaml contains many optional dependencies needed to
# build the compiled extensions, build the documentation, and run tests and tools
# You can comment these out if you are not planning to do that
conda env create -f ./environment.yaml
Instead of using a `conda` (or `mamba`) environment you can use a python `venv`, and install the torchani optional dependencies by running `pip install -r dev_requirements.txt`.
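As a sketch, the `venv` route could look as follows (the environment name `.venv` is an arbitrary choice, not mandated by the project):

```shell
# Create and activate a virtual environment (alternative to conda/mamba)
python -m venv .venv
source .venv/bin/activate
# Install the optional development dependencies
pip install -r dev_requirements.txt
```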
Now you have two options, depending on whether you want to install the torchani compiled extensions. To install torchani with no compiled extensions run:
pip install --no-deps -v .
To install torchani with the cuAEV and MNP compiled extensions run instead:
# Use 'ext-all-sms' instead of 'ext' if you want to build for all possible GPUs
pip install --config-settings=--global-option=ext --no-build-isolation --no-deps -v .
In both cases you can add the editable, `-e`, flag after the verbose, `-v`, flag if you want an editable install (for developers). The `-v` flag can of course be omitted, but it is sometimes handy to have some extra information about the installation process.
After this you can perform some optional steps if you installed the required dev dependencies:
# Download files needed for testing and building the docs (optional)
bash ./download-dev-data.sh
# Build the documentation (optional)
sphinx-build docs/src docs/build
# Manually run unit tests (optional)
cd ./tests
pytest -v .
This process works for most use cases, but for more details regarding building the CUDA and C++ extensions refer to TorchANI CSRC.
Note that there is no CUDA support on macOS, and TorchANI is untested with Apple Metal Performance Shaders (MPS). The `environment.yaml` file needs slight modifications if installing on macOS. Please consult the corresponding file and modify it before creating the conda environment.
TorchANI can be run on CUDA-enabled GPUs. This is highly recommended unless doing simple debugging or tests; if you don't run TorchANI on a GPU, expect severely degraded performance. TorchANI is untested with AMD GPUs (ROCm / HIP).
A CUDA extension for speeding up AEV calculations and a C++ extension for parallelizing networks (MNP or Multi Net Parallel) using OpenMP are compiled by default in the conda package. They have to be built manually if installed from GitHub.
TorchANI provides an executable script, `torchani`, with some utilities. Check usage by calling `torchani --help`.
Please cite the following paper if you use TorchANI:
- Xiang Gao, Farhad Ramezanghorbani, Olexandr Isayev, Justin S. Smith, and
Adrian E. Roitberg. TorchANI: A Free and Open Source PyTorch-Based Deep
Learning Implementation of the ANI Neural Network Potentials. Journal of
Chemical Information and Modeling 2020, 60 (7), 3408-3415.
- Refer to isayev/ASE_ANI for ANI model references.
- Never commit to the master branch directly. If you need to change something, create a new branch and submit a PR on GitHub.
- All the tests on GitHub must pass before your PR can be merged.
- Code review is required before merging a pull request.
The CUDA libraries specified by the pytorch-cuda metapackage are not enough to build the extensions; the `*-dev` versions with the headers are required. We also pin the version of `nvcc`, since `pytorch` can't directly compile extensions with newer `nvcc`. We also pin the version of `cccl` due to torch's usage of thrust. Explicitly specifying `setuptools` and `setuptools-scm` is required since extensions have to be built with `--no-build-isolation`. Finally, `g++` and `gcc` compilers that support C++17 are required for compilation (compiler versions are pinned to ensure reproducibility). The required conda pkgs are then (sans the version constraints):
- setuptools
- setuptools-scm
- gxx_linux-64
- gcc_linux-64
- nvidia::cuda-libraries-dev
- nvidia::cuda-cccl
- nvidia::cuda-nvcc
The conda package can be built locally using the recipe in `./recipe`, by running:
cd ./torchani
conda install conda-build conda-verify
mkdir ./conda-pkgs/ # This dir must exist before running conda-build
conda build \
-c pytorch -c nvidia -c conda-forge \
--no-anaconda-upload \
--output-folder ./conda-pkgs/ \
./recipe
The `meta.yaml` in the recipe assumes that the extensions are built using the system's CUDA Toolkit, located in `/usr/local/cuda`. If this is not possible, add the following dependencies to the `host` environment:
nvidia::cuda-libraries-dev={{ cuda }}
nvidia::cuda-nvcc={{ cuda }}
nvidia::cuda-cccl={{ cuda }}
and remove `cuda_home=/usr/local/cuda` from the build script. Note that doing this may significantly increase build time.
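For illustration, the `host` section of the recipe's `meta.yaml` with these dependencies added might look like the sketch below; the surrounding entries (`python`, `setuptools`, `setuptools-scm`) are placeholders, not a verbatim copy of the actual recipe:

```yaml
requirements:
  host:
    - python
    - setuptools
    - setuptools-scm
    # Added only when the system CUDA Toolkit in /usr/local/cuda cannot be used:
    - nvidia::cuda-libraries-dev={{ cuda }}
    - nvidia::cuda-nvcc={{ cuda }}
    - nvidia::cuda-cccl={{ cuda }}
```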
The CI (GitHub Actions workflow) that tests that the conda pkg builds correctly runs only:
- on pull requests whose branch name contains the string `conda`

The workflow that deploys the conda pkg to the internal server runs only:
- on the default branch, at 00:00:00 every day
- on pull requests whose branch name contains both the strings `conda` and `release`