Install uv with the command
pipx install uv
Create the environment with the following command
uv venv alphaswarm --python=3.12

and activate the environment:

source alphaswarm/bin/activate
Alternatively, you can use conda/mamba to create the environment and install all required packages (this is the setup used for all benchmarks and experiments):
git clone https://github.com/schwallergroup/alphaswarm.git
cd alphaswarm
conda env create -f environment.yml
# mamba env create -f environment.yml
python -m pip install -e .

To run a benchmark, define a configuration file, e.g. benchmark.toml:
file_path = "data/benchmark/virtual_experiment.csv" # Path to the dataset with features and target
y_columns = ["AP yield", "AP selectivity"] # Columns containing the objectives values
exclude_columns = ["catalyst", "base"] # (Optional) Columns to exclude from the feature set used for modelling, usually contains text data
seed = 42 # Seed for reproducibility
n_iter = 3 # Number of iterations
n_particles = 24 # Number of particles (batch size)
init_method = "sobol" # Initialisation method (random, sobol, LHS, halton)
algo = "alpha-pso" # Algorithm to use (canonical-pso, alpha-pso, qnehvi, sobol)
[pso_params] # only for (canonical-pso, alpha-pso)
c_1 = 1.0 # Cognitive parameter
c_2 = 1.0 # Social parameter
c_a = 1.0 # ML parameter
w = 1.0 # Inertia parameter
n_particles_to_move = [0, 0] # Number of particles to move directly to ML predictions at each iteration after initialisation (list size = iteration_number - 1)
objective_function = "weighted_sum" # Objective function to use (weighted_sum, weighted_power, ...)
[obj_func_params]
weights = [1.0, 1.0] # Weights for the weighted sum objective function
noise = 0.0 # Noise to add to the objectives
[model_config]
kernel = "MaternKernel" # Kernel to use (MaternKernel, KMaternKernel)
kernel_params = "default" # Kernel parameters
training_iter = 1000 # Number of iterations for training

Then, run the benchmark with the following command:
alphaswarm benchmark benchmark.toml

To run an experimental campaign, you need a chemical space file (.csv) that contains your reaction features. This file must also include a column named rxn_id for the reaction identifiers.
If your file includes non-feature columns describing reaction conditions (e.g., catalyst, base, solvent), you must list them in the configuration file under the exclude_columns parameter. This ensures they are excluded from the model's feature set. The config files used in the manuscript accompanying this repository can be found in the /configs directory.
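As an illustration, a minimal chemical space file with the required rxn_id column, normalised feature columns, and text condition columns could be written as below. All column names and values here are hypothetical, not taken from the repository's datasets:

```python
import csv

# Hypothetical rows: rxn_id, two normalised features, and two text
# condition columns that would be listed under exclude_columns.
rows = [
    {"rxn_id": "rxn_001", "feat_1": 0.0, "feat_2": 0.25, "ligand": "L1", "base": "K3PO4"},
    {"rxn_id": "rxn_002", "feat_1": 0.5, "feat_2": 0.75, "ligand": "L2", "base": "K2CO3"},
    {"rxn_id": "rxn_003", "feat_1": 1.0, "feat_2": 1.0,  "ligand": "L3", "base": "CsF"},
]

with open("chemical_space.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

A matching config for this file would set exclude_columns = ["ligand", "base"], leaving only feat_1 and feat_2 as model inputs.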
An example of a configuration file is shown below:
Warning: The chemical space features for the experimental campaign must be normalised between 0 and 1. Normalisation can be done with the normalise_features(...) function.
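For reference, the required [0, 1] scaling amounts to a per-column min-max transform. The sketch below illustrates that transform with the standard library only; it is not the package's normalise_features implementation, and the column names are made up:

```python
def min_max_normalise(columns):
    """Min-max scale each feature column to [0, 1].

    columns: dict mapping column name -> list of raw feature values.
    Constant columns are mapped to 0.0 to avoid division by zero.
    """
    normalised = {}
    for name, values in columns.items():
        lo, hi = min(values), max(values)
        span = hi - lo
        normalised[name] = [(v - lo) / span if span else 0.0 for v in values]
    return normalised

# Hypothetical raw features before scaling
features = {"temperature": [25.0, 60.0, 100.0], "equivalents": [1.0, 1.5, 2.0]}
scaled = min_max_normalise(features)  # every value now lies in [0, 1]
```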
chemical_space_file = "data/experimental_campaigns/example/chemical_space.csv" # Path to the chemical space
exclude_columns = ["ligand", "solvent", "precursor", "base"] # (Optional) Columns to exclude from the input features, usually columns containing text data (rxn_id is automatically excluded)
iteration_number = 1 # Number of iterations (1 for the initialisation)
seed = 42 # Random seed for reproducibility
n_particles = 96 # Number of particles (batch size)
init_method = "sobol" # Initialisation method (random, sobol, LHS, halton)
algo = "alpha-pso" # Algorithm to use (canonical-pso, alpha-pso, qnehvi, sobol)
[pso_params] # only for (canonical-pso, alpha-pso)
c_1 = 1.0 # Cognitive parameter
c_2 = 1.0 # Social parameter
c_a = 1.0 # ML parameter
w = 1.0 # Inertia parameter
n_particles_to_move = [0] # Number of particles to move directly to ML predictions at each iteration after initialisation (list size = iteration_number - 1)
objective_columns = ["AP yield", "AP selectivity"] # Columns specifying the objectives
# Suggestions path/file format
pso_suggestions_path = "data/experimental_campaigns/example/pso_plate_suggestions" # output path for the PSO suggestions
pso_suggestions_format = "PSO_plate_{}_suggestions.csv" # file format of the PSO suggestions
# Experimental/Training data path/file format
experimental_data_path = "data/experimental_campaigns/example/pso_training_data" # path to the experimental data
experimental_data_format = "PSO_plate_{}_train.csv" # file format of the training data
[model_config]
kernel = "MaternKernel" # Kernel to use (MaternKernel, KMaternKernel)
kernel_params = "default" # Kernel parameters
training_iter = 1000 # Number of iterations for training

Then, run the experimental campaign with the following command:
alphaswarm experimental experimental.toml

The package is structured as follows:
alphaswarm/
├── LICENSE                 # MIT License file
├── README.md               # Installation and usage instructions
├── tox.ini                 # Configuration file for tox (testing)
├── pyproject.toml          # Project configuration file
├── environment.yml         # Configuration file for conda environment
├── data/
│   ├── benchmark/          # Contains the virtual experiments for benchmarking
│   │   ├── buchwald_virtual_benchmark.csv
│   │   ├── ni_suzuki_virtual_benchmark.csv
│   │   ├── sulfonamide_virtual_benchmark.csv
│   │   └── experimental_data/   # Contains the experimental data for training emulators
│   │       ├── buchwald_train_data.csv
│   │       ├── ni_suzuki_train_data.csv
│   │       └── sulfonamide_train_data.csv
│   ├── experimental_campaigns/
│   │   └── pso_suzuki/     # Example of an experimental campaign
│   │       ├── chemical_spaces/        # Contains the chemical spaces
│   │       │   └── pso_suzuki_chemical_space.csv
│   │       ├── configs/                # Contains the config .toml files used to obtain experimental suggestions
│   │       │   ├── pso_suzuki_iter_1.toml
│   │       │   └── ...
│   │       ├── pso_plate_suggestions/  # Contains the experimental suggestions
│   │       │   ├── PSO_suzuki_plate_1_suggestions.csv
│   │       │   └── ...
│   │       └── pso_training_data/      # Contains the training data (experimental results)
│   │           ├── PSO_suzuki_plate_1_train.csv
│   │           └── ...
│   └── HTE_datasets/       # Contains the experimental HTE datasets in SURF format
│       ├── pd_sulfonamide_SURF.csv
│       └── pd_suzuki_SURF.csv
├── src/
│   └── alphaswarm/
│       ├── __about__.py
│       ├── __init__.py
│       ├── cli.py          # Command line interface tools
│       ├── configs.py      # Configurations for benchmark and experimental campaigns
│       ├── metrics.py      # Metrics for the benchmark
│       ├── objective_functions.py   # Objective functions for the benchmark
│       ├── pso.py          # Main PSO algorithm
│       ├── swarms.py       # Particle and Swarm classes
│       ├── acqf/           # Acquisition functions
│       │   ├── acqf.py
│       │   └── acqfunc.py
│       ├── models/         # Surrogate models
│       │   └── gp.py       # Gaussian Process models
│       └── utils/
│           ├── logger.py        # Logger for the package
│           ├── moo_utils.py     # Utilities for multi-objective optimisation
│           ├── tensor_types.py  # Type definitions for tensors
│           └── utils.py         # General utilities
└── tests/                  # Contains all the unit tests
All data is stored in the data/ directory. The benchmark/ directory contains the virtual experiments used for benchmarking. The experimental_campaigns/ directory contains the chemical spaces and the experimental data for the experimental campaigns.
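The pso_suggestions_format and experimental_data_format entries in the experimental config are Python str.format templates whose {} placeholder is filled with the plate (iteration) number. The sketch below shows how per-iteration paths could be assembled; the helper function and the path-joining logic are illustrative, not alphaswarm's actual code:

```python
import os

# Values taken from the example experimental config above
suggestions_path = "data/experimental_campaigns/example/pso_plate_suggestions"
suggestions_format = "PSO_plate_{}_suggestions.csv"

def suggestion_file(iteration: int) -> str:
    # Fill the {} placeholder with the plate/iteration number
    return os.path.join(suggestions_path, suggestions_format.format(iteration))

# Paths for the first three plates
paths = [suggestion_file(i) for i in range(1, 4)]
```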
Developer instructions
To install, run

pip install -e ".[test]"

To run style checks:
uv pip install pre-commit
pre-commit run -a

Ruff is used for linting. To run the linter with automatic fixes:

ruff check src/ --fix

To run the tests, use the following command:
uv pip install tox
python -m tox r -e py312

Tensor shapes can be checked using jaxtyping. To check the shapes, set the TYPECHECK environment variable to 1 and run your code normally:
export TYPECHECK=1

To generate a coverage badge (works after running tox):
uv pip install "genbadge[coverage]"
genbadge coverage -i coverage.xml
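To illustrate the kind of opt-in check that the TYPECHECK variable enables, here is a stdlib-only sketch of an environment-gated shape assertion. alphaswarm itself uses jaxtyping for this; the gating function and shape helper below are an analogy, not the package's mechanism:

```python
import os

def shape_of(matrix):
    """(rows, cols) of a nested list, standing in for a tensor shape."""
    return (len(matrix), len(matrix[0]) if matrix else 0)

def check_shape(matrix, expected):
    # Shapes are only enforced when TYPECHECK=1, mirroring the opt-in
    # jaxtyping checks described above; otherwise the check is skipped.
    if os.environ.get("TYPECHECK") == "1":
        actual = shape_of(matrix)
        if actual != expected:
            raise TypeError(f"expected shape {expected}, got {actual}")

os.environ["TYPECHECK"] = "1"
check_shape([[0.1, 0.2], [0.3, 0.4]], (2, 2))  # passes: shapes match
```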