This repository contains code and scripts for reproducing experimental results from our work.
Install conda:

```bash
# For conda installation instructions, see:
# https://docs.conda.io/projects/conda/en/stable/user-guide/install/linux.html
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh && \
bash miniconda.sh -b -p /opt/conda
```

Set up a conda environment and install dependencies using:

```bash
conda env create -f requirements.yaml
```

Activate the environment:

```bash
conda activate e2d2-env
```

We also include a `setup_env.sh` script that can be used to set up the environment on a new machine. Run the script using:

```bash
source setup_env.sh
```

You can also include this snippet in shell / Slurm scripts to set up the environment on a compute node, as in the sketch below.
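For example, a minimal Slurm batch script might look like the following; the resource requests and the training script shown are illustrative:

```bash
#!/bin/bash
#SBATCH --job-name=e2d2-train   # illustrative resource requests
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

# Set up conda, WandB, and HuggingFace on the compute node
source setup_env.sh

# Launch one of the provided training scripts (example shown)
bash bash_scripts/run_train_e2d2_gsm8k.sh
```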
In this script, we set up WandB and HuggingFace tokens by sourcing a script which is expected to be in the `/home/<YOUR_USER_NAME>/` directory. Copy the contents below into a shell script at `/home/<YOUR_USER_NAME>/setup_discdiff.sh` and replace the placeholder tokens with your own:
```bash
# W&B Setup
export WANDB__SERVICE_WAIT=600
export _WANDB_STARTUP_DEBUG="true"
export WANDB_ENTITY="<WANDB_ENTITY>"
export WANDB_API_KEY="<WANDB_API_KEY>"
echo "Logging into W&B as '${WANDB_ENTITY}'."

# HF Setup
export HUGGINGFACE_TOKEN="<HF_TOKEN>"
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

We use GitHub issues to track bugs, features, and todos. To contribute to the repo, please create a new issue and assign it to yourself. Then create a new branch from the issue and open a pull request, as in the sketch below.
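A typical contribution flow might look like the following; the issue number and branch name are illustrative:

```bash
# Create a branch for the issue you assigned to yourself
git checkout -b 42-fix-eval-config   # illustrative issue number / branch name

# Commit your changes; the pre-commit hooks (see below) run automatically
git add -u
git commit -m "Fix eval config path"

# Push the branch and open a pull request that references the issue
git push -u origin 42-fix-eval-config
```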
We use pre-commit to run linters and formatters on the code. To install the pre-commit hooks, run:

```bash
pre-commit install
```

On every `git commit`, the pre-commit hooks will run automatically and report any issues / automatic fixes that were applied.
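You can also trigger the hooks manually on the entire repository, which is useful before opening a pull request:

```bash
# Run all configured pre-commit hooks against every file in the repo
pre-commit run --all-files
```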
The repository is organized as follows:

- `bash_scripts`: Shell scripts that can be used to reproduce the experiments from our work. See below.
- `configs`: We utilize Hydra config files to organize experiments.
  - `config.yaml`: The entry point for launching training experiments.
  - `eval_config.yaml`: The entry point for evaluations.
- `scripts`: The main training and evaluation scripts.
  - `scripts/composer_scripts/train_discrete_denoiser.py`: The main training entry point.
  - `scripts/evals`: Scripts that run the evaluations for the translation, summarization, and math reasoning datasets, as well as any likelihood evaluation.
- `src`:
  - `src/denoiser`: During training, denoisers take in "noisy" inputs and predict clean signals. At inference, starting from a purely noisy signal, these classes produce samples that resemble data through iterative denoising.
    - `AR`: We can view autoregressive models within this paradigm. Noise is applied by masking tokens one at a time, right-to-left; denoising is done one token at a time, left-to-right.
    - Diffusion: We implement masked diffusion models:
      - `MDLM`: Standard masked diffusion.
      - `BD3LM`: Block diffusion models.
      - `E2D2`: Our encoder-decoder implementation.
  - `src/backbone`: The underlying neural networks that take in noisy inputs and produce logits. Each denoiser is parameterized by a backbone. The denoiser can optionally post-process the logit outputs of the backbone to produce log-probabilities over the clean sequence.
The shell scripts provided in `bash_scripts` can be used to reproduce the training and evaluations from our work.

- For training, the files follow a naming convention in which the dataset and denoiser class are specified. For example, to fine-tune the E2D2 model on the GSM8K dataset, use the following shell script: `run_train_e2d2_gsm8k.sh`.
- Once models have been trained, the provided evaluation scripts can be used to reproduce the tables and figures from our work. For example, to evaluate models trained on the WMT translation dataset, use the following shell script: `run_seq2seq_eval_wmt.sh`. In that file, and in the analogous files for other evaluations, specify the path to the saved checkpoints and uncomment the relevant section for a given denoiser class. We also provide scripts that produce the generation throughput numbers we report; these files have `_tput` at the end of the script name. A sample invocation is shown below.
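For instance, after editing the script to point at your checkpoint, an evaluation run might be launched as follows (the working directory is assumed to be the repository root):

```bash
# Edit bash_scripts/run_seq2seq_eval_wmt.sh first: set the checkpoint path
# and uncomment the section for your denoiser class, then run:
bash bash_scripts/run_seq2seq_eval_wmt.sh
```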
Below are the evaluation scripts provided for various tasks:

- Text summarization: `run_seq2seq_eval_cnndm.sh`, `run_seq2seq_eval_cnndm_tput.sh`
- Machine translation: `run_seq2seq_eval_wmt.sh`, `run_seq2seq_eval_wmt_tput.sh`
- Mathematical reasoning: `run_lm_eval_harness.sh`, `run_lm_eval_harness_tput.sh`, `run_likelihood_eval_gsm8k.sh`
- Likelihood estimation (trained on OpenWebText): `run_likelihood_eval_owt.sh`
We release the following models on HuggingFace:

- 80M E2D2 for text summarization (trained from scratch): `kuleshov-group/e2d2-cnndm`
- 250M E2D2 for machine translation (trained from scratch): `kuleshov-group/e2d2-wmt`
- 1.7B E2D2 for mathematical reasoning (fine-tuned from Qwen3): `kuleshov-group/e2d2-gsm8k-finetune-Qwen3-2B`
- 170M E2D2 trained on OpenWebText (trained from scratch): `kuleshov-group/e2d2-owt`
To use these models, follow the snippet below:

```python
from transformers import AutoModelForMaskedLM

# model_config_overrides = {}  # Use this to optionally override config parameters
model = AutoModelForMaskedLM.from_pretrained(
    "kuleshov-group/e2d2-cnndm",  # Use one of the repos from above
    trust_remote_code=True,
    # **model_config_overrides,
)
```

These models can also be used in the evaluation scripts by setting `pretrained_model_name_or_path=` to one of the options above.
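For example, since the configs are Hydra-based, pointing an evaluation at a released checkpoint might look like the sketch below; check the script you are using for the exact override mechanism:

```bash
# Hypothetical sketch: replace the local checkpoint with a released model ID.
# Where exactly this override lives depends on the evaluation script;
# see eval_config.yaml and the relevant file in bash_scripts/.
pretrained_model_name_or_path=kuleshov-group/e2d2-owt
```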
Please cite our work as follows:

```bibtex
@inproceedings{
  arriola2025e2d2,
  title={Encoder-Decoder Diffusion Language Models for Efficient Training and Inference},
  author={Marianne Arriola and Yair Schiff and Hao Phung and Aaron Gokaslan and Volodymyr Kuleshov},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={TODO}
}
```