BiSMO is a PyTorch-based framework for bilevel optimization in lithography, specifically for Source Mask Optimization (SMO). It provides a modular and efficient approach to solving the complex optimization problems in lithography by leveraging bilevel optimization techniques.
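Concretely, SMO can be posed as a bilevel program in which the source is optimized subject to an optimally tuned mask. The notation below is illustrative, not BiSMO's exact symbols:

```latex
\min_{S}\ \mathcal{L}_{\mathrm{out}}\bigl(S,\, M^{*}(S)\bigr)
\qquad \text{s.t.} \qquad
M^{*}(S) \in \arg\min_{M}\ \mathcal{L}_{\mathrm{in}}(S, M)
```

where \(S\) is the illumination source, \(M\) is the mask, and the inner problem corresponds to the Mask Optimization step.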
```bash
# Clone the repository
git clone https://github.com/yourusername/BiSMO.git
cd BiSMO

# Create and activate a conda environment (recommended)
conda create -n smo python=3.8
conda activate smo

# Install dependencies
pip install -r requirements/requirements.txt
```
BiSMO supports multiple bilevel optimization approaches:
- DARTS: Differentiable Architecture Search method
- NMN: Neumann Series method
- CG: Conjugate Gradient method
- RL: Reinforcement Learning approach
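For intuition, here is a minimal PyTorch sketch of the Neumann-series hypergradient on a toy one-dimensional problem. This is a standalone illustration of the technique, not BiSMO's implementation:

```python
import torch

# Toy bilevel problem (illustrative, not BiSMO's actual litho losses):
#   inner:  g(lmbda, w) = 0.5 * (w - lmbda)^2   ->  w*(lmbda) = lmbda
#   outer:  F(w)        = 0.5 * w^2             ->  dF/dlmbda = lmbda
lmbda = torch.tensor(2.0, requires_grad=True)   # outer variable
w = torch.tensor(2.0, requires_grad=True)       # inner variable, assumed at its optimum

def inner_loss(lmbda, w):
    return 0.5 * (w - lmbda) ** 2

def outer_loss(w):
    return 0.5 * w ** 2

# Neumann-series approximation of the inverse-Hessian-vector product:
#   H^{-1} v  ~=  alpha * sum_{i=0}^{K} (I - alpha * H)^i v
v = torch.autograd.grad(outer_loss(w), w)[0]    # dF/dw at the inner optimum
alpha, K = 0.5, 50
p, acc = v.clone(), v.clone()
g_w = torch.autograd.grad(inner_loss(lmbda, w), w, create_graph=True)[0]
for _ in range(K):
    # Hessian-vector product via double backward
    hvp = torch.autograd.grad(g_w, w, grad_outputs=p, retain_graph=True)[0]
    p = p - alpha * hvp                         # p <- (I - alpha * H) p
    acc = acc + p
ihvp = alpha * acc                              # ~= H^{-1} v

# Hypergradient: dF/dlmbda = -(d^2 g / dlmbda dw) * H^{-1} v (no direct term here)
mixed = torch.autograd.grad(g_w, lmbda, grad_outputs=ihvp)[0]
hypergrad = -mixed
print(hypergrad.item())                         # ~= lmbda = 2.0
```

For this quadratic inner problem the exact hypergradient is `lmbda`, so the Neumann approximation can be checked directly.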
To run an experiment with one of these approaches:
```bash
# Using DARTS approach
./darts_d2.sh

# Using Neumann Series approach
./nmn_d1.sh

# Using Conjugate Gradient approach
./cg_d0.sh

# Using Reinforcement Learning approach
./rl_d1.sh
```
The project is organized as follows:
- `src/`: Main source code
  - `bilevel.py`: Entry point for bilevel optimization
  - `betty/`: Betty library integration
  - `models/`: Neural network models
    - `mo_module.py`: Mask Optimization module
    - `so_module.py`: Source Optimization module
  - `problems/`: Optimization problems
    - `mo.py`: Mask Optimization problem
    - `so.py`: Source Optimization problem
  - `engine/`: Optimization engines
  - `data/`: Data handling utilities
- `configs/`: Configuration files
  - `problems/`: Configuration for different optimization approaches
  - `module/`: Model configurations
  - `engine/`: Engine configurations
- Shell scripts (e.g., `darts_d2.sh`, `nmn_d1.sh`, etc.): For running experiments with different configurations
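The Mask Optimization / Source Optimization split above suggests the overall shape of the optimization loop: a few unrolled inner mask updates, with the source updated by backpropagating through them (unrolled differentiation, as in DARTS-style bilevel methods). Below is a toy PyTorch sketch with stand-in quadratic losses; all names and losses are illustrative, not the actual modules:

```python
import torch

torch.manual_seed(0)
source = torch.randn(4, requires_grad=True)   # outer variable (stand-in "source")
target = torch.ones(4)                        # stand-in design target

def inner_loss(src, mask):
    # Stand-in for the lithography inner objective: pull the mask toward the source
    return ((mask - src) ** 2).mean()

def outer_loss(mask):
    # Stand-in for the outer objective: the printed result should match the target
    return ((mask - target) ** 2).mean()

opt_source = torch.optim.SGD([source], lr=0.1)
unroll_steps, inner_lr = 3, 0.5
history = []

for step in range(20):
    mask = torch.zeros(4, requires_grad=True)  # fresh inner variable (stand-in "mask")
    for _ in range(unroll_steps):
        # Unrolled inner updates: keep the graph so the outer step can
        # differentiate through them
        g = torch.autograd.grad(inner_loss(source, mask), mask, create_graph=True)[0]
        mask = mask - inner_lr * g
    loss = outer_loss(mask)
    opt_source.zero_grad()
    loss.backward()                            # gradient flows through the unrolled steps
    opt_source.step()
    history.append(loss.item())
```

With these toy losses the outer loss decreases monotonically, since the unrolled inner dynamics are a smooth function of the source.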
BiSMO uses Hydra for configuration management. Main configuration options include:
- `problem_type`: Optimization approach (`darts`, `nmn`, `cg`, `rl`)
- `unroll_steps`: Number of unrolling steps in optimization
- `device_id`: GPU device to use
- Weights for different loss components
- Learning rates and other hyperparameters
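For illustration, a Hydra config for the Neumann approach might look roughly like this. All field names and values below are hypothetical; consult the files under `configs/` for the actual schema:

```yaml
# configs/problems/nmn.yaml (hypothetical example)
problem_type: nmn
unroll_steps: 3
device_id: 1
loss_weights:      # illustrative loss components
  l2: 1.0
  pvb: 0.5
optimizer:
  so_lr: 1.0e-2    # source optimization learning rate
  mo_lr: 1.0e-1    # mask optimization learning rate
```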
The provided shell scripts run optimization on different image masks with various approaches:
```bash
# Run optimization on 10 masks using DARTS approach on GPU 2
./darts_d2.sh

# Run with different unrolling steps (e.g., 3 steps) using Neumann Series
./nmn_d1_unroll3_iter.sh

# Run with different Conjugate Gradient iterations
./cg_d0_unroll3_iter.sh
```
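In the Conjugate Gradient variant, the number of iterations controls how accurately the linear system `H x = v` (with `H` the inner-loss Hessian) is solved using only Hessian-vector products. A toy PyTorch sketch of this technique, independent of BiSMO's actual code:

```python
import torch

torch.manual_seed(0)
A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])      # SPD Hessian of a toy inner loss
w = torch.randn(2, requires_grad=True)

def inner_loss(w):
    return 0.5 * w @ A @ w                      # quadratic, so the Hessian is A

v = torch.randn(2)                              # right-hand side (e.g. dF/dw)

def hvp(p):
    # Hessian-vector product via double backward; never forms A explicitly
    g = torch.autograd.grad(inner_loss(w), w, create_graph=True)[0]
    return torch.autograd.grad(g, w, grad_outputs=p)[0]

# Standard conjugate gradient on H x = v
x = torch.zeros_like(v)
r = v.clone()                                   # residual for x = 0
p = r.clone()
rs = r @ r
for _ in range(10):                             # cf. the CG-iteration setting
    Hp = hvp(p)
    alpha = rs / (p @ Hp)
    x = x + alpha * p
    r = r - alpha * Hp
    rs_new = r @ r
    if rs_new.sqrt() < 1e-6:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print(torch.allclose(A @ x, v, atol=1e-4))      # True: x ~= H^{-1} v
```

More CG iterations give a more accurate inverse-Hessian-vector product at the cost of extra Hessian-vector products per outer step.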
You can modify the configuration files in `configs/` to customize:
- Optimization approaches and hyperparameters
- Model architectures
- Input data sources and masks
- Training and validation settings