This repo contains code accompanying the paper, DFWLayer: Differentiable Frank-Wolfe Optimization Layer. DFWLayer is a differentiable optimization layer that accelerates both the optimization and the backpropagation procedures for problems with norm constraints.
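To illustrate the kind of problem the layer targets, below is a minimal, generic sketch of Frank-Wolfe iterations under an L1-norm ball constraint, unrolled in PyTorch so gradients flow back to the problem data. It is not the DFWLayer implementation; the objective, step size, and function name `frank_wolfe_l1` are illustrative assumptions.

```python
import torch

def frank_wolfe_l1(A, b, tau, num_steps=50):
    """Unrolled Frank-Wolfe for min_x 0.5*||Ax - b||^2 s.t. ||x||_1 <= tau (illustrative)."""
    x = torch.zeros(A.shape[1], dtype=A.dtype, device=A.device)
    for k in range(num_steps):
        grad = A.T @ (A @ x - b)            # gradient of the quadratic objective
        i = torch.argmax(grad.abs())        # linear minimization oracle over the L1 ball
        s = torch.zeros_like(x)
        s[i] = -tau * torch.sign(grad[i])   # vertex of the ball minimizing <grad, s>
        gamma = 2.0 / (k + 2.0)             # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Gradients w.r.t. problem data are obtained by backpropagating through the unrolled loop.
A = torch.randn(30, 20, requires_grad=True)
b = torch.randn(30)
x_star = frank_wolfe_l1(A, b, tau=1.0)
x_star.sum().backward()                     # fills A.grad
```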
pip install -r requirements.txt
We test the efficiency (running time) and accuracy (similarity and distance) of the layers on optimization problems of different scales.
cd DFWLayer/numerical_experiment
python test_time_for_norms.py
The problem size can be changed by modifying `n=100` in `test_time_for_norms.py`.
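The sketch below is a hypothetical version of this kind of comparison, not the actual contents of `test_time_for_norms.py`: it times a norm-constrained solve of size `n` and compares the solution to a higher-accuracy reference via cosine similarity and Euclidean distance, reusing the `frank_wolfe_l1` sketch above as a stand-in solver.

```python
import time
import torch

n = 100                                    # problem size, analogous to n=100 in the script
A = torch.randn(n, n)
b = torch.randn(n)
tau = 1.0

start = time.perf_counter()
x_layer = frank_wolfe_l1(A, b, tau)        # stand-in for the layer being timed
elapsed = time.perf_counter() - start

x_ref = frank_wolfe_l1(A, b, tau, num_steps=2000)   # stand-in for a high-accuracy reference

similarity = torch.nn.functional.cosine_similarity(x_layer, x_ref, dim=0).item()
distance = torch.linalg.norm(x_layer - x_ref).item()
print(f"time {elapsed:.4f}s  similarity {similarity:.4f}  distance {distance:.4f}")
```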
We evaluate the performance of differentiable optimization layers for robotics tasks under imitation learning.
- The expert demonstrations are saved in `DFWLayer/robotics/expert_data`. We provide expert demonstrations for R+O03 and R+O10.
- For example, we train a policy for R+O03 with DFWLayer.
cd DFWLayer/robotics
python train_policy.py --cost_type R+O03 --opt_layer_class dfw_layer --device cuda

The task and layer class can be changed by modifying the arguments `--cost_type` and `--opt_layer_class` respectively.
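For orientation, here is a hypothetical behavioral-cloning step showing where an optimization layer typically sits in such a policy: the network proposes an action, the layer maps it onto a norm-constrained set, and the result is regressed onto the expert action. The network sizes, dimensions, and the `opt_layer` stand-in (a simple rescaling onto an L2 ball) are assumptions for illustration, not the repo's actual training code; see `train_policy.py` for that.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, max_norm = 17, 6, 1.0

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))

def opt_layer(raw_action, max_norm):
    # Stand-in for a differentiable norm-constrained layer: a smooth
    # rescaling of the proposed action onto the L2 ball of radius max_norm.
    norm = torch.linalg.vector_norm(raw_action, dim=-1, keepdim=True)
    scale = torch.clamp(max_norm / (norm + 1e-8), max=1.0)
    return raw_action * scale

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One gradient step on a (fake) batch of expert transitions.
obs = torch.randn(32, obs_dim)
expert_act = torch.randn(32, act_dim)

pred_act = opt_layer(policy(obs), max_norm)
loss = ((pred_act - expert_act) ** 2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```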