Siwei Xia
Li Sun
Tiantian Sun
Qingli Li
East China Normal University
- Release benchmark evaluation code
- Release Gradio user interface
- Update readme for detailed usage guide
- Release paper on arXiv
To set up the environment, run:
conda env create -f environment.yaml
conda activate draglora
To use DragLoRA with your own images:
- Launch the interface:
python drag_ui.py
- Follow these steps:
- Upload your image to the left-most box
- (Optional) Add a descriptive prompt below the image
- Click "Train RecLoRA" to optimize for identity preservation
- (Optional) Draw a mask to specify editable regions
- Set handle-target points in the middle box:
- Click to place a handle point
- Click to place its target point
- Repeat for additional point pairs as needed
- Click "Run" to process the image
- Output and Storage:
- Results appear in the right-most box
- Temporary files are saved in "lora_tmp" (overwritten for each new image)
- Input images, prompts, masks, points, and outputs are saved with unique names (e.g. 2025-05-17-0105-05) in "lora_tmp"
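The unique names appear to be timestamps; a minimal sketch of producing a name in that format (the actual naming code may differ):

```python
# Hypothetical sketch: build a timestamped save name like "2025-05-17-0105-05".
from datetime import datetime

save_name = datetime.now().strftime("%Y-%m-%d-%H%M-%S")  # year-month-day-hourminute-second
print(save_name)
```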
To evaluate the algorithm using benchmark data:
- Navigate to the evaluation directory:
cd drag_bench_evaluation
- Download and extract DragBench to "drag_bench_data"
- Train reconstruction LoRA:
python run_lora_training.py
Results will be saved in "drag_bench_lora"
- Run DragLoRA:
python run_dragbench_draglora.py
Results will be saved in "drag_results"
- Evaluate performance:
- Point matching accuracy (MD and m-MD):
python run_eval_point_matching.py --eval_root drag_results
- Image consistency (LPIPS, CLIP, MSE):
python run_eval_similarity.py --eval_root drag_results
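For intuition, the sketch below shows the arithmetic underlying two of these metrics. It is a simplified illustration only: the actual evaluation scripts first track where each handle point lands (via feature matching) and use pretrained LPIPS/CLIP networks for perceptual similarity.

```python
# Simplified metric sketch (illustrative only, not the evaluation scripts).
import numpy as np

def mean_distance(final_handles: np.ndarray, targets: np.ndarray) -> float:
    """MD: mean Euclidean distance between where handle points end up and their targets."""
    return float(np.linalg.norm(final_handles - targets, axis=1).mean())

def mse(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """MSE between edited and original images, given float arrays in [0, 1]."""
    return float(np.mean((img_a - img_b) ** 2))

# Dummy example: two handle points and their targets (x, y in pixels).
final_handles = np.array([[120.0, 85.0], [200.0, 150.0]])
targets = np.array([[118.0, 90.0], [205.0, 148.0]])
print(mean_distance(final_handles, targets))
```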
To simultaneously measure editability and consistency through two symmetric editing operations, first run one drag pass following the instructions above. After that, run the second (drag-back) pass with:
- Train reconstruction LoRA:
python run_lora_training.py --img_path drag_results
Results will be saved in "drag_bench_lora_for_drag_results"
- Run DragLoRA:
python run_dragbench_draglora.py \
--img_dir drag_results \
--lora_dir drag_bench_lora_for_drag_results \
--save_dir drag_back_results
Results will be saved in "drag_back_results"
- Evaluate performance by comparing the similarity between the drag-back images and the original images:
python run_eval_similarity.py --eval_root drag_back_results
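Conceptually (an assumption for illustration; the benchmark scripts handle this internally), the drag-back pass applies the reverse edit, so a faithful editor should return the image close to the original, which is exactly what the similarity script then measures:

```python
# Illustrative only: the drag-back pass can be pictured as reversing each
# handle-target pair from the first pass.
first_pass_points = [((120, 85), (140, 85)), ((200, 150), (220, 150))]  # (handle, target)
drag_back_points = [(target, handle) for handle, target in first_pass_points]
```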
If you find our work useful, please cite our paper:
@inproceedings{xia2025draglora,
title={DragLoRA: Online Optimization of LoRA Adapters for Drag-based Image Editing in Diffusion Model},
author={Xia, Siwei and Sun, Li and Sun, Tiantian and Li, Qingli},
booktitle={The International Conference on Machine Learning (ICML)},
year={2025}
}
This work builds upon DragDiffusion. We thank the authors and all contributors to the open-source diffusion models and libraries.