
LoRACLR: Contrastive Adaptation for Customization of Diffusion Models [CVPR 2025]


Enis Simsar, Thomas Hofmann, Federico Tombari, Pinar Yanardag

Recent advances in text-to-image customization have enabled high-fidelity, context-rich generation of personalized images, allowing specific concepts to appear in a variety of scenarios. However, current methods struggle with combining multiple personalized models, often leading to attribute entanglement or requiring separate training to preserve concept distinctiveness. We present LoRACLR, a novel approach for multi-concept image generation that merges multiple LoRA models, each fine-tuned for a distinct concept, into a single, unified model without additional individual fine-tuning. LoRACLR uses a contrastive objective to align and merge the weight spaces of these models, ensuring compatibility while minimizing interference. By enforcing distinct yet cohesive representations for each concept, LoRACLR enables efficient, scalable model composition for high-quality, multi-concept image synthesis. Our results highlight the effectiveness of LoRACLR in accurately merging multiple concepts, advancing the capabilities of personalized image generation.

Dependencies and Installation

  • Python >= 3.10 (Anaconda or Miniconda is recommended)
  • diffusers==0.19.3
  • xformers (recommended, to save memory). Install with pip install xformers (see the setup commands below)
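
For example, a fresh environment could be set up as follows (the environment name is arbitrary):

conda create -n loraclr python=3.10 -y
conda activate loraclr
pip install diffusers==0.19.3
pip install xformers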

Single-Client Concept Tuning

Merging LoRAs

Pretrained concept models can be obtained from the Orthogonal Adaptation repository; place them in the experiments/ folder.

Step 1: Collect Concept Models

Collect your concept models and update config.json accordingly.

[
    {
        "lora_path": "experiments/single-concept/elsa/models/edlora_model-latest.pth",
        "unet_alpha": 1.5,
        "text_encoder_alpha": 1.5,
        "concept_name": "<elsa1> <elsa2>"
    },
    {
        "lora_path": "experiments/single-concept/moana/models/edlora_model-latest.pth",
        "unet_alpha": 1.5,
        "text_encoder_alpha": 1.5,
        "concept_name": "<moana1> <moana2>"
    }
    ... # add more entries to merge additional concepts
]
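
Note that the trailing ... line above is illustrative and must be removed for the file to be valid JSON. Before fusion, you can sanity-check the entries with a short script (field names follow the example above; the script itself is not part of the repository):

import json
import os

# Load the concept list and verify each entry before running weight fusion.
with open("config.json") as f:
    concepts = json.load(f)

for entry in concepts:
    assert os.path.exists(entry["lora_path"]), f'missing checkpoint: {entry["lora_path"]}'
    print(f'{entry["concept_name"]}: unet_alpha={entry["unet_alpha"]}, '
          f'text_encoder_alpha={entry["text_encoder_alpha"]}')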

Step 2: Weight Fusion

Run the following command:

python weight_fusion.py \
    --concept_cfg config.json \
    --save_path ./experiments/multi-concepts \
    --pretrained_model nitrosocke/mo-di-diffusion
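
Conceptually, fusing a LoRA folds its low-rank update back into the corresponding base weight, scaled by the alpha values from config.json. The sketch below shows only this generic merge step; the actual weight_fusion.py additionally aligns the models' weight spaces with LoRACLR's contrastive objective before merging, and the shapes here are hypothetical:

import torch

def fuse_lora(weight, lora_down, lora_up, alpha):
    # Generic LoRA merge: W' = W + alpha * (up @ down).
    # Illustrative only; LoRACLR first aligns the LoRA weight spaces
    # contrastively so that multiple concepts can share one model.
    return weight + alpha * (lora_up @ lora_down)

# Hypothetical example: a 320x320 attention projection with a rank-4 LoRA.
w = torch.randn(320, 320)
down = torch.randn(4, 320)  # lora_down: (rank, in_features)
up = torch.randn(320, 4)    # lora_up: (out_features, rank)
w_fused = fuse_lora(w, down, up, alpha=1.5)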

Step 3: Sample

Use the inference.ipynb notebook.
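
If you prefer a plain script, a rough diffusers-based equivalent might look like the following (the model path and prompt are placeholders, and the notebook may instead use an EDLoRA-specific pipeline to handle the extended concept tokens):

import torch
from diffusers import StableDiffusionPipeline

# Placeholder path: the --save_path used in Step 2.
pipe = StableDiffusionPipeline.from_pretrained(
    "./experiments/multi-concepts", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # optional, saves memory

prompt = "a photo of <elsa1> <elsa2> and <moana1> <moana2> hiking together"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("multi_concept_sample.png")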

Citation

If you find our work useful, please consider citing our paper:

@inproceedings{simsar2025loraclr,
  title={LoRACLR: Contrastive Adaptation for Customization of Diffusion Models},
  author={Simsar, Enis and Hofmann, Thomas and Tombari, Federico and Yanardag, Pinar},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={13189--13198},
  year={2025}
}

Acknowledgment

This project builds upon the structure and pretrained weights from the Ortha repository by ujin-song. We thank the authors for making their work publicly available.
