
Support providing diffusion models and text encoders separately? #352


Closed

stduhpf opened this issue Aug 20, 2024 · 1 comment

Comments

@stduhpf
Contributor

stduhpf commented Aug 20, 2024

With very large open models like SD3 Medium and Flux.1 gaining popularity, it's becoming common to distribute the diffusion model (UNet/diffusion transformer) and the text encoders as separate files, since the text encoders can often be reused across different models, saving internet bandwidth and storage space.

I think it would be cool to support these split models here. It would also make it possible to use a different quantization for each part of the model.
VAEs can already be provided separately, and this is the same kind of thing.
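A minimal sketch of what such an invocation could look like, by analogy with the existing separate `--vae` option. The flag names and file names below are illustrative assumptions, not confirmed against the actual CLI:

```shell
# Hypothetical: load the diffusion transformer and each text encoder from
# its own file, each potentially using a different quantization.
# Flag and file names are assumptions for illustration only.
./sd \
  --diffusion-model flux1-dev-q4_0.gguf \
  --clip_l clip_l-f16.gguf \
  --t5xxl t5xxl-q8_0.gguf \
  --vae ae.safetensors \
  -p "a photo of a cat" \
  -o output.png
```

The appeal is that the large T5 encoder, for example, could be downloaded once and shared between several diffusion models, or quantized more aggressively than the diffusion weights.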

@SkutteOleg
Contributor

#356 seems to have added this functionality

@stduhpf stduhpf closed this as completed Dec 31, 2024