A set of workflows for Z-Image-Turbo that extend the ComfyUI base workflow with additional features, focused on high-quality image styles and ease of use. The repository includes pre-configured workflows for both GGUF and SafeTensors checkpoint formats.
- Style Selector: Choose from fifteen customizable image styles.
- Alternative Sampler Switch: Easily test generation with an alternative sampler.
- Landscape Orientation Switch: Change to horizontal image generation with a single click.
- Z-Image Enhancer (v3.0+): Improves final image quality by performing a double pass.
- Spicy Impact Booster (v3.0+): Adds a subtle spicy condiment to the prompt (fully experimental).
- Smaller Images Switch (v3.0+): Generate smaller images, faster and consuming less VRAM.
- Default image size: 1600 x 1088 pixels
- Smaller image size: 1216 x 832 pixels
- (previous versions up to v2.2 used a fixed setting of 1408 x 944 pixels)
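All three presets use dimensions that divide evenly by 16, a common alignment requirement for diffusion latents (whether Z-Image-Turbo requires exactly this alignment is an assumption here). A quick sketch comparing the presets:

```python
# Resolution presets from the workflow (name -> width, height).
# The 16-pixel alignment check reflects a common latent-grid
# requirement; it is an assumption for Z-Image-Turbo specifically.
PRESETS = {
    "default": (1600, 1088),
    "smaller": (1216, 832),
    "legacy_v2.2": (1408, 944),
}

for name, (w, h) in PRESETS.items():
    aligned = w % 16 == 0 and h % 16 == 0
    print(f"{name}: {w}x{h}, ratio {w / h:.3f}, 16-aligned: {aligned}")
```

Note that all three presets keep roughly the same landscape aspect ratio (about 1.46-1.49), so switching presets mainly trades resolution for speed and VRAM, not composition.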
- Preconfigured workflows for each checkpoint format (GGUF / SAFETENSORS).
- Custom sigma values fine-tuned to my personal preference (100% subjective).
- Generated images are saved in the "ZImage" folder, organized by date.
- Incorporates a trick to enable automatic CivitAI prompt detection.
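Date-organized output folders like the one above are typically achieved through ComfyUI's Save Image node, whose `filename_prefix` widget accepts date tokens. A hedged example of such a value (the exact string used in these workflows may differ):

```
ZImage/%date:yyyy-MM-dd%/zimage
```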
The available styles are organized into workflows based on their focus:
- amazing-z-image: The original general-purpose workflow with a variety of image styles.
- amazing-z-comics: Workflow dedicated to illustration (comics, anime, pixel art, etc.).
- amazing-z-photo: Workflow dedicated to photographic images (phone, vintage, production photos, etc.).
Each of these workflows comes in two versions, one for GGUF checkpoints and another for SafeTensors.
This is reflected in the filenames:
- amazing-z-###_GGUF.json: Recommended for GPUs with 12 GB of VRAM or less.
- amazing-z-###_SAFETENSORS.json: Based directly on the ComfyUI example.
When using ComfyUI, you may encounter debates about the best checkpoint format. From my experience, GGUF quantized models provide a better balance between size and prompt response quality compared to SafeTensors versions. However, it's worth noting that ComfyUI includes optimizations that work more efficiently with SafeTensors files, which might make them preferable for some users despite their larger size. The optimal choice depends on factors like your ComfyUI version, PyTorch setup, CUDA configuration, GPU model, and available VRAM and RAM. To help you find the best fit for your system, I've included links to various checkpoint versions below.
These nodes can be installed via ComfyUI-Manager or downloaded from their respective repositories.
- rgthree-comfy: Required for both workflows.
- ComfyUI-GGUF: Required if you are using the workflow preconfigured for GGUF checkpoints.
Using Q5_K_S quants, you will likely achieve the best balance between file size and prompt response.
- z_image_turbo-Q5_K_S.gguf [5.19 GB]
  Local Directory: ComfyUI/models/diffusion_models/
- Qwen3-4B.i1-Q5_K_S.gguf [2.82 GB]
  Local Directory: ComfyUI/models/text_encoders/
- ae.safetensors [335 MB]
  Local Directory: ComfyUI/models/vae/
- 4x_Nickelback_70000G.safetensors (for image enhancer) [66.9 MB]
  Local Directory: ComfyUI/models/upscale_models/
- 4x_foolhardy_Remacri.safetensors (for image enhancer) [66.9 MB]
  Local Directory: ComfyUI/models/upscale_models/
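Since the files above must land in specific subdirectories, a small helper can confirm everything is in place before loading the workflow. This is a sketch, not part of the repository; the filenames and directories are taken from the GGUF list above, and `COMFY_ROOT` must be adjusted to your own installation:

```python
from pathlib import Path

# Filenames and target directories from the GGUF workflow checklist above.
REQUIRED = {
    "models/diffusion_models": ["z_image_turbo-Q5_K_S.gguf"],
    "models/text_encoders": ["Qwen3-4B.i1-Q5_K_S.gguf"],
    "models/vae": ["ae.safetensors"],
    "models/upscale_models": [
        "4x_Nickelback_70000G.safetensors",
        "4x_foolhardy_Remacri.safetensors",
    ],
}

def missing_files(comfy_root: str) -> list[str]:
    """Return the required files (relative to the ComfyUI root) not found on disk."""
    root = Path(comfy_root)
    return [
        f"{subdir}/{name}"
        for subdir, names in REQUIRED.items()
        for name in names
        if not (root / subdir / name).is_file()
    ]

# Example: print what is still missing under a local install.
COMFY_ROOT = "ComfyUI"  # adjust to your installation path
for path in missing_files(COMFY_ROOT):
    print(f"missing: {path}")
```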
This version may require 12GB of VRAM or more to run smoothly, but ComfyUI's optimizations may still allow it to work well on your system.
- z_image_turbo_bf16.safetensors [12.3 GB]
  Local Directory: ComfyUI/models/diffusion_models/
- qwen_3_4b.safetensors [8.04 GB]
  Local Directory: ComfyUI/models/text_encoders/
- ae.safetensors [335 MB]
  Local Directory: ComfyUI/models/vae/
- 4x_Nickelback_70000G.safetensors (for image enhancer) [66.9 MB]
  Local Directory: ComfyUI/models/upscale_models/
- 4x_foolhardy_Remacri.safetensors (for image enhancer) [66.9 MB]
  Local Directory: ComfyUI/models/upscale_models/
If neither of the two provided versions nor their associated checkpoints perform adequately on your system, you can find links to several alternative checkpoint files below. Feel free to experiment with these options to determine which works best for you.
- Z-Image-Turbo (GGUF Quantizations): This repository hosts various quantized versions of the z_image_turbo model (e.g., Q4_K_S, Q4_K_M, Q3_K_S). While some of these quantizations offer significantly reduced file sizes, this often comes at the expense of final output quality.
- Z-Image-Turbo (FP8 SafeTensors): Similar to the GGUF options, this repository provides two z_image_turbo models quantized to FP8 (8-bit floating point) in SafeTensors format. These can serve as replacements for the original SafeTensors model, but in my opinion, they degrade quality quite a bit.
- Qwen3-4B (Various GGUF Quantizations): This repository offers various quantized versions of the Qwen3-4B text encoder in GGUF format (e.g., Q2_K, Q3_K_M). Note: Quantizations beginning with "IQ" might not work, as the GGUF node did not support them during my testing.
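When browsing a quantization repository, the caveat above can be turned into a simple filename filter. This is an illustrative heuristic, not part of the workflows; the regex for extracting the quant tag is an assumption based on common GGUF naming:

```python
import re

def is_supported_quant(filename: str) -> bool:
    """Heuristic from the note above: skip GGUF files whose quant tag
    starts with "IQ" (importance-matrix quants such as IQ3_XS), since
    the GGUF node did not load them in the author's testing."""
    match = re.search(r"[-.](I?Q\d\w*)", filename)
    return bool(match) and not match.group(1).startswith("IQ")
```

For example, `is_supported_quant("Qwen3-4B.i1-Q5_K_S.gguf")` passes, while an `IQ3_XS` file would be skipped.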
This project is licensed under the Unlicense.
See the "LICENSE" file for details.