Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
StableGen is an open-source Blender plugin designed to seamlessly integrate advanced diffusion models (SDXL, FLUX.1-dev) into your creative process. Generate complex, coherent, and controllable textures for your 3D models and entire scenes using a flexible ComfyUI backend.
StableGen empowers 3D artists by bringing cutting-edge AI texturing capabilities into Blender:
- Scene-Wide Multi-Mesh Texturing:
- Don't just texture one mesh at a time! StableGen is designed to apply textures to all mesh objects in your scene simultaneously from your defined camera viewpoints. Alternatively, you can choose to texture only selected objects.
- Achieve a cohesive look across entire environments or collections of assets in a single generation pass.
- Ideal for concept art, look development for complex scenes, and batch-texturing asset libraries.
- Multi-View Consistency:
- Sequential Mode: Generates textures viewpoint by viewpoint on each mesh, using inpainting and visibility masks for high consistency across complex surfaces.
- Grid Mode: Processes multiple viewpoints for all meshes simultaneously for faster previews. Includes an optional refinement pass.
- Sophisticated weighted blending ensures smooth transitions between views.
- Precise Geometric Control with ControlNet:
- Leverage multiple ControlNet units (Depth, Canny, Normal) simultaneously to ensure generated textures respect your model's geometry.
- Fine-tune strength, start/end steps for each ControlNet unit.
- Supports custom ControlNet model mapping.
- Powerful Style Guidance with IPAdapter:
- Use external reference images to guide the style, mood, and content of your textures with IPAdapter.
- Employ IPAdapter without a reference image for enhanced consistency in multi-view generation modes.
- Control IPAdapter strength, weight type, and active steps.
- Flexible ComfyUI Backend:
- Connects to your existing ComfyUI installation, allowing you to use your preferred SDXL checkpoints and custom LoRAs. Experimental support for FLUX.1-dev.
- Offloads heavy computation to the ComfyUI server, keeping Blender mostly responsive.
- Advanced Inpainting & Refinement:
- Refine Mode (Img2Img): Re-style, enhance, or add detail to existing textures (StableGen generated or otherwise) using an image-to-image process. Choose to preserve original textures for localized refinement.
- UV Inpaint Mode: Intelligently fills untextured areas directly on your model's UV map using surrounding texture context.
- Integrated Workflow Tools:
- Camera Setup: Quickly add and arrange multiple cameras around your subject.
- View-Specific Prompts: Assign unique text prompts to individual camera viewpoints for targeted details.
- Texture Baking: Convert complex procedural StableGen materials into standard UV image textures.
- HDRI Setup, Modifier Application, Curve Conversion, GIF/MP4 Export & Reproject.
- Preset System:
- Get started quickly with built-in presets for common scenarios (e.g., "Default", "Characters", "Quick Draft").
- Save and manage your own custom parameter configurations for repeatable workflows.
See what StableGen can do!
This showcase demonstrates how StableGen can texture the model using a standard prompt and then with style guidance from an IPAdapter image reference.
3D Model Source: "Brown" by ucupumar - Available at: BlendSwap (Blend #15262)
Reference: "The Starry Night" by Vincent van Gogh (used to guide the "Artistic Style" variant)
This showcase demonstrates how StableGen can texture a car model using different textual prompts implying various visual styles.
3D Model Source: "Pontiac GTO 67" by thecali - Available at: BlendSwap (Blend #13575)
| Untextured Model | Generated | Generated | Generated |
|---|---|---|---|
| Base Untextured Model | Prompt: "green car" | Prompt: "steampunk style car" | Prompt: "stealth black car" |
This showcase demonstrates how StableGen can texture a more complex scene consisting of many mesh objects.
3D Model Source: "Subway Station Entrance" by argonius - Available at: BlendSwap (Blend #19305)
StableGen acts as an intuitive interface within Blender that communicates with a ComfyUI backend.
- You set up your scene and parameters in the StableGen panel.
- StableGen prepares necessary data (like ControlNet inputs from camera views).
- It constructs a workflow and sends it to your ComfyUI server.
- ComfyUI processes the request using your selected diffusion models.
- Generated images are sent back to Blender.
- StableGen applies these images as textures to your models using sophisticated projection and blending techniques.
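For context, ComfyUI exposes an HTTP API for queueing workflows, and the request StableGen sends is essentially a serialized node graph posted to the server's `/prompt` endpoint. The snippet below is a minimal, hedged illustration of that request shape, not StableGen's own code; the empty `workflow` dictionary is a placeholder, and the address assumes the default `127.0.0.1:8188`.

```python
import json
import urllib.error
import urllib.request

SERVER = "127.0.0.1:8188"  # default ComfyUI address; adjust to match your setup

# Placeholder workflow graph. In practice a full node graph (checkpoint loader,
# ControlNet units, sampler, ...) is serialized here; an empty graph is sent
# only to illustrate the request shape and will be rejected by the server.
workflow = {}

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(f"http://{SERVER}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))  # contains a prompt_id on success
except urllib.error.HTTPError as err:
    print("ComfyUI rejected the workflow:", err.read().decode("utf-8"))
```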
- Blender: Version 4.2 or newer.
- Operating System: Windows 10/11 or Linux.
- GPU: NVIDIA GPU with CUDA is recommended for ComfyUI. For further details, check ComfyUI's GitHub page: https://github.com/comfyanonymous/ComfyUI.
- ComfyUI: A working installation of ComfyUI. StableGen uses this as its backend.
- Python: Version 3.x (usually comes with Blender, but a system Python 3 is needed for the `installer.py` script).
- Git: Required by the `installer.py` script.
- Disk Space: Significant free space for ComfyUI, AI models (10GB to 50GB+), and generated textures.
Setting up StableGen involves installing ComfyUI, then StableGen's dependencies into ComfyUI using our installer script, and finally installing the StableGen plugin in Blender.
StableGen relies on a working ComfyUI installation as its backend. This can be done on a separate machine if desired.
If you wish to use a separate machine for the backend, complete steps 1 and 2 there.
- If you don't have ComfyUI, please follow the official ComfyUI installation guide: https://github.com/comfyanonymous/ComfyUI#installing.
- Install ComfyUI in a dedicated directory. We'll refer to this as `<YourComfyUIDirectory>`.
- Ensure you can run ComfyUI and that it's functioning correctly before proceeding.
The `installer.py` script (found in this repository) automates the download and placement of required ComfyUI custom nodes and core AI models into your `<YourComfyUIDirectory>`.
Prerequisites for the installer:
- Python 3.
- Git installed and accessible in your system's PATH.
- The path to your ComfyUI installation (`<YourComfyUIDirectory>`).
- Required Python packages for the script: `requests` and `tqdm`. Install them via pip: `pip install requests tqdm`
Running the Installer:
- Download/Locate the Installer: Get `installer.py` from this GitHub repository.
- Execute the Script:
  - Open your system's terminal or command prompt.
  - Navigate to the directory containing `installer.py`.
  - Run the script:

    ```
    python installer.py <YourComfyUIDirectory>
    ```

    Replace `<YourComfyUIDirectory>` with the actual path. If omitted, the script will prompt for it.
- Follow On-Screen Instructions:
- The script will display a menu of installation packages (Minimal, Essential, Recommended, Complete SDXL). Choose one based on your needs.
- It will download and place files into the correct subdirectories of `<YourComfyUIDirectory>`.
- Restart ComfyUI: If ComfyUI was running, restart it to load new custom nodes.
(For manual dependency installation, and for installation of FLUX.1-dev and its dependencies, see `docs/MANUAL_INSTALLATION.md`.)
- Go to the Releases page of this repository.
- Download the latest `StableGen.zip` file.
- In Blender, go to `Edit > Preferences > Add-ons > Install...`.
- Navigate to and select the downloaded `StableGen.zip` file.
- Enable the "StableGen" addon (search for "StableGen" and check the box).
- In Blender, go to `Edit > Preferences > Add-ons`.
- Find "StableGen" and expand its preferences.
- Set the following paths:
  - Output Directory: Choose a folder where StableGen will save generated images.
  - Server Address: Ensure this matches your ComfyUI server (default `127.0.0.1:8188`).
- Review ControlNet Mapping if using custom-named ControlNet models.
- Enable online access in Blender if it isn't enabled already: select `Edit > Preferences` from Blender's topbar, then navigate to `System > Network` and check the `Enable Online Access` box. While StableGen does not require internet access, this is needed to respect Blender's add-on guidelines, as network calls are still being made locally.
Here's how to get your first texture generated with StableGen:
- Start ComfyUI Server: Make sure it's running in the background.
- Open Blender & Prepare Scene:
- Have a mesh object ready (e.g., the default Cube).
- Ensure the StableGen addon is enabled and configured (see Step 4 above).
- Access StableGen Panel: Press `N` in the 3D Viewport and go to the "StableGen" tab.
- Add Cameras (Recommended for Multi-View):
  - Select your object.
  - In the StableGen panel, click "Add Cameras". Choose `Object` as the center type. Adjust interactively if needed, then confirm.
- Set Basic Parameters:
  - Prompt: Type a description (e.g., "ancient stone wall with moss").
  - Checkpoint: Select a checkpoint (e.g., `sdxl_base_1.0`).
  - Generation Mode: `Sequential` is a good starting point for consistency.
- Hit Generate! Click the main "Generate" button.
- Observe: Watch the progress in the panel and the ComfyUI console. Your object should update with the new texture! Output files will be in your specified "Output Directory".
- By default, the generated texture will only be visible in the Rendered viewport shading mode (CYCLES Render Engine).
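If you prefer scripting the viewport switch over clicking through the UI, the short snippet below uses standard Blender Python (it is not part of StableGen) to set the scene to Cycles and put every 3D Viewport into Rendered shading.

```python
import bpy

# Use Cycles so the projected StableGen material renders as intended.
bpy.context.scene.render.engine = 'CYCLES'

# Switch every 3D Viewport in the current screen to Rendered shading.
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        for space in area.spaces:
            if space.type == 'VIEW_3D':
                space.shading.type = 'RENDERED'
```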
StableGen provides a comprehensive interface for controlling your AI texturing process, from initial setup to final output. Here's an overview of the main sections and tools available in the StableGen panel:
These are the main operational buttons and initial setup tools, generally found near the top of the StableGen panel:
- Generate / Cancel Generation (Main Button): This is the primary button to start the AI texture generation process for mesh objects based on your current settings. It communicates with the ComfyUI backend. While processing, the button changes to "Cancel Generation," allowing you to stop the current task. Progress bars will appear below this button during generation.
- Bake Textures: Converts the dynamic, multi-projection material StableGen creates on your meshes into a single, standard UV-mapped image texture per object. This is essential for exporting or simplifying scenes. You can set the resolution and UV unwrapping method for the bake. This option is crucial for finalizing your AI-generated textures into a portable format. (A conceptual Blender Python sketch of this baking step follows this list.)
- Add Cameras: Helps you quickly set up multiple viewpoints. It creates a circular array of Blender cameras around the active object (if "Object" center type is chosen) or the current 3D view center. You can specify the number of cameras and interactively adjust their positions before finalizing.
- Collect Camera Prompts: Cycles through all cameras in your scene, allowing you to type a specific descriptive text prompt for each viewpoint (e.g., "front view," "close-up on face"). These per-camera prompts are used in conjunction with the main prompt if `Use camera prompts` is enabled in `Viewpoint Blending Settings`.
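To illustrate what the baking step boils down to, here is a rough sketch of baking an object's material to a UV image with standard Blender Python. This is only a conceptual outline, not StableGen's actual implementation; it assumes the active object is UV-unwrapped and has a node-based material, and names such as `baked_texture` are placeholders.

```python
import bpy

obj = bpy.context.active_object  # assumes a UV-unwrapped mesh with a node material
image = bpy.data.images.new("baked_texture", width=2048, height=2048)

# Add an Image Texture node and make it active so Cycles bakes into this image.
nodes = obj.active_material.node_tree.nodes
tex_node = nodes.new("ShaderNodeTexImage")
tex_node.image = image
nodes.active = tex_node

bpy.context.scene.render.engine = 'CYCLES'  # baking requires Cycles
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'}, use_clear=True)

image.filepath_raw = "//baked_texture.png"  # saved next to the .blend file
image.file_format = 'PNG'
image.save()
```

The real Bake Textures operator additionally handles unwrapping, resolution choice, and multi-object scenes, but the result is the same kind of portable UV image texture.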
- Located prominently in the UI, this system allows you to:
  - Select a Preset: Choose from built-in configurations (e.g., `Default`, `Characters`, `Quick Draft`) for common scenarios, or select `Custom` to use your current settings.
  - Apply Preset: If you modify a stock preset, this button applies its original values.
  - Save Preset: When your settings are `Custom`, this allows you to save your current configuration as a new named preset.
  - Delete Preset: Removes a selected custom preset.
These are your primary controls for defining the generation:
- Prompt: The main text description of the texture you want to generate.
- Checkpoint: Select the base SDXL checkpoint.
- Architecture: Choose between `SDXL` and `Flux 1` (experimental) model architectures.
- Generation Mode: Defines the core strategy for texturing:
  - `Generate Separately`: Each viewpoint generates independently.
  - `Generate Sequentially`: Viewpoints generate one by one, using inpainting from previous views for consistency.
  - `Generate Using Grid`: Combines all views into a grid for a single generation pass, with an optional refinement step.
  - `Refine/Restyle Texture (Img2Img)`: Uses the current texture as input for an image-to-image process.
  - `UV Inpaint Missing Areas`: Fills untextured areas on a UV map via inpainting.
- Target Objects: Choose whether to texture all visible mesh objects or only selected ones.
Click the arrow next to each title to expand and access detailed settings:
- Core Generation Settings: Control diffusion basics like Seed, Steps, CFG, Negative Prompt, Sampler, Scheduler and Clip Skip.
- LoRA Management: Add and configure LoRAs (Low-Rank Adaptation) for additional style or content guidance. You can set the model and clip strength for each LoRA.
- Viewpoint Blending Settings: Manage how textures from different camera views are combined, including camera-specific prompts, discard angles, and blending weight exponents. (An illustrative weight calculation is sketched after this list.)
- Output & Material Settings: Define fallback color, material properties (BSDF), automatic resolution scaling, and options for baking textures during generation, which enables generating with more than 8 viewpoints.
- Image Guidance (IPAdapter & ControlNet): Configure IPAdapter for style transfer using external images and set up multiple ControlNet units (Depth, Canny, etc.) for precise structural control.
- Inpainting Options: Fine-tune masking and blending for `Sequential` and `UV Inpaint` modes (e.g., differential diffusion, mask blurring/growing).
- Generation Mode Specifics: Parameters unique to the selected Generation Mode, such as refinement options for Grid mode or IPAdapter consistency settings for Sequential/Separate/Refine modes.
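To make the blending controls more concrete, the sketch below shows one common way a per-view weight can be computed from the angle between a surface normal and the view direction. The exact formula StableGen uses may differ; the function and parameter names (`view_weight`, `discard_over_angle`, `weight_exponent`) are only illustrative stand-ins for the panel's Discard-Over Angle and Weight Exponent settings.

```python
import math

def view_weight(normal, view_dir, discard_over_angle=80.0, weight_exponent=2.0):
    """Illustrative per-view blending weight for a single surface point.

    normal and view_dir are unit 3D vectors (view_dir points from the surface
    toward the camera). Views that see the surface at a grazing angle beyond
    discard_over_angle (degrees) contribute nothing; otherwise the weight falls
    off with the cosine of the angle raised to weight_exponent.
    """
    cos_angle = sum(n * v for n, v in zip(normal, view_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    if angle > discard_over_angle:
        return 0.0
    return max(0.0, cos_angle) ** weight_exponent

# Example: blend two views for one surface point.
weights = [view_weight((0, 0, 1), (0, 0, 1)),            # camera facing the surface head-on
           view_weight((0, 0, 1), (0.94, 0.0, 0.342))]   # camera at roughly 70 degrees
total = sum(weights) or 1.0
blend = [w / total for w in weights]  # normalized contribution of each view
```

A higher exponent concentrates the final texture in the views that face each surface most directly, while a lower discard angle rejects grazing views entirely.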
A collection of utilities to further support your texturing workflow:
- Switch Material: For selected objects with multiple material slots, this tool allows you to quickly set a material at a specific index as the active one.
- Add HDRI Light: Prompts for an HDRI image file and sets it up as the world lighting, providing realistic illumination for your scene.
- Apply All Modifiers: Iterates through all mesh objects in the scene, applies their modifier stacks, and converts geometry instances (like particle systems or collection instances) into real mesh data. This helps prepare models for texturing.
- Convert Curves to Mesh: Converts any selected curve objects into mesh objects, which is necessary before StableGen can texture them.
- Export GIF/MP4: Creates an animated GIF and MP4 video of the currently active object, with the camera rotating around it. Useful for quickly showcasing your textured model. You can set duration, frame rate, and resolution.
- Reproject Images: This operator re-applies previously generated textures to your models using the latest `Viewpoint Blending Settings` (e.g., `Discard-Over Angle`, `Weight Exponent`). This allows you to tweak texture blending without full regeneration.
Experiment with these settings and tools to achieve a vast range of effects and control! Remember that the optimal parameters can vary greatly depending on the model, subject matter, and desired artistic style.
StableGen organizes the generated files within the Output Directory specified in your addon preferences. For each generation session, a new timestamped folder is created, helping you keep track of different iterations. The structure for each session (revision) is as follows:
- `<Output Directory>/`
  - `<SceneName>/` (based on your `.blend` file name, or the scene name if unsaved)
    - `<YYYY-MM-DDTHH-MM-SS>/` (timestamp of generation start; this is the main revision directory)
      - `generated/` (main output textures from each camera/viewpoint before being applied or baked)
      - `controlnet/` (intermediate ControlNet input images)
        - `depth/` (depth pass renders)
        - `canny/` (renders processed using the Canny edge detector)
        - `normal/` (normal pass renders)
      - `baked/` (textures baked onto UV maps using the standalone `Bake Textures` tool)
      - `generated_baked/` (textures baked as part of the generation process if "Bake Textures While Generating" is enabled)
      - `inpaint/` (files related to inpainting processes, e.g., for `Sequential` mode)
        - `render/` (renders of the previous state used as context for inpainting)
        - `visibility/` (visibility masks used during inpainting)
      - `uv_inpaint/` (files specific to the UV Inpaint mode)
        - `uv_visibility/` (visibility masks generated on UVs for UV inpainting)
      - `misc/` (other temporary or miscellaneous files, e.g., renders made for Canny edge detection input)
      - `.gif` / `.mp4` (if the `Export GIF/MP4` tool is used, these files are saved directly into the timestamped revision directory)
      - `prompt.json` (the last generated workflow, to be used in ComfyUI)
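Because the revision folders use an ISO-like timestamp, they sort chronologically by name, which makes it easy to locate the most recent output from a script. The snippet below is a small illustrative helper; the output path and scene name are placeholders you would replace with your own.

```python
from pathlib import Path

output_dir = Path("/path/to/StableGen/output")  # placeholder: your Output Directory
scene = "MyScene"                               # placeholder: your scene/.blend name

# Timestamped revision folders sort lexicographically in chronological order.
revisions = sorted(p for p in (output_dir / scene).iterdir() if p.is_dir())
latest = revisions[-1]
print("Latest revision:", latest.name)
print("Generated views:", sorted(p.name for p in (latest / "generated").glob("*")))
```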
Encountering issues? Here are some common fixes. Always check the Blender System Console (Window > Toggle System Console) AND the ComfyUI server console for error messages.
- StableGen Panel Not Showing: Ensure the addon is installed and enabled in Blender's preferences.
- "Cannot generate..." on Generate Button: Check Addon Preferences:
Output DirectoryandServer Addressmust be correctly set. The server also has to be reachable. - Connection Issues with ComfyUI:
- Make sure your ComfyUI server is running.
- Verify the
Server Addressin StableGen preferences. - Check firewall settings.
- Models Not Found (Error in ComfyUI Console):
  - Run the `installer.py` script.
  - Manually ensure models are in the correct subfolders of `<YourComfyUIDirectory>/models/` (e.g., `checkpoints/`, `controlnet/`, `loras/`, `ipadapter/`, `clip_vision/`).
  - Restart ComfyUI after adding new models or custom nodes.
- GPU Out Of Memory (OOM):
  - Enable `Auto Rescale Resolution` in `Advanced Parameters > Output & Material Settings` if it is disabled.
  - Try lower bake resolutions if baking.
  - Close other GPU-intensive applications.
- Textures not visible after generation completes:
- Switch to Rendered viewport shading (top right corner, fourth "sphere" icon)
- Textures not affected by your lighting setup:
  - Enable `Apply BSDF` in `Advanced Parameters > Output & Material Settings` and regenerate.
- Poor Texture Quality/Artifacts:
- Try using the provided presets.
- Adjust prompts and negative prompts.
  - Experiment with different Generation Modes. `Sequential` with IPAdapter is often good for consistency.
  - Ensure adequate camera coverage and an appropriate `Discard-Over Angle`.
  - Fine-tune ControlNet strength. Too low might ignore geometry; too high might yield flat results.
  - For `Sequential` mode, check inpainting and visibility mask settings.
- All Visible Meshes Textured: StableGen textures all visible mesh objects by default. You can set `Target Objects` to `Selected` to only texture selected objects.
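For the connectivity check mentioned above: ComfyUI exposes a `/system_stats` endpoint, so a short Python snippet is enough to confirm the server is reachable. The address below assumes the default `127.0.0.1:8188`; adjust it to match your setup.

```python
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1:8188/system_stats", timeout=5) as resp:
        print("ComfyUI reachable, HTTP status:", resp.status)  # 200 means the server is up
except OSError as err:
    print("ComfyUI not reachable:", err)
```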
We welcome contributions! Whether it's bug reports, feature suggestions, code contributions, or new presets, please feel free to open an issue or a pull request.
StableGen is released under the GNU General Public License v3.0. See the LICENSE file for details.
StableGen builds upon the fantastic work of many individuals and communities. Our sincere thanks go to:
- Academic Roots: This plugin originated as a Bachelor's Thesis by Ondřej Sakala at the Czech Technical University in Prague (Faculty of Information Technology), supervised by Ing. Radek Richtr, Ph.D.
- Full thesis available at: https://dspace.cvut.cz/handle/10467/123567
- Core Technologies & Communities:
- ComfyUI by ComfyAnonymous (GitHub) for the powerful and flexible backend.
- The Blender Foundation and its community for the amazing open-source 3D creation suite.
- Inspired by the following Blender add-ons:
- Pioneering Research: We are indebted to the researchers behind key advancements that power StableGen. The following list highlights some of the foundational and influential works in diffusion models, AI-driven control, and 3D texturing (links to arXiv pre-prints):
- Diffusion Models:
- Ho et al. (2020), Denoising Diffusion Probabilistic Models - 2006.11239
- Rombach et al. (2022), Latent Diffusion Models (Stable Diffusion) - 2112.10752
- AI Control Mechanisms:
- Zhang et al. (2023), ControlNet - 2302.05543
- Ye et al. (2023), IP-Adapter - 2308.06721
- Key 3D Texture Synthesis Papers:
- Chen et al. (2023), Text2Tex - 2303.11396
- Richardson et al. (2023), TEXTure - 2302.01721
- Zeng et al. (2023), Paint3D - 2312.13913
- Le et al. (2024), EucliDreamer - 2311.15573
- Ceylan et al. (2024), MatAtlas - 2404.02899
- Other Influential Works:
- Siddiqui et al. (2022), Texturify - 2204.02411
- Bokhovkin et al. (2023), Mesh2Tex - 2304.05868
- Levin & Fried (2024), Differential Diffusion - 2306.00950
The open spirit of the AI and open-source communities is what makes projects like StableGen possible.
Here are some features we plan to implement in the future (in no particular order):
- Advanced IPAdapter support: Support for custom IPAdapter models and for advanced IPAdapter parameters.
- Upscaling: Support for upscaling generated textures.
- SD 3.5 Support: Support for the Stable Diffusion 3.5 architecture.
- Automatic camera placement improvements: More advanced camera placement algorithms (e.g., based on model geometry).
- Mesh generation: Integration of mesh generation capabilities.
If you have any suggestions, please feel free to open an issue!
Ondřej Sakala
- Email: [email protected]
- X/Twitter: @sakalond
Last Updated: October 25, 2025