A powerful Python-based text-to-image generation tool using Stable Diffusion models with optimized performance and an intuitive Gradio interface. Generate high-quality images from text descriptions with advanced customization options.
- Stable Diffusion 1.5
- Stable Diffusion 2.1
- Dreamshaper 8
- Automatic VRAM-based optimization
- xFormers memory efficient attention (when available)
- VAE slicing for better memory usage
- Sequential CPU offload for low VRAM systems
- Dynamic model caching
- Adaptive guidance scaling (7-30 range)
- Quality enhancement prompts
- Comprehensive negative prompts
- Custom seed support
- Adjustable image dimensions (512-1024px)
- Configurable inference steps (20-100)
- Clean, intuitive Gradio interface
- Example prompts gallery
- Real-time generation details
- Advanced settings accordion
- Random prompt generation
- Clear function
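The generation controls listed above (adaptive guidance scale, custom seed, adjustable dimensions, configurable steps, quality and negative prompts) can be sketched as a small settings helper. Everything below is illustrative, not the repository's actual code: the function name, the prompt strings, and the clamping behavior are assumptions based on the advertised ranges.

```python
# Hypothetical sketch of how the generation settings listed above could be
# validated and assembled before a pipeline call; names are illustrative only.

QUALITY_SUFFIX = ", highly detailed, sharp focus"             # quality enhancement prompt
NEGATIVE_PROMPT = "blurry, low quality, deformed, watermark"  # comprehensive negative prompt

def clamp(value, low, high):
    """Clamp a numeric value into an inclusive range."""
    return max(low, min(high, value))

def build_generation_settings(prompt, seed=None, guidance=7.5,
                              steps=50, width=768, height=768):
    """Return keyword arguments for a text-to-image pipeline call,
    enforcing the ranges advertised in the feature list."""
    return {
        "prompt": prompt + QUALITY_SUFFIX,
        "negative_prompt": NEGATIVE_PROMPT,
        "guidance_scale": clamp(guidance, 7, 30),      # adaptive 7-30 range
        "num_inference_steps": clamp(steps, 20, 100),  # 20-100 steps
        "width": clamp(width, 512, 1024),              # 512-1024 px
        "height": clamp(height, 512, 1024),
        "seed": seed,                                  # custom seed (None = random)
    }
```

With diffusers, the seed would typically be converted to a `torch.Generator().manual_seed(seed)` object before being passed to the pipeline.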
Below are sample outputs generated by the Text to Image Generator using various prompts and models:
Note: Output quality and style may vary depending on the selected model and prompt.
- Python 3.13
- CUDA-compatible GPU (recommended)
- CPU-only mode supported
- Open `Text-to-img.ipynb` in Google Colab
- Select 'Runtime' > 'Change runtime type' > Choose 'T4 GPU'
- Run all cells in sequence
- Obtain your access token from your Hugging Face account settings.
- Add the token to the Colab notebook by running the following code in a cell:

```python
from huggingface_hub import login

login("your_huggingface_token_here")
```
- Obtain your access token from your Hugging Face account settings.
- Create a file named `token` inside the `huggingface` directory:

```shell
echo "your_huggingface_token_here" > huggingface/token
```
Alternatively, you can securely add your Hugging Face token directly in Google Colab by storing it as a secret:

```
HF_TOKEN=YOUR_HUGGINGFACE_TOKEN
```

Note: Python 3.13 is required. If setup fails while building wheels, run `pip install --upgrade pip setuptools wheel` and restart the kernel.
```shell
git clone https://github.com/SauRavRwT/text-to-img.git
cd text-to-img
```

Windows:

```shell
python -m venv venv
venv\Scripts\activate
```

Linux/Mac:

```shell
python -m venv venv
source venv/bin/activate
```

Install the dependencies and start the app:

```shell
pip install -r requirements.txt
python app.py
```

- The system automatically detects available VRAM and applies appropriate optimizations
- For systems with less than 6GB VRAM: Uses sequential CPU offload
- For systems with 6-8GB VRAM: Uses model CPU offload
- For systems with >8GB VRAM: Runs fully on GPU with memory optimizations
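The VRAM tiers above can be sketched as a small selection helper. The function name and return values are hypothetical; the comments reference real diffusers pipeline methods (`enable_sequential_cpu_offload`, `enable_model_cpu_offload`), but this is a sketch of the decision logic, not the app's actual code.

```python
def choose_offload_strategy(vram_gb):
    """Map detected VRAM (in GB) to a memory strategy, following the tiers
    described above. Each strategy corresponds to a diffusers pipeline call:
    pipe.enable_sequential_cpu_offload(), pipe.enable_model_cpu_offload(),
    or simply pipe.to("cuda") for full-GPU execution."""
    if vram_gb < 6:
        return "sequential_cpu_offload"  # <6GB: offload layer by layer
    elif vram_gb <= 8:
        return "model_cpu_offload"       # 6-8GB: offload whole sub-models
    else:
        return "full_gpu"                # >8GB: keep everything on the GPU
```

In practice the VRAM amount would come from `torch.cuda.get_device_properties(0).total_memory` when a CUDA device is available.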
- Reduce image dimensions
- Decrease inference steps
- Use a lower resolution model
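The memory-saving tips above can be sketched as a fallback helper that steps the settings down toward the tool's minimums (512 px, 20 steps) after an out-of-memory error. The function is illustrative only, not part of the app.

```python
def reduce_memory_pressure(width, height, steps):
    """Return a cheaper (width, height, steps) triple after an out-of-memory
    error: halve the image dimensions and the inference steps, without going
    below the minimums the tool supports (512 px, 20 steps)."""
    new_width = max(512, width // 2)
    new_height = max(512, height // 2)
    new_steps = max(20, steps // 2)
    return new_width, new_height, new_steps
```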