The Visual Design-to-Production System is an end-to-end, AI-powered pipeline that transforms UI mockups, wireframes, or screenshots into production-ready, responsive, and accessible frontend code.
This project leverages a sophisticated multi-agent architecture built with Google's Agent Development Kit (ADK) and powered by Google's Gemini model family to automate the entire designer-to-developer handoff process.
Note: This project was developed as part of the Google AI Sprint H2, 2025. We gratefully acknowledge Google for providing the Google Cloud credits that made this development possible.
- Automated UI/UX Implementation: Drastically reduce the time it takes to translate visual designs into functional code.
- Rapid Prototyping: Go from a rough wireframe or even a sketch to an interactive prototype in minutes.
- Design System Enforcement: The system analyzes design tokens (colors, typography, spacing) and ensures the generated code adheres to them (a hypothetical token set is sketched after this list).
- Accessibility First: Automatically generates code that complies with WCAG 2.1 AA, including proper semantic HTML and ARIA attributes.
- Responsive by Default: The architecture planning pipeline defines responsive breakpoints, ensuring the generated UI works across all screen sizes.
- Framework Agnostic: Easily configurable to generate code for popular frontend frameworks like React, Vue, Angular, or vanilla HTML/CSS.
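To make the Design System Enforcement feature concrete, extracted design tokens might take a shape like the sketch below. This schema is hypothetical, not the system's actual output format:

```python
# Hypothetical shape of extracted design tokens (illustrative only;
# the system's real token schema may differ).
design_tokens = {
    "colors": {"primary": "#1A73E8", "surface": "#FFFFFF", "text": "#202124"},
    "typography": {
        "heading": {"family": "Roboto", "size": "24px", "weight": 700},
        "body": {"family": "Roboto", "size": "16px", "weight": 400},
    },
    "spacing": {"sm": "8px", "md": "16px", "lg": "24px"},
    "breakpoints": {"mobile": "0px", "tablet": "768px", "desktop": "1024px"},
}
```

Downstream generation steps can then reference these tokens when writing styles, which is how adherence to the design system is enforced.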
The project includes an interactive web interface built with Streamlit. You can upload your design images directly and see the agent generate code in real-time.
(Note: This is a representative image of the UI you can run locally.)
To run the UI:
- Install dependencies:

  ```bash
  pip install -r design_to_production/deployment/requirements.txt
  ```

- Run the app:

  ```bash
  streamlit run design_to_production/ui/app.py
  ```
The system uses a "chain of agents" approach, where a coordinator delegates tasks to specialized pipelines. This makes the process modular, robust, and easy to debug; a minimal sketch of this composition follows the pipeline list below.
- Coordinator Agent: The central orchestrator that manages the entire workflow.
- Design Analysis Pipeline:
  - Image Processor: Normalizes the input image.
  - Token Extractor: Uses Gemini Vision to extract design tokens (colors, fonts, etc.).
  - Pattern Identifier: Uses Gemini Vision to identify UI components like navbars, cards, and forms.
- Architecture Planning Pipeline:
  - Component Architect: Plans a component hierarchy based on Atomic Design principles.
  - Responsive Planner: Defines breakpoints and responsive behavior.
  - State Planner: Recommends a state management strategy.
- Code Generation Pipeline:
  - Component Generator: Uses Gemini to write the actual component code (e.g., React, Vue) based on the plan and tokens.
  - Style Generator: Writes the corresponding CSS, Tailwind, or styled-components code.
  - Test Generator: Creates unit tests for the components.
- Validation Pipeline:
  - Runs concurrent checks for accessibility, responsiveness, and design fidelity.
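As a rough illustration of how such a chain can be composed with ADK, here is a minimal sketch. The agent names, instructions, and model ID are assumptions, the coordinator is modeled as a simple `SequentialAgent` (the repo's actual coordinator may use LLM-driven delegation instead), and two of the four pipelines are omitted for brevity; see AGENTS.md for the real architecture:

```python
# Minimal chain-of-agents sketch using Google ADK. Agent names and
# instructions are illustrative, not this repo's actual definitions.
from google.adk.agents import LlmAgent, ParallelAgent, SequentialAgent

MODEL = "gemini-2.0-flash"  # assumption: any Gemini model ID could be used


def worker(name: str, instruction: str) -> LlmAgent:
    """Builds one specialized worker agent."""
    return LlmAgent(
        name=name,
        model=MODEL,
        instruction=instruction,
        output_key=name,  # stores the agent's final response in session state
    )


# Each pipeline runs its workers in order.
design_analysis = SequentialAgent(
    name="design_analysis_pipeline",
    sub_agents=[
        worker("image_processor", "Normalize the input design image."),
        worker("token_extractor", "Extract design tokens: colors, fonts, spacing."),
        worker("pattern_identifier", "Identify UI components: navbars, cards, forms."),
    ],
)

# Validation fans its checks out concurrently.
validation = ParallelAgent(
    name="validation_pipeline",
    sub_agents=[
        worker("accessibility_checker", "Check WCAG 2.1 AA compliance."),
        worker("responsive_checker", "Check behavior at each breakpoint."),
        worker("fidelity_checker", "Compare the output against the source design."),
    ],
)

# The coordinator chains the pipelines (architecture planning and code
# generation omitted here for brevity).
coordinator = SequentialAgent(
    name="coordinator",
    sub_agents=[design_analysis, validation],
)
```

`SequentialAgent` runs its sub-agents in order while `ParallelAgent` executes them concurrently, matching the sequential pipelines and concurrent validation checks described above.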
For a complete technical breakdown of the agent architecture, see AGENTS.md.
Prerequisites:
- Python 3.11+
- `pip` and `venv` (or your preferred virtual environment tool)
- An active Google Cloud project with the Vertex AI API enabled (optional, for Vertex AI deployment)
1. Clone the repository:

   ```bash
   git clone <your-repo-url>
   cd <your-repo-name>
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r design_to_production/deployment/requirements.txt
   ```

4. Configure your environment:
   - Copy the `.env.example` file to a new file named `.env`:

     ```bash
     cp .env.example .env
     ```

   - Open the `.env` file and add your `GOOGLE_API_KEY` (a sample `.env` is sketched below).
   - If you plan to use Figma, add your `FIGMA_ACCESS_TOKEN`.
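For reference, a filled-in `.env` could look like the following; the values are placeholders, and the two variable names are those mentioned above:

```bash
# .env -- placeholder values, replace with your own credentials
GOOGLE_API_KEY=your-gemini-api-key
FIGMA_ACCESS_TOKEN=your-figma-token  # optional, only needed for Figma import
```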
There are two ways to run the system:
1. Interactive Web UI (Recommended):

   ```bash
   streamlit run design_to_production/ui/app.py
   ```

   Navigate to the URL provided by Streamlit (usually http://localhost:8501) in your browser. You can drag and drop design images directly into the UI.
2. Interactive CLI:

   ```bash
   python run.py
   ```

   This will start a command-line chat session. You can provide text prompts or paste the full path to a local image file.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines on how to submit pull requests.
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
