WALL-E is a GitHub bot that supercharges spec-driven development through automated generation of Cloudflare Workers. Based on worker functional requirements and integration tests (Spec File), WALL-E creates corresponding worker code, streamlining the development process.
To install the bot, visit the GitHub App installation page and follow these steps:
- On the app page, click "Install" in the top-right corner
- Select your organization or personal account where you'd like to install the app.
- Choose "All repositories" or select specific ones where the bot should be active.
- After selecting repositories, click "Install" again to finish.
Once installed, the bot will automatically start working based on your repository configuration.
- For a new project:
  Create a Cloudflare Worker project by running:

  ```sh
  npm create cloudflare@latest -- your-worker-name
  ```

  Replace `your-worker-name` with the desired name of your worker. This command initializes a new project in a directory named after your worker.
- For an existing project:
  Ensure your project includes the required `test/index.spec.ts` file (details below).

Open a pull request that includes your `test/index.spec.ts` file.
Follow spec file best practices for creating your `test/index.spec.ts` file.
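As an illustration, here is a minimal, self-contained sketch of the kind of behavior a spec file pins down: functional requirements as comments, plus checks on the worker's input/output interfaces. A real `test/index.spec.ts` would express these checks as Vitest integration tests; the endpoint and response shape below are invented for the example.

```typescript
// Functional requirements (hypothetical example worker):
// 1. GET /hello returns JSON { message: "hello" } with status 200.
// 2. Any other path returns status 404.

const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (request.method === "GET" && url.pathname === "/hello") {
      return Response.json({ message: "hello" });
    }
    return new Response("Not Found", { status: 404 });
  },
};

// Direct calls standing in for Vitest `expect` assertions:
const ok = await worker.fetch(new Request("http://example.com/hello"));
console.log(ok.status); // 200

const missing = await worker.fetch(new Request("http://example.com/nope"));
console.log(missing.status); // 404
```

The more precisely the comments and tests pin down behavior, the less room the generator has to guess.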
Activate WALL-E in a pull request by commenting:

```
/wall-e generate
```
For more control, use optional parameters:

```
/wall-e generate path:workers/generate-embeddings provider:openai temperature:0.8
```
| Parameter | Aliases | Description | Default |
|---|---|---|---|
| `path` | | custom path to a worker dir | repository root |
| `provider` | | provider for code generation | `anthropic` |
| `model` | | model name from the provider | `claude-sonnet-4-5-20250929` |
| `temperature` | `temp` | model temperature setting (0-1) | `0.5` |
| `fallback` | | whether or not to use fallback models | `true` |
Supported providers: `anthropic`, `openai`, `googleai`
Supported models: `claude-sonnet-4-5-20250929`, `claude-sonnet-4-5-20250929-thinking`, `claude-opus-4-5-20251101`, `claude-opus-4-5-20251101-thinking`, `gpt-4.1`, `o4-mini-2025-04-16`, `o3-pro-2025-06-10`, `gemini-2.5-pro`, `gemini-2.5-flash`
The Improve Feature uses the existing code and spec file to generate improved code based on the feedback you provide in the comment. Example:
```
/wall-e improve path:workers/deduplicated-insert provider:googleai
---
- No need to import "Ai" from `cloudflare:ai` package
- Update "AI" binding to use "Ai" as type
```
Use this feature when you need to improve generated code with aspects not covered in spec files, such as:
- Fixing imports
- Adjusting types
- Correcting API usage
- Correcting typos
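For instance, the feedback in the improve example above asks for the Workers AI binding to be typed as `Ai` without an import. A sketch of the corrected shape is below; note the `Ai` type here is a simplified stand-in (in a real worker it comes from `@cloudflare/workers-types`), and the model name is purely illustrative.

```typescript
// Simplified stand-in for the Ai binding type provided by
// @cloudflare/workers-types -- no import from a "cloudflare:ai" package.
type Ai = {
  run(model: string, input: Record<string, unknown>): Promise<unknown>;
};

interface Env {
  AI: Ai; // binding typed as the built-in Ai type
}

const worker = {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // The model name is illustrative only.
    const output = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt: "Say hello",
    });
    return Response.json(output);
  },
};
```

Calling `worker.fetch` with a stubbed `env.AI` is enough to exercise this shape in a test without a real binding.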
The prompt sent to the LLM consists of two sections: the instructions and the spec file.
The instructions section explains the task and the general environment. It is relatively static and shouldn't change often.
The spec file is copied from the head branch and should contain two important sections: comments covering all functional requirements, and Vitest integration tests covering all input/output interfaces as well as any business-logic edge cases.
Please adhere to our best practices when writing your spec files!
Open-source workers generated by WALL-E running in production:
You may well be convinced that your home-made ravioli are superior to the ones made by soulless machines, but it's getting hard to compete with the latest-gen LLMs on code quality and efficiency for smaller workers. If you have more complex projects, it's probably a good idea to split them into smaller components anyway.