An enhanced command-line AI workflow tool forked from Google Gemini CLI with improved reliability, model fallback mechanisms, and flexible configuration options.
- Automatic Model Fallback: When your primary model is unavailable (quota exhausted, streaming errors), the CLI automatically switches to backup models
- Flexible Configuration: Easy-to-use `.env` file configuration with environment variable support
- Multiple Auth Methods: Support for both OpenAI-compatible APIs and the Google Gemini API
- Intelligent Error Handling: Recognizes and handles model exhaustion, rate limits, and streaming failures
- Query and edit large codebases in and beyond Gemini's 1M token context window
- Generate new apps from PDFs or sketches using multimodal capabilities
- Automate operational tasks like querying pull requests or handling complex rebases
- Use tools and MCP servers to connect new capabilities
- Ground your queries with Google Search integration
- Node.js version 18+ installed
Option 1: Install globally

```bash
npm install -g @rv192/gem-cli
gen
```

Option 2: Run directly

```bash
npx https://github.com/rv192/gen-cli
```

Create a `.env` file in your project root or home directory:
```env
# OpenAI-compatible API (recommended)
OPENAI_BASE_URL=https://your-api-endpoint.com
OPENAI_API_KEY=your-api-key
DEFAULT_MODEL=gemini-2.5-pro
FALLBACK_MODELS=gemini-2.5-flash,gemini-1.5-pro,gemini-2.0-flash

# Or use the Google Gemini API directly
GEMINI_API_KEY=your-gemini-api-key
```

This works well with SiliconFlow, OpenRouter, or other OpenAI-compatible services:

```bash
export OPENAI_BASE_URL="https://api.siliconflow.cn/v1"
export OPENAI_API_KEY="your-api-key"
```

For direct Google Gemini API access:
- Generate a key from Google AI Studio
- Set the environment variable:

```bash
export GEMINI_API_KEY="your-api-key"
```
Once configured, start the CLI and begin interacting:
```bash
gen
```

On first run:
1. Pick a color theme
2. Authenticate: When prompted, sign in with your personal Google account. This grants you up to 60 model requests per minute and 1,000 model requests per day using Gemini.
You are now ready to use the Gemini CLI!
The Gemini API provides a free tier with 100 requests per day using Gemini 2.5 Pro, control over which model you use, and access to higher rate limits (with a paid plan):

1. Generate a key from Google AI Studio.
2. Set it as an environment variable in your terminal, replacing `YOUR_API_KEY` with your generated key:

   ```bash
   export GEMINI_API_KEY="YOUR_API_KEY"
   ```

3. (Optionally) Upgrade your Gemini API project to a paid plan on the API key page; this automatically unlocks Tier 1 rate limits.
Vertex AI provides a free tier using express mode for Gemini 2.5 Pro, control over which model you use, and access to higher rate limits with a billing account:

1. Generate a key from Google Cloud.
2. Set it as an environment variable in your terminal, replacing `YOUR_API_KEY` with your generated key, and set `GOOGLE_GENAI_USE_VERTEXAI` to true:

   ```bash
   export GOOGLE_API_KEY="YOUR_API_KEY"
   export GOOGLE_GENAI_USE_VERTEXAI=true
   ```

3. (Optionally) Add a billing account to your project to get access to higher usage limits.
For other authentication methods, including Google Workspace accounts, see the authentication guide.
Once the CLI is running, you can start interacting with Gemini from your shell.
You can start a project from a new directory:
```bash
cd new-project/
gen
> Write me a TODO app in React with Tailwind CSS that can track daily tasks
```

Or work with an existing codebase:

```bash
cd your-project/
gen
> Analyze this codebase and suggest performance improvements
> Implement a new feature based on GitHub issue #123
```

When your primary model (e.g., gemini-2.5-pro) is unavailable:
```
Trying model: gemini-2.5-pro
Model gemini-2.5-pro failed: Streaming failed after 3 retries, trying next model...
Trying model: gemini-2.5-flash
✅ Successfully connected with gemini-2.5-flash
```
| Variable | Description | Example |
|---|---|---|
| `DEFAULT_MODEL` | Primary model to use | `gemini-2.5-pro` |
| `FALLBACK_MODELS` | Comma-separated backup models | `gemini-2.5-flash,gemini-1.5-pro` |
| `OPENAI_BASE_URL` | API endpoint URL | `https://api.siliconflow.cn/v1` |
| `OPENAI_API_KEY` | API authentication key | `your-api-key` |
| `GEMINI_API_KEY` | Google Gemini API key | `your-gemini-key` |
The CLI selects models in this order:
1. Command-line `--model` parameter
2. `DEFAULT_MODEL` environment variable
3. `GEMINI_MODEL` environment variable (legacy)
4. Built-in default model
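As a rough illustration, the selection order can be sketched in shell. This is not the CLI's actual implementation; `pick_model` is a hypothetical helper, and the built-in default shown is assumed:

```shell
# Hypothetical sketch of the model-selection order (the real CLI does this internally).
pick_model() {
  # $1 stands in for the value of a --model flag, if any
  if [ -n "$1" ]; then echo "$1"                            # 1. command-line --model
  elif [ -n "$DEFAULT_MODEL" ]; then echo "$DEFAULT_MODEL"  # 2. DEFAULT_MODEL
  elif [ -n "$GEMINI_MODEL" ]; then echo "$GEMINI_MODEL"    # 3. GEMINI_MODEL (legacy)
  else echo "gemini-2.5-pro"                                # 4. built-in default (assumed)
  fi
}

pick_model ""                                  # no flag, no env vars: built-in default
DEFAULT_MODEL=gemini-2.5-flash pick_model ""   # DEFAULT_MODEL takes precedence
```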
When a model fails due to:
- Quota exhaustion
- Rate limiting
- Streaming errors
- Server errors
The CLI automatically tries the next available model from your FALLBACK_MODELS list.
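Conceptually, the fallback behaves like this shell sketch. It is illustrative only: `try_model` is a stand-in for a real connection attempt, hard-coded here to fail for the primary model:

```shell
# Illustrative sketch of the fallback order; not the CLI's actual code.
FALLBACK_MODELS="gemini-2.5-flash,gemini-1.5-pro"

try_model() {
  # Stand-in for a real connection attempt: pretend the primary model's quota is exhausted.
  [ "$1" != "gemini-2.5-pro" ]
}

connect_with_fallback() {
  # Try the primary model first, then each entry of FALLBACK_MODELS in order.
  for model in "$1" $(echo "$FALLBACK_MODELS" | tr ',' ' '); do
    if try_model "$model"; then
      echo "Successfully connected with $model"
      return 0
    fi
    echo "Model $model failed, trying next model..."
  done
  return 1
}

connect_with_fallback gemini-2.5-pro
```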
> Describe the main pieces of this system's architecture
> What security mechanisms are in place?
> Implement a first draft for GitHub issue #123
> Help me migrate this codebase to the latest version of Java
> Make me a slide deck showing git history from the last 7 days
> Create a full-screen web app for displaying GitHub issues
> Generate a project status report from recent commits
> Convert all images in this directory to PNG format
> Organize my PDF invoices by month of expenditure
> Analyze log files and summarize error patterns
> Review this pull request and suggest improvements
> Generate unit tests for the selected functions
> Create documentation for this API endpoint
Model not responding:
- Check your API key is valid
- Verify your API endpoint URL
- Ensure you have sufficient quota/credits
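One quick way to test the key and endpoint together, assuming your provider is OpenAI-compatible (such endpoints expose a `/models` listing route; `check_endpoint` below is just a convenience name for this sketch):

```shell
# Sketch of a connectivity check for an OpenAI-compatible endpoint.
# OPENAI_BASE_URL is expected to already include the /v1 prefix, as in the examples above.
check_endpoint() {
  if [ -z "$OPENAI_BASE_URL" ] || [ -z "$OPENAI_API_KEY" ]; then
    echo "OPENAI_BASE_URL or OPENAI_API_KEY not set"
    return
  fi
  # A 200 response from /models means both the URL and the key are accepted.
  curl -sf -H "Authorization: Bearer $OPENAI_API_KEY" "$OPENAI_BASE_URL/models" \
    || echo "endpoint check failed"
}

check_endpoint
```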
"Streaming failed" errors:
- The fallback mechanism should handle this automatically
- Check your `FALLBACK_MODELS` configuration
- Verify backup models are available
Configuration not loading:
- Ensure the `.env` file is in the correct location
- Check that environment variable names are correct
- Restart the CLI after configuration changes
For more help, see the troubleshooting guide.
- CLI Commands - Complete command reference
- Authentication Guide - Detailed auth setup
- Full Documentation - Comprehensive guides
- Contributing - Development and contribution guide
```bash
npm uninstall -g @rv192/gem-cli
```

For detailed uninstallation instructions, see the Uninstall Guide.
This project is forked from Google Gemini CLI. For terms of service and privacy notice, see the Terms of Service and Privacy Notice.
Contributions are welcome! Please read the Contributing Guide for details on our development process.