An interactive AI assistant powered by the open-source Gemini PyTorch implementation. Gemini CLI provides a feature-rich command-line interface with multimodal capabilities, tool integration, and advanced conversation management.
- **Interactive AI Chat**: Engage with the Gemini model through an intuitive CLI
- **Tool Integration**: Built-in tools for file operations, shell commands, web search, and more
- **Persistent Memory**: Conversation history and context management across sessions
- **Customizable Themes**: Multiple color themes and UI customization options
- **Sandbox Execution**: Safe execution environment for code and shell commands
- **File Discovery**: Intelligent file inclusion with pattern matching and gitignore support
- **Session Management**: Save, resume, and manage conversation sessions
- **Performance Tracking**: Built-in statistics and performance monitoring
- **Extension System**: Modular architecture with extension support
- **Multi-modal Support**: Text, image, audio, and video processing capabilities
- Python: 3.8 or higher
- Operating System: Windows, macOS, or Linux
- Memory: At least 4GB RAM (8GB+ recommended for larger models)
- Storage: 2GB+ free space for model weights and cache
- Docker: For sandbox execution (recommended)
- Git: For version control integration
- CUDA: For GPU acceleration (if available)
```bash
# Install from PyPI (coming soon)
pip install gemini-cli

# Or install with all features
pip install gemini-cli[all]
```

To install from source:

```bash
# Clone the repository
git clone https://github.com/kyegomez/Gemini.git
cd Gemini

# Create a virtual environment (recommended)
python -m venv gemini-env
source gemini-env/bin/activate  # On Windows: gemini-env\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install the package
pip install -e .
```

For development:

```bash
# Clone and install for development
git clone https://github.com/kyegomez/Gemini.git
cd Gemini

# Create virtual environment
python -m venv gemini-dev
source gemini-dev/bin/activate

# Install development dependencies
pip install -r requirements-dev.txt

# Install in development mode
pip install -e .[dev]
```

For a one-line installation:

```bash
curl -sSL https://raw.githubusercontent.com/kyegomez/Gemini/main/install.sh | bash
```

```bash
# Initialize configuration
gemini-cli --init

# Or start with interactive setup
gemini-cli
```

Gemini CLI looks for configuration files in the following order:
1. `./gemini/settings.json` (project-specific)
2. `~/.gemini/settings.json` (user-specific)
3. `/etc/gemini-cli/settings.json` (system-wide)
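Precedence like this is typically implemented by merging the files from least to most specific, so keys in a more specific file win while unrelated keys from broader files are kept. A minimal sketch of that idea (hypothetical, not the actual gemini-cli code; the paths come from the list above):

```python
import json
from pathlib import Path

# Search order, most specific first (paths from the list above).
CONFIG_PATHS = [
    Path("./gemini/settings.json"),             # project-specific
    Path.home() / ".gemini" / "settings.json",  # user-specific
    Path("/etc/gemini-cli/settings.json"),      # system-wide
]

def load_settings(paths=CONFIG_PATHS) -> dict:
    """Merge settings so more specific files override broader ones."""
    merged: dict = {}
    # Apply least specific first so later (more specific) updates win.
    for path in reversed(list(paths)):
        if path.is_file():
            merged.update(json.loads(path.read_text()))
    return merged
```

With this scheme a user-level `"theme"` overrides a system-wide default, while keys the user file does not set fall through to the broader files.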
Create `~/.gemini/settings.json`:

```json
{
  "model": "gemini-torch",
  "maxTokens": 4096,
  "temperature": 0.7,
  "theme": "Default",
  "autoAccept": false,
  "sandbox": {
    "enabled": true,
    "type": "docker"
  },
  "memory": {
    "enabled": true,
    "maxMemoryFiles": 50
  }
}
```

Settings can also be supplied through environment variables:

```bash
# Model configuration
export GEMINI_MODEL="gemini-torch"
export GEMINI_MAX_TOKENS=4096
export GEMINI_TEMPERATURE=0.7

# Feature toggles
export GEMINI_AUTO_ACCEPT=false
export GEMINI_SANDBOX=true
export GEMINI_DEBUG=false

# Paths
export GEMINI_CONFIG_PATH="~/.gemini/settings.json"
```

```bash
# Start interactive mode
gemini-cli

# Single prompt mode
gemini-cli "Hello, how can you help me today?"

# Include files in context
gemini-cli "@README.md Explain this project"

# Execute with specific model
gemini-cli --model gemini-large "Analyze this code: @src/main.py"
```

Once in interactive mode, you can use these commands:
- `/help` - Show available commands
- `/quit` - Exit the CLI
- `/clear` - Clear the screen
- `/stats` - Show session statistics
- `/theme [name]` - Change color theme
- `/tools` - List available tools
- `/memory show` - Display memory context
- `/chat save [name]` - Save current conversation
- `/restore [id]` - Restore from checkpoint
- `@file.py` - Include file content in prompt
- `@src/*.py` - Include multiple files with glob patterns
- `@docs/` - Include all files in directory
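Conceptually, `@` references expand to the contents of the matched files before the prompt is sent to the model. A sketch of that expansion (hypothetical; not the real gemini-cli parser, which also honors gitignore rules):

```python
from pathlib import Path

def expand_at_references(prompt: str, root: Path = Path(".")) -> str:
    """Expand @path tokens in a prompt into the referenced file contents."""
    parts = []
    for token in prompt.split():
        if not token.startswith("@"):
            parts.append(token)
            continue
        pattern = token[1:]
        # "@docs/" means every file in the directory; "@src/*.py" is a glob.
        if pattern.endswith("/"):
            pattern += "*"
        for path in sorted(root.glob(pattern)):
            if path.is_file():
                parts.append(f"--- {path.name} ---\n{path.read_text()}")
    return "\n".join(parts)
```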
- `!ls -la` - Execute shell command
- `!` - Toggle shell mode
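The `!` convention amounts to a small routing decision on each input line: a `!` prefix goes to the shell, everything else goes to the model. A minimal sketch (hypothetical, not the actual gemini-cli dispatcher):

```python
import subprocess

def handle_line(line: str) -> str:
    """Route one REPL line: '!' runs a shell command, else it is a prompt."""
    if line.startswith("!") and line != "!":
        # Strip the '!' and run the rest through the shell.
        result = subprocess.run(
            line[1:], shell=True, capture_output=True, text=True
        )
        return result.stdout + result.stderr
    # A bare '!' would toggle shell mode; plain text becomes a model prompt.
    return f"[prompt] {line}"
```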
```bash
# Use custom config file
gemini-cli --config my-config.json

# Override specific settings
gemini-cli --temperature 0.9 --max-tokens 8192

# Enable debug mode
gemini-cli --debug --verbose
```

```bash
# Enable sandbox for safe execution
gemini-cli --sandbox

# Use specific sandbox image
gemini-cli --sandbox --config '{"sandbox": {"image": "custom-image"}}'
```

```bash
# Disable memory loading
gemini-cli --no-memory

# Add persistent memory
gemini-cli
> /memory add "I prefer Python over JavaScript"
```

Gemini CLI includes a rich set of built-in tools:
File operations:

- `read_file` - Read file contents
- `write_file` - Create or modify files
- `list_directory` - Browse directories
- `glob` - Find files with patterns
- `search_file_content` - Search text in files

Code execution:

- `run_shell_command` - Execute shell commands
- `run_python_code` - Execute Python code
- `run_javascript_code` - Execute JavaScript code

Web:

- `web_fetch` - Fetch content from URLs
- `google_web_search` - Search the web

Memory:

- `save_memory` - Save information to memory
- `compress_conversation` - Compress chat history
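A tool layer like this is commonly wired up as a name-to-handler registry that the model's tool calls are dispatched through. A sketch of that pattern (hypothetical, not gemini-cli's actual internals), using `read_file` as the example:

```python
from typing import Callable, Dict

# Hypothetical name -> handler registry for tools like those listed above.
TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function under a tool name."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("read_file")
def read_file(path: str) -> str:
    """The simplest possible tool: return a file's contents."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def dispatch(name: str, **kwargs) -> str:
    """Invoke the tool the model asked for, by name."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

The registry makes the tool set extensible: new tools (including ones from extensions) only need to register a handler under a unique name.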
```text
# In interactive mode
> Can you read the contents of package.json?
# Gemini will use the read_file tool automatically

> Please create a Python script that prints "Hello World"
# Gemini will use write_file to create the script

> Search for TODO comments in my Python files
# Gemini will use the search_file_content tool
```

Available themes:

- `Default` - Standard color scheme
- `Dark` - Dark mode optimized
- `Light` - Light mode optimized
- `Monokai` - Developer-friendly
- `GitHub` - GitHub-inspired
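Theme files refer to colors by symbolic names (`BLUE`, `CYAN`, `BRIGHT_BLUE`, ...); at render time such names typically map to ANSI escape sequences. A minimal sketch of that mapping (hypothetical; gemini-cli's actual renderer may use Rich and support more names):

```python
# Map symbolic theme color names to ANSI escape codes (a common convention;
# the exact names and codes gemini-cli supports may differ).
ANSI = {
    "RED": "\033[31m", "GREEN": "\033[32m", "YELLOW": "\033[33m",
    "BLUE": "\033[34m", "MAGENTA": "\033[35m", "CYAN": "\033[36m",
    "WHITE": "\033[37m", "BRIGHT_BLUE": "\033[94m", "DIM": "\033[2m",
}
RESET = "\033[0m"

def colorize(text: str, color: str) -> str:
    """Wrap text in the escape sequence for a named theme color."""
    return f"{ANSI.get(color, '')}{text}{RESET}"
```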
```bash
# Change theme
gemini-cli --theme Dark

# List available themes
gemini-cli --list-themes
```

Create `~/.gemini/themes/mytheme.json`:

```json
{
  "name": "MyTheme",
  "description": "My custom theme",
  "colors": {
    "primary": "BLUE",
    "secondary": "CYAN",
    "success": "GREEN",
    "warning": "YELLOW",
    "error": "RED",
    "info": "BLUE",
    "prompt": "GREEN",
    "user_input": "WHITE",
    "ai_response": "CYAN",
    "tool_call": "MAGENTA",
    "system": "DIM",
    "accent": "BRIGHT_BLUE"
  }
}
```

Create custom extensions in `~/.gemini/extensions/`:
```python
# ~/.gemini/extensions/my-extension/extension.py
from gemini_cli.core.extensions import BaseExtension


class MyExtension(BaseExtension):
    def __init__(self, name: str, version: str = "1.0.0"):
        super().__init__(name, version)
        self.description = "My custom extension"

    async def initialize(self, cli_context):
        return True

    async def shutdown(self):
        return True

    def get_commands(self):
        return {
            "hello": self._hello_command
        }

    async def _hello_command(self, args):
        return f"Hello from {self.name}!"
```

```text
# View current session stats
> /stats
```

```bash
# Export session data
gemini-cli --export-stats session.json

# Enable verbose logging
gemini-cli --verbose --log-level DEBUG
```
```text
# Monitor memory usage
gemini-cli --debug
> /stats memory
```

```bash
# Run system health check
gemini-cli --health-check

# Test tool functionality
gemini-cli --test-tools
```

Sandbox configuration:

```json
{
  "sandbox": {
    "enabled": true,
    "type": "docker",
    "image": "gemini-cli-sandbox",
    "memoryLimit": "512m",
    "cpuLimit": "0.5",
    "allowNetwork": false,
    "readOnlyFilesystem": true
  }
}
```

```bash
# Enable automatic sandboxing
gemini-cli --sandbox

# Review commands before execution
gemini-cli --no-auto-accept
```

**Issue**: NumPy installation fails with `setup.py` error
```bash
# Solution: Update pip and use wheel installations
pip install --upgrade pip setuptools wheel
pip install numpy --no-use-pep517
```

**Issue**: PyTorch CPU installation problems

```bash
# Solution: Install PyTorch separately first
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install gemini-cli
```

**Issue**: `ModuleNotFoundError` for `gemini_torch`

```bash
# Solution: Install in development mode
pip install -e .
```

**Issue**: Permission denied for sandbox

```bash
# Solution: Add user to docker group
sudo usermod -aG docker $USER
# Then log out and back in
```

**Issue**: Config file not found

```bash
# Solution: Initialize configuration
gemini-cli --init

# Or create manually
mkdir -p ~/.gemini
echo '{}' > ~/.gemini/settings.json
```

```bash
# Enable detailed logging
gemini-cli --debug --verbose

# Check configuration
gemini-cli --config-info

# Validate installation
gemini-cli --health-check
```

- Documentation: Check the GitHub Wiki
- Issues: Report bugs on GitHub Issues
- Discussions: Join GitHub Discussions
- Discord: Join our Discord community
```bash
# Clone repository
git clone https://github.com/kyegomez/Gemini.git
cd Gemini

# Create development environment
python -m venv gemini-dev
source gemini-dev/bin/activate

# Install development dependencies
pip install -r requirements-dev.txt
pip install -e .[dev]

# Install pre-commit hooks
pre-commit install
```

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=gemini_cli

# Run specific test file
pytest tests/test_cli.py

# Run in parallel
pytest -n auto
```

```bash
# Format code
black gemini_cli/
isort gemini_cli/

# Lint code
flake8 gemini_cli/
pylint gemini_cli/

# Type checking
mypy gemini_cli/
```

```bash
# Install docs dependencies
pip install -e .[docs]

# Build documentation
cd docs
make html

# Serve locally
python -m http.server 8000 -d _build/html
```

We welcome contributions! Please see our Contributing Guide for details.
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes and add tests
4. Ensure tests pass: `pytest`
5. Format code: `black . && isort .`
6. Commit changes: `git commit -m 'Add amazing feature'`
7. Push to branch: `git push origin feature/amazing-feature`
8. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built on the Gemini PyTorch implementation
- Inspired by the original Gemini by Google DeepMind
- Uses Rich for beautiful terminal output
- Powered by PyTorch for model inference
- Plugin marketplace
- Voice input/output support
- Enhanced multimodal capabilities
- Cloud deployment options
- Mobile companion app
- Advanced debugging tools
- Performance optimizations
- Multi-language support
⭐ Star this repository if you find it helpful!
For more information, visit our GitHub repository or check out the documentation.