```
           _
  __ _ ___| | __
 / _` / __| |/ /
| (_| \__ \   <
 \__,_|___/_|\_\
```

Natural language to shell commands, powered by local AI.

A CLI tool that translates what you mean into what to type, using Ollama models running entirely on your machine.
- Runs 100% locally — no API keys, no cloud, no data leaves your machine
- Project-aware — detects Go, Node, Python, and Rust, and tailors commands accordingly
- Explain mode — don't know what a command does? Ask with `?tar -czf`
- Safety warnings — flags dangerous commands like `rm -rf` before execution
- Interactive REPL — conversational shell with command history context
- Usage statistics — track your usage with `--stats`
Install with:

```sh
curl -fsSL https://raw.githubusercontent.com/ykushch/ask/main/install.sh | bash
```

This will:

- Install Ollama if not already present
- Pull the default model (`qwen2.5-coder:7b`)
- Install the `ask` binary to `~/.local/bin`
To uninstall:

```sh
curl -fsSL https://raw.githubusercontent.com/ykushch/ask/main/uninstall.sh | bash
```

```sh
ask find all markdown files in this directory
# → find . -name "*.md" [Enter to run]

ask show disk usage sorted by size
# → du -sh * | sort -h [Enter to run]

ask kill the process on port 3000
# → lsof -t -i:3000 | xargs kill [Enter to run]
```

Running `ask` with no arguments drops into a REPL where you can type queries continuously:

```
projects > list go files
→ find . -name "*.go" [Enter to run]

projects > compress the src folder
→ tar -czf src.tar.gz src [Enter to run]
```
Interactive commands:

- `!help` — show available commands
- `!model NAME` — switch model
- `!model` — show current model
- `!explain CMD` — explain a shell command
- `?CMD` — explain a shell command (shorthand)
- `?` — explain the last executed command
- `!cmd` — run `cmd` directly (bypass AI)
- `Ctrl+D` — exit
Don't know what a command does? Ask for an explanation:

```sh
ask --explain "find . -name '*.go' -exec grep 'func main' {} +"
# Finds all .go files in the current directory tree and searches each one for lines containing "func main".
#   -name '*.go': match files ending in .go
#   -exec grep 'func main' {} +: run grep on the matched files
```

This also works in interactive mode with the `?` prefix:
```
projects > ?tar -czf src.tar.gz src
# Compresses the src directory into a gzipped tar archive named src.tar.gz.
#   -c: create a new archive
#   -z: compress with gzip
#   -f src.tar.gz: name the output file
```
Commands are tailored to your project type. `ask` detects signature files in the current directory:

| File | Detected As | Example |
|---|---|---|
| `go.mod` | Go | `ask run tests` → `go test ./...` |
| `package.json` | Node.js | `ask run tests` → `npm test` |
| `Cargo.toml` | Rust | `ask build` → `cargo build` |
| `requirements.txt` | Python | `ask run app` → `python app.py` |
| `Makefile` | Make-based | `ask build` → `make` |
| `Dockerfile` | Docker | Docker-aware suggestions |
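The lookup behind this table can be sketched as a simple existence check over the signature files (illustrative only; ask's real detection logic may differ):

```shell
# Minimal sketch of the detection idea, not ask's actual implementation:
# check the working directory for known signature files.
dir=$(mktemp -d)
touch "$dir/go.mod"        # simulate a Go project
cd "$dir"
for f in go.mod package.json Cargo.toml requirements.txt Makefile Dockerfile; do
  if [ -e "$f" ]; then
    echo "detected: $f"    # → detected: go.mod
  fi
done
```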
Dangerous commands are flagged with a warning before execution:

```
⚠ Warning: Recursive deletion targeting a broad path
→ rm -rf ~/Documents [Enter to run]
```

Detected patterns include `rm -rf`, `dd`, `mkfs`, `chmod 777`, `git push --force`, `DROP TABLE`, and more. Warnings are informational — you can still press Enter to proceed.
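The check amounts to pattern matching on the generated command before it runs. A hedged sketch (the pattern list here is abridged, and ask's real matcher may differ):

```shell
# Abridged sketch of the dangerous-pattern check.
cmd='rm -rf ~/Documents'
if printf '%s' "$cmd" | grep -Eq 'rm -rf|mkfs|chmod 777|git push --force|DROP TABLE'; then
  echo "⚠ Warning: potentially dangerous command"
fi
```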
You can select a model per invocation or via the environment:

```sh
# Via flag
ask --model llama3 show my public ip

# Via environment variable
export ASK_MODEL=deepseek-r1
ask list running docker containers
```

To update:

```sh
ask --update
```

This self-updates the binary to the latest GitHub release. A background version check also runs on every invocation — if a newer version is available, you'll see a notice after the command completes.
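The idea behind the version check can be sketched as a semver comparison with `sort -V` (the version numbers below are made up; the real check queries GitHub releases):

```shell
# Hypothetical sketch of the version comparison.
# sort -V orders version strings numerically, so tail -n1 is the newest.
current=0.1.0
latest=0.2.0
newest=$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n1)
if [ "$newest" != "$current" ]; then
  echo "newer version available: $latest"   # → newer version available: 0.2.0
fi
```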
```sh
ask -v
# ask version 0.1.0
# model: qwen2.5-coder:7b
# ollama: http://localhost:11434
```

Track how you use `ask` over time:
```sh
ask --stats
# ask usage statistics
# ────────────────────
# Total invocations: 150
# Commands generated: 120
# Commands executed: 95 (79%)
# Explain calls: 15
# Interactive sessions: 20
# One-shot commands: 100
#
# Model usage:
#   qwen2.5-coder:7b  140 (93%)
#   llama3             10 (7%)
#
# Stats file: ~/.ask/stats.json (12KB)
# Tracking since: 2026-01-29
```

Statistics are stored locally in `~/.ask/stats.json`.
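The stats file is plain JSON, so you can inspect it with standard tools. The field names below are guesses for illustration, not ask's actual schema:

```shell
# Create a sample file with a hypothetical schema, then extract one field.
cat > /tmp/ask-stats-sample.json <<'EOF'
{"total_invocations":150,"commands_executed":95}
EOF
grep -o '"commands_executed":[0-9]*' /tmp/ask-stats-sample.json | cut -d: -f2
# → 95
```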
The default model is `qwen2.5-coder:7b` — a good balance of speed and accuracy for shell command generation. Depending on your hardware and needs, you may want to try other models:

| Model | Size | Best For | Pull Command |
|---|---|---|---|
| `qwen2.5-coder:7b` | 4.7 GB | General use (default) | `ollama pull qwen2.5-coder:7b` |
| `deepseek-coder:6.7b` | 3.8 GB | Code-focused, lighter | `ollama pull deepseek-coder:6.7b` |
| `nemotron-mini` | 2.7 GB | Lightweight, low-resource machines | `ollama pull nemotron-mini` |
| `nemotron-3-nano` | 24 GB | Reasoning-heavy queries, 1M context | `ollama pull nemotron-3-nano` |
| `deepseek-r1` | 4.7 GB | Reasoning tasks | `ollama pull deepseek-r1` |
Tips:

- For most users, the default `qwen2.5-coder:7b` works well
- On machines with limited RAM (<8 GB), try `nemotron-mini` (2.7 GB) or `deepseek-coder:6.7b` (3.8 GB)
- For complex commands requiring step-by-step reasoning, `nemotron-3-nano` or `deepseek-r1` may produce better results (requires 32 GB+ RAM)
Switch models with `--model` or set `ASK_MODEL`:

```sh
ask --model nemotron-mini show disk usage
# or
export ASK_MODEL=nemotron-mini
```

| Variable | Description | Default |
|---|---|---|
| `ASK_MODEL` | Ollama model to use | `qwen2.5-coder:7b` |
| `OLLAMA_HOST` | Ollama server URL | `http://localhost:11434` |
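For example, to point ask at an Ollama server running on another machine (the address below is hypothetical):

```shell
# Run ask against a remote Ollama instance; adjust the address to yours.
export OLLAMA_HOST=http://192.168.1.50:11434
export ASK_MODEL=llama3
```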
- macOS or Linux
- Ollama (installed automatically by the install script)
To build from source:

```sh
git clone https://github.com/ykushch/ask.git
cd ask
go build -o ask .

# Set up git hooks (runs tests on commit)
make setup

# Run tests
make test
```