A CLI tool that provides detailed information about AI language models, helping developers choose the right model for their needs.
Note: This project now uses the models.dev database. Thanks for making it!
List models by popular providers:

```bash
npx aidex --provider openai
npx aidex --provider anthropic
npx aidex --provider google
```
List all multimodal models that accept images as input:

```bash
npx aidex --input image --provider openai
```

Compare popular reasoning-capable chat models:

```bash
npx aidex --compare "o3,opus4"
```

Find all models under $1 per million cache reads, grouped by provider (note the additional --model filter, which is required when using --group-by):
```bash
npx aidex --model gpt --group-by provider --sort-by cache_read_cost_per_token
```

| Flag | Description |
|---|---|
| `--input <mod>` | Require an input modality (`text`, `image`, `audio`, `video`). Repeat the flag for multiple modalities (see the combined example after this table). |
| `--output <mod>` | Filter by output modality. |
| `--reasoning` | Show only models flagged as reasoning-capable. |
| `--tool-call` | Show only models that support function / tool calling. |
| `--vision` | Alias for `--input image` (kept for backwards compatibility). |
| `--sort-by <field>` | Sort by any numeric field, e.g. `input_cost_per_token`, `cache_read_cost_per_token`, `max_input_tokens`. |
| `--sort-token` | Sort by maximum input tokens (descending). |
| `--sort-cost` | Sort by input cost per token (descending). |
| `--group-by <criteria>` | Group by `type`, `provider`, `mode`, or `series` (requires the `--model` or `--provider` flag). |

All previous flags (`--model`, `--provider`, etc.) still work.
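These filters compose. A quick sketch built only from the flags documented above (actual results depend on the live models.dev data):

```bash
# Reasoning-capable models that accept both image and audio input
npx aidex --input image --input audio --reasoning

# Tool-calling models, sorted by maximum input tokens (descending)
npx aidex --tool-call --sort-token
```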
The --sort-by flag accepts any numeric field returned by the API (after Aidex normalises the schema). The following fields are currently available:
- `max_input_tokens` – Maximum context window size.
- `max_output_tokens` – Maximum number of tokens the model can generate.
- `input_cost_per_token` – Cost per input token.
- `output_cost_per_token` – Cost per output token.
- `cache_read_cost_per_token` – Cost per cached-read token.
- `cache_write_cost_per_token` – Cost per cached-write token.
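Any of these field names can be passed straight to --sort-by. For instance, a sketch using the documented flags (sort order follows the tool's default):

```bash
# Rank Google models by maximum output tokens
npx aidex --provider google --sort-by max_output_tokens
```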
Example:

```bash
# Show the cheapest GPT-style models first
npx aidex --model gpt --sort-by input_cost_per_token
```

The --group-by option helps organise results into logical sections. It must be combined with either --model or --provider so that the search space is narrowed before grouping.
Available grouping keys:
- `provider` – AI providers (OpenAI, Anthropic, etc.)
- `type` – Model capability buckets (Latest, Vision, etc.)
- `mode` – Model mode (Chat, Embedding, Rerank, …)
- `series` – Major model series (legacy vs. latest, etc.)
Examples:

```bash
# Group every GPT-style model by provider and sort by cache-read cost
npx aidex --model gpt --group-by provider --sort-by cache_read_cost_per_token

# Show all OpenAI models grouped by type
npx aidex --provider openai --group-by type
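
# Group models by major series (a sketch; "claude" is an illustrative --model value)
npx aidex --model claude --group-by series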

# Combine convenience sort flags
npx aidex --mode chat --sort-cost --group-by mode
```

- X/Twitter: @kregenrek
- Bluesky: @kevinkern.dev
- Learn Cursor AI: Ultimate Cursor Course
- Learn to build software with AI: AI Builder Hub
- codefetch - Turn code into Markdown for LLMs with one simple terminal command
- instructa - Instructa Projects
- unjs - for bringing us the best JavaScript tooling