Update README and documentation to clarify model selection and expert delegation. Enhance model fetching functions to support filtering by parameters, ensuring only tool-capable models are returned. Adjust limitations documentation to reflect changes in expert model routing and improve clarity on known issues. Update local models documentation to emphasize the impact of model selection on tool-calling quality. Modify tests to validate new model handling and ensure accurate representation in the frontend model list.
**README.md** (3 additions, 2 deletions)

```diff
@@ -16,7 +16,7 @@ It is built for scientists, analysts, and curious people who want a powerful AI
 
 - **Answer questions and take on tasks.** Chat with Kady like any AI assistant. For bigger work, Kady delegates to a specialist "expert" agent that runs with a full Python environment and scientific tools.
 - **Run up to 10 chats in parallel.** Open a new tab for each thread of work — every tab keeps its own message history, model, attached files, and cost meter, but all tabs share the project's sandbox so files written in one tab are immediately available in the others. Tabs keep streaming in the background while you switch between them.
-- **Pick any AI model, any time.** Choose from 30+ models across 10 providers (OpenAI, Anthropic, Google, xAI, Qwen, and more) with a simple dropdown. Switch models message to message. You can also use free local models through [Ollama](./docs/local-models-ollama.md).
+- **Pick any tool-capable AI model, any time.** Choose from the full set of OpenRouter models that support tool calling (OpenAI, Anthropic, Google, xAI, Qwen, and more) with a simple dropdown. Switch the orchestrator and expert models per chat tab. You can also use free local models through [Ollama](./docs/local-models-ollama.md).
 - **170+ scientific skills, pre-installed.** Covers genomics, proteomics, drug discovery, materials science, and more. Kady passes the right skills to the expert automatically for each task.
 - **326 ready-to-run workflow templates.** Browse a built-in library across 22 disciplines - genomics, drug discovery, finance, astrophysics, and more. Pick one, fill in the blanks, and launch.
 - **229 scientific and financial databases.** Connect to databases in 18 categories - Biomedical & Health, Chemistry & Materials, Scholarly Publications, Stock Market, Earth & Climate, Astronomy & Space, and more.
@@ -76,7 +76,7 @@ Press **Ctrl+C** in the terminal.
 
 - **Send a message.** Type a question or task and hit enter. Kady will either answer directly or hand off to an expert for bigger work.
 - **Open multiple chats.** Click `+` in the chat tab strip to start a new chat in the same project (up to 10). Double-click a tab title or use the pencil icon to rename it. Closing a tab cancels any turn it had running. The cost pill in the header shows both the active tab's session cost (`sess`) and the project total across every tab (`proj`).
-- **Switch models.** Use the model dropdown in the input bar - any message can use any model. Each tab keeps its own model selection.
+- **Switch models.** Use the model dropdown in the input bar - any message can use any supported model. Each tab keeps its own orchestrator and expert model selections.
 - **Upload files.** Drag files into the file browser or directly onto the input bar. Use `@filename` in your message to reference files.
 - **Launch a workflow.** Open the workflows panel, pick one, fill in the blanks, and click Launch. Workflows run in whichever chat tab is currently active.
 - **Open Settings** (the gear icon in the top-right) for API keys, MCP servers, browser automation, and appearance.
@@ -87,6 +87,7 @@ Press **Ctrl+C** in the terminal.
 These guides live in the [`docs/`](./docs) folder:
 
 - **[Architecture](./docs/architecture.md)** - how the three local services fit together and what each folder in the project is for.
+- **[Model selection](./docs/model-selection.md)** - how Kady builds the OpenRouter model list and routes orchestrator vs expert calls.
 - **[Custom MCP servers](./docs/custom-mcp-servers.md)** - add your own tools to Kady's expert agents.
 - **[Browser automation](./docs/browser-automation.md)** - let Kady drive a real browser.
 - **[Local models with Ollama](./docs/local-models-ollama.md)** - run everything with local models, no API keys required.
```
**docs/architecture.md** (5 additions, 3 deletions)

````diff
@@ -90,8 +90,10 @@ k-dense-byok/
 └── sessions.db ← Chat history (SQLite, one session per chat tab)
 ```
 
-## A note on the expert model
+## Model selection and routing
 
-The model you select in Kady's dropdown only applies to Kady (the main agent). When Kady delegates a task, the expert runs through the **Gemini CLI**, which always uses a Gemini model on [OpenRouter](https://openrouter.ai/) regardless of your dropdown choice.
+Kady keeps separate model choices for the orchestrator (the main agent) and the delegated expert in each chat tab. Both OpenRouter-hosted choices are routed through the local LiteLLM proxy, which accepts the `openrouter/*` model ids shown in the picker.
 
-The one exception is local Ollama models - if you pick an Ollama model, both Kady and the expert run through your local daemon. See [Local models with Ollama](./local-models-ollama.md).
+The expert still runs inside the **Gemini CLI** process, but K-Dense routes that CLI through the same LiteLLM proxy, so it can target any OpenRouter model in the picker that supports tool calling. The recommended expert default is Gemini 3.1 Pro Preview because it has strong native tool use and a large context window, but users can override it per tab.
+
+Local Ollama models are the main exception - if you pick an Ollama model, both Kady and the expert run through your local daemon. See [Local models with Ollama](./local-models-ollama.md) and [Model selection](./model-selection.md).
````
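The per-tab routing described in this hunk reduces to a simple prefix rule over picker model ids. The sketch below is illustrative only: the wildcard route names and the `ollama/` id prefix are assumptions based on these docs, not the K-Dense source.

```python
def resolve_route(model_id: str) -> str:
    """Map a picker model id to the LiteLLM wildcard route assumed to serve it.

    Illustrative sketch: route names follow the `openrouter/*` and `ollama/*`
    wildcards mentioned in the docs, not the actual proxy configuration.
    """
    if model_id.startswith("openrouter/"):
        return "openrouter/*"  # OpenRouter-hosted orchestrator/expert models
    if model_id.startswith("ollama/"):
        return "ollama/*"      # local daemon; both Kady and the expert use it
    raise ValueError(f"unknown model id: {model_id}")

print(resolve_route("openrouter/google/gemini-3.1-pro-preview"))  # openrouter/*
```

Both the orchestrator and the Gemini-CLI-backed expert would pass through the same rule, which is why a single proxy can serve either role.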
**docs/limitations.md** (8 additions, 8 deletions)

```diff
@@ -1,22 +1,22 @@
 # Known Limitations
 
-K-Dense BYOK is in beta. The most important rough edges today are all on the expert-delegation path, which relies on the Gemini CLI and Gemini models.
+K-Dense BYOK is in beta. The most important rough edges today are on the expert-delegation path, which runs through the Gemini CLI even when the selected expert model is routed through OpenRouter or Ollama.
 
-## Gemini models and the Gemini CLI with Skills
+## Expert models and the Gemini CLI with Skills
 
-The expert delegation system relies on the Gemini CLI, which uses Gemini models to execute tasks with our scientific skills. While this works well for many workflows, there are some rough edges to be aware of:
+The expert delegation system relies on the Gemini CLI to execute tasks with our scientific skills. K-Dense routes that CLI through the local LiteLLM proxy, so the expert can use any model in the OpenRouter picker that supports tool calling. Gemini 3.1 Pro Preview remains the recommended expert default because it tends to be strongest for tool-heavy work, but other supported models can be selected per chat tab. While this works well for many workflows, there are some rough edges to be aware of:
 
-- **Skill activation is not always reliable.** Gemini models sometimes skip a relevant skill, use it partially, or misinterpret the skill's instructions. This is especially noticeable with complex multi-step skills that require strict adherence to a procedure.
+- **Skill activation is not always reliable.** Models sometimes skip a relevant skill, use it partially, or misinterpret the skill's instructions. This is especially noticeable with complex multi-step skills that require strict adherence to a procedure.
 - **Tool-calling consistency varies.** The Gemini CLI occasionally drops tool calls mid-execution or calls tools with incorrect arguments, which can cause expert tasks to stall or produce incomplete results.
-- **Long-context degradation.** When a skill injects a large amount of context (detailed protocols, multiple reference databases), Gemini models may lose track of earlier instructions or produce less focused output.
-- **Structured output can drift.** For skills that require specific output formats (tables, JSON, citations), Gemini models sometimes deviate from the requested structure.
+- **Long-context degradation.** When a skill injects a large amount of context (detailed protocols, multiple reference databases), models may lose track of earlier instructions or produce less focused output.
+- **Structured output can drift.** For skills that require specific output formats (tables, JSON, citations), models sometimes deviate from the requested structure.
 
-These are upstream limitations of the Gemini model family and the Gemini CLI tooling, not bugs in K-Dense BYOK itself. Google is actively improving both, and we see meaningful progress with every new model release and CLI update. As these improve, the expert delegation experience will get better automatically without any changes on your end.
+These are upstream limitations of the selected model and the Gemini CLI tooling, not bugs in K-Dense BYOK itself. As model tool calling and CLI support improve, the expert delegation experience will get better automatically without any changes on your end.
 
 **Workarounds:**
 
 - If a skill isn't behaving as expected, try **re-running the task** - results can vary between runs.
-- You can switch Kady's main model (via the dropdown) to a non-Gemini model for the orchestration layer while experts continue to use Gemini under the hood.
+- Try a different expert model in the dropdown. The model list is limited to OpenRouter models that advertise `tools` support, but tool-calling quality still varies across providers.
```
**docs/local-models-ollama.md** (2 additions, 2 deletions)

```diff
@@ -21,13 +21,13 @@ You can run Kady and the expert agent entirely against local models served by [O
 
 3. **(Optional) Custom Ollama host.** If your Ollama server lives somewhere other than `http://localhost:11434`, set `OLLAMA_BASE_URL` in `kady_agent/.env`.
 
-4. **Pick the model in the app.** Open the model dropdown in the chat input. Pulled models appear under the **Local (Ollama)** section at the bottom. Picking one routes both Kady and the Gemini-CLI-backed expert through your local daemon.
+4. **Pick the model in the app.** Open the model dropdown in the chat input. Pulled models appear under the **Local (Ollama)** section at the bottom. Picking one routes Kady, and optionally the Gemini-CLI-backed expert, through your local daemon.
 
 The list is populated live from Ollama's `GET /api/tags` endpoint, so pulling a new model and re-opening the dropdown is enough - no app restart needed.
 
 ## Caveats
 
-Local models amplify the limitations of the Gemini CLI tooling (see [Known limitations](./limitations.md)):
+Local models amplify the limitations of the Gemini CLI tooling and model tool-calling quality (see [Known limitations](./limitations.md)):
 
 - **Tool-calling fidelity is noticeably weaker** on sub-frontier models.
 - **Skills that rely on multi-tool choreography** (browsing, running scripts, producing structured output) are the most fragile.
```
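The live discovery step this hunk describes can be reproduced outside the app. A minimal sketch against the default daemon address, assuming Ollama's documented `GET /api/tags` response shape (`{"models": [{"name": ...}, ...]}`):

```python
import json
from urllib import request

def parse_tags(payload: dict) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Query the local Ollama daemon for pulled models (requires a running daemon)."""
    with request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_tags(json.load(resp))

# Example response shape:
sample = {"models": [{"name": "llama3:8b"}, {"name": "qwen2.5:7b"}]}
print(parse_tags(sample))  # ['llama3:8b', 'qwen2.5:7b']
```

Pulling a new model with `ollama pull` and calling `list_local_models()` again picks it up immediately, which is why the dropdown needs no app restart.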
**docs/model-selection.md** (new file)

Kady keeps separate model choices for the orchestrator and the delegated expert in each chat tab:

- **Orchestrator model:** the main Kady agent that reads your message, decides what to do, and streams the response.
- **Expert model:** the model used by delegated expert tasks that run inside the Gemini CLI process.

Both choices are stored per tab, so different chats in the same project can use different orchestrator and expert models.

## OpenRouter models

The OpenRouter model picker is generated from models that advertise tool-calling support. Kady sends tool definitions to the orchestrator and expert, so models that do not support the `tools` parameter are excluded from the dropdown.
The generator calls the OpenRouter SDK with:

```python
client.models.list(supported_parameters="tools")
```

The resulting entries are written to `web/src/data/models.json` with ids prefixed as `openrouter/<provider>/<model>`. The LiteLLM proxy has an `openrouter/*` wildcard route, so both the orchestrator and the Gemini CLI-backed expert can use any generated OpenRouter id.
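The generation step above can be sketched as a small transform. `build_picker_entries` is a hypothetical helper, not the actual `kady_agent` implementation, and any entry field beyond `id` is an assumption:

```python
def build_picker_entries(openrouter_models: list[dict]) -> list[dict]:
    """Sketch of the picker-list generation described above.

    `openrouter_models` is assumed to hold the entries returned by
    `client.models.list(supported_parameters="tools")`, each with an `id`
    like "anthropic/claude-...". Prepending "openrouter/" makes the picker
    id match the LiteLLM `openrouter/*` wildcard route.
    """
    return [
        {"id": f"openrouter/{m['id']}", "name": m.get("name", m["id"])}
        for m in openrouter_models
    ]

entries = build_picker_entries([{"id": "google/gemini-3.1-pro-preview"}])
# The real generator then serializes the list to web/src/data/models.json.
```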
To refresh the checked-in model list:

```bash
uv run python - <<'PY'
from dotenv import load_dotenv
load_dotenv("kady_agent/.env")

from kady_agent.utils import update_models_json
update_models_json()
PY
```

By default, this includes all OpenRouter models with `tools` support, preserves the orchestrator and expert recommended defaults, and omits retired GPT-5.4 base/pro entries.
## Defaults

- The orchestrator default is `openrouter/anthropic/claude-opus-4.7`.
- The expert default is `openrouter/google/gemini-3.1-pro-preview`.

Gemini 3.1 Pro Preview is recommended for expert tasks because expert delegation is tool-heavy and often benefits from a large context window. You can still choose a different tool-capable OpenRouter model per tab.
## Local Ollama models

Pulled Ollama models are discovered live from the local Ollama daemon and appear under the **Local (Ollama)** section in the picker. Selecting an Ollama model routes through the local LiteLLM `ollama/*` wildcard instead of OpenRouter.

Local models are useful for privacy and cost control, but tool-calling quality varies widely. For complex delegated expert tasks, frontier OpenRouter models are usually more reliable.
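Because LiteLLM proxies expose an OpenAI-compatible chat endpoint, any picker id can be exercised against the local proxy directly. A minimal sketch, where the proxy address and port are assumptions to adjust for your setup:

```python
import json
from urllib import request

# Assumption: the local LiteLLM proxy listens here and exposes the standard
# OpenAI-compatible chat completions route.
PROXY_URL = "http://localhost:4000/v1/chat/completions"

def build_chat_request(model_id: str, prompt: str) -> dict:
    """OpenAI-compatible request body; model_id is any picker id, e.g.
    'openrouter/google/gemini-3.1-pro-preview' or a local 'ollama/...' model."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model_id: str, prompt: str) -> str:
    """Send one chat turn through the proxy (requires the proxy to be running)."""
    body = json.dumps(build_chat_request(model_id, prompt)).encode()
    req = request.Request(
        PROXY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same request shape serves both roles: the orchestrator and the Gemini-CLI-backed expert differ only in the `model` field they send.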
0 commit comments