Releases: simonw/llm

0.27.1

12 Aug 05:15

  • llm chat -t template now correctly loads any tools that are included in that template. #1239
  • Fixed a bug where llm -m gpt5 -o reasoning_effort minimal --save gm saved a template containing invalid YAML. #1237
  • Fixed a bug where running llm chat -t template could cause prompts to be duplicated. #1240
  • Less confusing error message if a requested toolbox class is unavailable. #1238

0.27

11 Aug 21:31

This release adds support for the new GPT-5 family of models from OpenAI. It also enhances tool calling in a number of ways, including allowing templates to bundle pre-configured tools.

New features

  • New models: gpt-5, gpt-5-mini and gpt-5-nano. #1229
  • LLM templates can now include a list of tools. These can be named tools from plugins or arbitrary Python function blocks; see Tools in templates, and the first sketch after this list. #1009
  • Tools can now return attachments, for models that support features such as image input. #1014
  • New methods on the Toolbox class: .add_tool(), .prepare() and .prepare_async(), described in Dynamic toolboxes. #1111
  • New model.conversation(before_call=x, after_call=y) parameters for registering callback functions to run before and after tool calls; a sketch follows this list. See tool debugging hooks for details. #1088
  • Some model providers can serve different models from the same configured URL - llm-llama-server, for example. Plugins for these providers can now record to the LLM logs the resolved ID of the model that actually handled the request, using the response.set_resolved_model(model_id) method. #1117
  • Raising llm.CancelToolCall now only cancels the current tool call, passing an error back to the model and allowing it to continue. #1148
  • New -l/--latest option for llm logs -q searchterm for searching logs ordered by date (most recent first) instead of the default relevance search. #1177
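
Here is a minimal sketch of a template that bundles tools. The file name, model and function are illustrative, and I'm assuming named plugin tools go under a tools: list while inline Python lives in a functions: block, per the Tools in templates documentation:

# my-tools.yaml - hypothetical template file
model: gpt-5-mini
tools:
- llm_version
functions: |
  def multiply(x: int, y: int) -> int:
      """Multiply two numbers."""
      return x * y

Saved into the templates directory, it could then be used with llm chat -t my-tools.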

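A hedged sketch of the before/after tool call hooks follows; the callback argument names (tool, tool_call, tool_result) and the conversation-level tools= and chain() calls follow my reading of the tool debugging hooks documentation:

import llm

def multiply(x: int, y: int) -> int:
    """Multiply two numbers."""
    return x * y

def before_call(tool, tool_call):
    # Runs before each tool call - raising llm.CancelToolCall here
    # would cancel just this call and report the error to the model
    print(f"About to run {tool_call.name} with {tool_call.arguments}")

def after_call(tool, tool_call, tool_result):
    # Runs after each tool call, with the result sent back to the model
    print(f"{tool_call.name} returned {tool_result.output}")

model = llm.get_model("gpt-4.1-mini")
conversation = model.conversation(
    tools=[multiply], before_call=before_call, after_call=after_call
)
print(conversation.chain("What is 34234 * 213345?").text())
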
Bug fixes and documentation

  • The register_embedding_models hook is now documented. #1049
  • Show visible stack trace for llm templates show invalid-template-name. #1053
  • Handle invalid tool names more gracefully in llm chat. #1104
  • Add a Tool plugins section to the plugin directory. #1110
  • Error on register(Klass) if the passed class is not a subclass of Toolbox. #1114
  • Add -h as an alias for --help for all llm CLI commands. #1134
  • Add missing dataclasses to advanced model plugins docs. #1137
  • Fixed a bug where llm logs -T llm_version "version" --async incorrectly recorded a single log entry when it should have recorded two. #1150
  • All extra OpenAI model keys in extra-openai-models.yaml are now documented. #1228

0.26

27 May 20:34

Tool support is finally here! This release adds support for exposing tools to LLMs, previously described in the release notes for 0.26a0 and 0.26a1.

Read Large Language Models can run tools in your terminal with LLM 0.26 for a detailed overview of the new features.

Also in this release:

0.26a1

26 May 06:11

Pre-release

Hopefully the last alpha before a stable release that includes tool support.

Features

  • Plugin-provided tools can now be grouped into "Toolboxes".
    • Toolboxes (llm.Toolbox classes) allow plugins to expose multiple related tools that share state or configuration, enhancing modularity and reusability (e.g., a Memory tool or Filesystem tool); see the sketch after this list. (#1059, #1086)
  • Tool support for llm chat.
    • The llm chat command now accepts --tool and --functions arguments, allowing interactive chat sessions to use tools. (#1004, #1062)
  • Tools can now execute asynchronously.
    • Models that implement AsyncModel can now run tools, including tool functions defined as async def. This enables non-blocking tool calls for potentially long-running operations. (#1063)
  • llm chat now supports adding fragments during a session.
    • Use the new !fragment <id> command while chatting to insert content from a fragment. Initial fragments can also be passed to llm chat using -f or --sf. Thanks, Dan Turkel. (#1044, #1048)
  • Filter llm logs by tools.
    • New --tool <name> option to filter logs to show only responses that involved a specific tool (e.g., --tool simple_eval).
    • The --tools flag shows all responses that used any tool. (#1013, #1072)
  • llm schemas list can output JSON.
    • Added --json and --nl (newline-delimited JSON) options to llm schemas list for programmatic access to saved schema definitions. (#1070)
  • Filter llm similar results by ID prefix.
    • The new --prefix option for llm similar allows searching for similar items only within IDs that start with a specified string (e.g., llm similar my-collection --prefix 'docs/'). Thanks, Dan Turkel. (#1052)
  • Control chained tool execution limit.
    • New --chain-limit <N> (or --cl) option for llm prompt and llm chat to specify the maximum number of consecutive tool calls allowed for a single prompt. Defaults to 5; set to 0 for unlimited. (#1025)
  • llm plugins --hook <NAME> option.
    • Filter the list of installed plugins to only show those that implement a specific plugin hook. (#1047)
  • llm tools list now shows toolboxes and their methods. (#1013)
  • llm prompt and llm chat now automatically re-enable plugin-provided tools when continuing a conversation (-c or --cid). (#1020)
  • The --tools-debug option now pretty-prints JSON tool results for improved readability. (#1083)
  • New LLM_TOOLS_DEBUG environment variable to permanently enable --tools-debug. (#1045)
  • llm chat sessions now correctly respect default model options configured with llm models set-options. Thanks, André Arko. (#985)
  • New --pre option for llm install to allow installing pre-release packages. (#1060)
  • OpenAI models (gpt-4o, gpt-4o-mini) now explicitly declare support for tools and vision. (#1037)
  • The supports_tools parameter is now supported in extra-openai-models.yaml. Thanks, Mahesh Hegde. (#1068)
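
A minimal sketch of the Toolbox pattern: subclass llm.Toolbox and each method becomes a tool, with instance state shared between them. The Memory class here is illustrative:

import llm

class Memory(llm.Toolbox):
    # Each method becomes a tool; they all share this instance's state
    _memory = None

    def _get_memory(self):
        if self._memory is None:
            self._memory = {}
        return self._memory

    def set(self, key: str, value: str):
        "Remember a value for a key."
        self._get_memory()[key] = value

    def get(self, key: str) -> str:
        "Retrieve a previously remembered value."
        return self._get_memory().get(key, "")

A plugin exposing this class via register_tools() would then let llm chat --tool Memory make both methods available in a session.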

0.26a0

14 May 00:39

Pre-release

This is the first alpha to introduce support for tools! Models with tool capability (which includes the default OpenAI model family) can now be granted access to execute Python functions as part of responding to a prompt.

Tools are supported by the command-line interface:

llm --functions '
def multiply(x: int, y: int) -> int:
    """Multiply two numbers."""
    return x * y
' 'what is 34234 * 213345'

And in the Python API, using a new model.chain() method for executing multiple prompts in a sequence:

import llm

def multiply(x: int, y: int) -> int:
    """Multiply two numbers."""
    return x * y

model = llm.get_model("gpt-4.1-mini")
response = model.chain(
    "What is 34234 * 213345?",
    tools=[multiply]
)
print(response.text())

New tools can also be defined using the register_tools() plugin hook. They can then be called by name from the command-line like this:

llm -T multiply 'What is 34234 * 213345?'
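
The plugin side might look like this hedged sketch, using the register_tools() hook named above:

import llm

def multiply(x: int, y: int) -> int:
    """Multiply two numbers."""
    return x * y

@llm.hookimpl
def register_tools(register):
    # Registers the function under its own name, so -T multiply can find it
    register(multiply)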

Tool support is currently under active development. Consult this milestone for the latest status.

0.25

05 May 03:30

  • New plugin feature: register_fragment_loaders(register) fragment loaders can now return a mixture of fragments and attachments; see the sketch at the end of this list. The llm-video-frames plugin is the first to take advantage of this mechanism. #972
  • New OpenAI models: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o3, o4-mini. #945, #965, #976
  • New environment variables: LLM_MODEL and LLM_EMBEDDING_MODEL for setting the model to use without needing to specify -m model_id every time. #932
  • New command: llm fragments loaders, to list all currently available fragment loader prefixes provided by plugins. #941
  • llm fragments command now shows fragments ordered by the date they were first used. #973
  • llm chat now includes a !edit command for editing a prompt using your default terminal text editor. Thanks, Benedikt Willi. #969
  • Allow -t and --system to be used at the same time. #916
  • Fixed a bug where accessing a model via its alias would fail to respect any default options set for that model. #968
  • Improved documentation for extra-openai-models.yaml. Thanks, Rahim Nathwani and Dan Guido. #950, #957
  • llm -c/--continue now works correctly with the -d/--database option. llm chat now accepts that -d/--database option. Thanks, Sukhbinder Singh. #933
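
A hedged sketch of a fragment loader returning that mixture; the frames: prefix and loader body are made up, and I'm assuming a loader may return a list mixing llm.Fragment and llm.Attachment objects:

import llm

@llm.hookimpl
def register_fragment_loaders(register):
    register("frames", frames_loader)

def frames_loader(argument: str):
    # llm -f frames:video.mp4 '...' would call this with argument == "video.mp4"
    return [
        llm.Fragment(f"Notes about {argument}", f"frames:{argument}"),
        llm.Attachment(path="frame-001.jpg"),  # hypothetical extracted frame
    ]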

0.25a0

11 Apr 00:28

Pre-release
  • llm models --options now shows keys and environment variables for models that use API keys. Thanks, Steve Morin. #903
  • Added py.typed marker file so LLM can now be used as a dependency in projects that use mypy without a warning. #887
  • $ characters can now be used in templates by escaping them as $$. Thanks, @guspix. #904
  • LLM now uses pyproject.toml instead of setup.py. #908

0.24.2

09 Apr 03:01

  • Fixed a bug on Windows with the new llm -t path/to/file.yaml feature. #901

0.24.1

08 Apr 20:41

  • Templates can now be specified as a path to a file on disk, using llm -t path/to/file.yaml. This makes them consistent with how -f fragments are loaded. #897
  • llm logs backup /tmp/backup.db command for backing up your logs.db database. #879

0.24

07 Apr 15:40

Support for fragments to help assemble prompts for long context models, improved templates that can now include attachments and fragments, and new plugin hooks for providing custom loaders for both templates and fragments. See Long context support in LLM 0.24 using fragments and template plugins for more on this release.

The new llm-docs plugin demonstrates these new features. Install it like this:

llm install llm-docs

Now you can ask questions of the LLM documentation like this:

llm -f docs: 'How do I save a new template?'

The docs: prefix is registered by the plugin. The plugin fetches the LLM documentation for your installed version (from the docs-for-llms repository) and uses that as a prompt fragment to help answer your question.

Two more new plugins are llm-templates-github and llm-templates-fabric.

llm-templates-github lets you share and use templates on GitHub. You can run my Pelican riding a bicycle benchmark against a model like this:

llm install llm-templates-github
llm -t gh:simonw/pelican-svg -m o3-mini

This executes the pelican-svg.yaml template stored in my simonw/llm-templates repository, using a new repository naming convention.

To share your own templates, create a repository on GitHub under your user account called llm-templates and start saving .yaml files to it.

llm-templates-fabric provides a similar mechanism for loading templates from Daniel Miessler's fabric collection:

llm install llm-templates-fabric
curl https://simonwillison.net/2025/Apr/6/only-miffy/ | \
  llm -t f:extract_main_idea

Major new features:

  • New fragments feature. Fragments can be used to assemble long prompts from multiple existing pieces - URLs, file paths or previously used fragments. They are stored de-duplicated in the database, avoiding wasted space when the same long context piece is reused. Example usage: llm -f https://llm.datasette.io/robots.txt 'explain this file'. #617
  • The llm logs command now accepts -f fragment references too, and will show just the logged prompts that used those fragments.
  • register_template_loaders() plugin hook allowing plugins to register new prefix:value custom template loaders; see the sketch after this list. #809
  • register_fragment_loaders() plugin hook allowing plugins to register new prefix:value custom fragment loaders. #886
  • llm fragments family of commands for browsing fragments that have been previously logged to the database.
  • The new llm-openai plugin provides support for o1-pro (which is not supported by the OpenAI mechanism used by LLM core). Future OpenAI features will migrate to this plugin instead of LLM core itself.
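
A minimal sketch of a custom template loader; the my: prefix is made up, and I'm assuming the loader receives the string after the prefix and returns an llm.Template:

import llm

@llm.hookimpl
def register_template_loaders(register):
    register("my", my_template_loader)

def my_template_loader(template_path: str) -> llm.Template:
    # llm -t my:summarize would call this with template_path == "summarize"
    return llm.Template(
        name=template_path,
        system="Summarize the provided input in three bullet points.",
    )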

Improvements to templates:

  • llm -t $URL option can now take a URL to a YAML template. #856
  • Templates can now store default model options. #845
  • Executing a template that does not use the $input variable no longer blocks LLM waiting for input, so prompt templates can now be used to try different models using llm -t pelican-svg -m model_id. #835
  • llm templates command no longer crashes if one of the listed template files contains invalid YAML. #880
  • Attachments can now be stored in templates. #826

Other changes:

  • New llm models options family of commands for setting default options for particular models. #829
  • llm logs list, llm schemas list and llm schemas show all now take a -d/--database option with an optional path to a SQLite database. They used to take -p/--path but that was inconsistent with other commands. -p/--path still works but is excluded from --help and will be removed in a future LLM release. #857
  • llm logs -e/--expand option for expanding fragments. #881
  • llm prompt -d path-to-sqlite.db option can now be used to write logs to a custom SQLite database. #858
  • llm similar -p/--plain option providing more human-readable output than the default JSON. #853
  • llm logs -s/--short now truncates to include the end of the prompt too. Thanks, Sukhbinder Singh. #759
  • Set the LLM_RAISE_ERRORS=1 environment variable to raise errors during prompts rather than suppressing them, which means you can run python -i -m llm 'prompt' and then drop into a debugger on errors with import pdb; pdb.pm(). #817
  • Improved --help output for llm embed-multi. #824
  • llm models -m X option which can be passed multiple times with model IDs to see the details of just those models. #825
  • OpenAI models now accept PDF attachments. #834
  • llm prompt -q gpt -q 4o option - pass -q searchterm one or more times to execute a prompt against the first model that matches all of those strings - useful if you can't remember the full model ID. #841
  • OpenAI compatible models configured using extra-openai-models.yaml now support supports_schema: true, vision: true and audio: true options; see the example below. Thanks, @adaitche and @giuli007. #819, #843
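
For example, an entry in extra-openai-models.yaml might look like this; the model_id, model_name and api_base values are illustrative:

- model_id: my-local-model
  model_name: llama-3-8b
  api_base: http://localhost:8080/v1
  supports_schema: true
  vision: true
  audio: true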