git-llm-utils

git-llm-utils automatically generates meaningful commit messages from your staged changes using an LLM.

What it does

  • Generates Git commit messages from the repository's staged changes.
  • The commit message can be generated:
    • via the standard git commit workflow, using a Git hook that captures the diff of your staged changes (the hook can be installed with a command).
    • via a dedicated command: git-llm-utils commit
  • The message is generated by the configured LLM (Groq, Ollama, or any LiteLLM-supported model).
  • All configuration is stored using git config.
  • Helps you maintain a clean, consistent, human-readable commit history without manual effort.
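The hook-based workflow above relies on Git's standard prepare-commit-msg hook. As a minimal illustration (not the project's actual hook, which git-llm-utils installs for you), a hook of this kind can capture the staged diff like so:

```shell
# Illustrative stand-in only: the real git-llm-utils hook sends the staged
# diff to the configured LLM; this one just records a summary of it.
cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
# Git passes the commit-message file path as $1; the staged diff is still
# available via --cached at this point.
git diff --cached --stat > .git/staged-diff-summary
EOF
chmod +x .git/hooks/prepare-commit-msg
```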

Why use it

  • 🕒 Save time — no more writing commit messages for every change.
  • Consistency — a unified style and format across commits.
  • 📚 Clarity — automated summaries help communicate what changed and why.
  • 🤝 Lower friction for contributors — beneficial in collaborative or open-source projects, or when onboarding new team members.

Getting started

Building from source:

  1. Clone the repo:
     git clone git@github.com:pilardi/git-llm-utils.git
  2. Build the environment (you'll need the uv package manager installed on your system):
     cd git-llm-utils && make install
  3. Activate the environment and build the binary:
     source .venv/bin/activate && make dist/git-llm-utils
  4. Ensure your environment (or config) provides credentials for a suitable LLM (see Configuration):
     git-llm-utils verify
  5. Go to a repository where you want to use the tool and stage some changes:
     cd _your_repository_path_here_ && git-llm-utils status
  6. Commit the changes with the generated message (every action must be confirmed by the user unless you pass the --no-confirm option before the action):
     git-llm-utils commit

Using the binary

You can download the binary corresponding to your OS from the release assets, or build it locally using:

make dist/git-llm-utils

You then only need to add it to your PATH.

Usage

See:

git-llm-utils --help

Configuration

See:

git-llm-utils --config

By default, the application uses ollama/qwen3-coder:480b-cloud as the model, so you either need Ollama running on your machine or need to configure a host where Ollama is running (using the api_url setting).
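For example, to point the tool at an Ollama server running on another machine (the host name below is a placeholder), you can set api_url with the set-config command:

```shell
# Placeholder host; replace with your own Ollama server address.
git-llm-utils set-config api_url --value http://ollama-host:11434
```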

Settings

All settings are stored via git config; defaults are shown in parentheses:

  • emojis (default: True): allows the LLM to include emojis in the commit message.
  • comments (default: True): generates the message as commented lines.
  • model (default: ollama/qwen3-coder:480b-cloud): the model to use, in LiteLLM syntax.
  • reasoning (default: medium): reasoning effort, using LiteLLM's reasoning support.
  • max_input_tokens (default: 262144): maximum number of tokens to send to the model.
  • max_output_tokens (default: 32768): maximum number of tokens to request from the model.
  • api_key (default: None): if not set, environment variables are read according to the LiteLLM provider settings (e.g. OPENAI_API_KEY).
  • api_url (default: None): if not set, LiteLLM defaults apply (for Ollama, http://127.0.0.1:11434).
  • description-file (default: README.md): file offered to the LLM as repository context via a tool (if tools are enabled).
  • tools (default: False): when True, allows the LLM to access the description of your repository.
  • manual (default: True): when True, the installed commit hook does not generate a message on every git commit unless the environment variable GIT_LLM_ON=True is set. Manual mode is the default because you may want to choose which commits get LLM-generated messages.
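With manual mode on (the default) and the commit hook installed, you can opt in to message generation for a single commit via the GIT_LLM_ON environment variable:

```shell
# Generate an LLM commit message for this commit only.
GIT_LLM_ON=True git commit
```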

Use:

git-llm-utils set-config setting --value value

to change a setting, where setting is any of the keys listed above.
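For instance, to switch the model and turn off emojis (the values below are just examples):

```shell
git-llm-utils set-config model --value ollama/nemotron-3-nano:30b-cloud
git-llm-utils set-config emojis --value False
```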

Models

By default the system uses qwen3-coder:480b-cloud; however, you can try other models such as nemotron-3-nano:30b-cloud, e.g.:

ollama pull nemotron-3-nano:30b-cloud && git-llm-utils --model ollama/nemotron-3-nano:30b-cloud commit

Local models are also supported; however, the quality of the message will vary drastically with model size (even more so for non-thinking models), e.g.:

ollama pull deepseek-coder:6.7b && git-llm-utils --model ollama/deepseek-coder:6.7b --reasoning none commit

References

  • LiteLLM is the library used to connect to your preferred LLM model server.
  • Typer is the CLI library used to wire the application.
  • Ollama is the default model server; however, any LiteLLM-supported model API would work, such as OpenAI, Groq, or Gemini.
  • Git is the underlying repository system; you need Git installed on your system to use this tool.

Fun fact: all commit messages in this repo were generated by this tool.


🤝 Contributing

Contributions are welcome! You can help by:

  • Improving documentation
  • Adding new hooks
  • Improving prompt quality
  • Reporting bugs or edge cases

📄 License

MIT License. See LICENSE for details.


✨ Author

Created by Pablo Ilardi (GitHub: https://github.com/pilardi)
