This is an experimental project whose primary goal is learning to build tools
for LLMs. It may yet grow into a handy DIY tool for surviving these interesting times.
This is a command-line interface (CLI) application that interacts with a
local Ollama model (e.g., qwen3:8b, gpt-oss:20b, huihui_ai/jan-nano-abliterated:4b, ...)
to provide an AI chat experience.
It supports streaming responses, tool execution, and conversation history
management via /save and /load commands. The application is written in Python,
runs in a virtual environment, and includes file-based logging.
- Interactive CLI for natural language conversations
- Model interaction with Ollama
- Tool execution during conversations
- Conversation history management
- Debug mode for verbose logging
- Customizable toolset
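The streaming-response feature above can be sketched as a small helper, assuming the `ollama` Python package's `Client.chat(..., stream=True)` interface; the function and variable names here are illustrative, not the project's actual ones:

```python
def stream_chat(client, model, messages):
    """Stream a reply from an Ollama-compatible client and return the full text."""
    reply = ""
    # stream=True yields partial chunks; each carries a message fragment
    for chunk in client.chat(model=model, messages=messages, stream=True):
        piece = chunk["message"]["content"]
        print(piece, end="", flush=True)  # echo tokens as they arrive
        reply += piece
    print()
    # append the assistant turn so the conversation history stays consistent
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Because the client is passed in, the loop can be exercised with any object exposing the same `chat` signature.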
Install dependencies:

```
pip install -r requirements.txt
```

Dependencies:

- ollama (for model interaction)
1. Clone the repository:

   ```
   git clone <repository-url>
   cd <project-folder>
   ```

2. Set up the virtual environment:

   Linux/macOS:

   ```
   python -m venv .venv
   source .venv/bin/activate
   ```

   Windows:

   ```
   python -m venv .venv
   .venv\Scripts\activate
   ```

3. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

4. Ensure Ollama is running:

   ```
   ollama serve
   ```

5. Pull the model (if not already present):

   ```
   ollama pull qwen3:8b
   ```
Run the CLI with optional parameters:

```
python cli.py --model <model-name> --host <host:port>
```

Replace <model-name> with your desired model (e.g., gpt-oss:20b) and
<host:port> with the Ollama server address (e.g., 192.168.1.42:11434).
If not specified, the defaults --model qwen3:8b and --host 127.0.0.1:11434 are used.
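The flag handling described above could look like the following sketch; the flag names and defaults match the README, but the parser itself is an assumption about how cli.py is wired:

```python
import argparse

def build_parser():
    """Build the CLI argument parser with the documented defaults."""
    parser = argparse.ArgumentParser(description="Chat with a local Ollama model")
    parser.add_argument("--model", default="qwen3:8b",
                        help="Ollama model name (e.g., gpt-oss:20b)")
    parser.add_argument("--host", default="127.0.0.1:11434",
                        help="Ollama server address as host:port")
    return parser
```

Running `python cli.py` with no flags would then fall back to qwen3:8b on 127.0.0.1:11434.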
- /clear: Clear conversation history
- /exit: Exit the application
- /help: Show this help message
- /save <filename>: Save the conversation to a file (e.g., chat.json)
- /load <filename>: Load a saved conversation
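A minimal sketch of how /save and /load could persist history as JSON, matching the JSON file format mentioned in the notes; these helper names are hypothetical, and the actual implementation may differ:

```python
import json

def save_conversation(messages, filename):
    """Write the message history (a list of role/content dicts) as pretty JSON."""
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(messages, f, indent=2, ensure_ascii=False)

def load_conversation(filename):
    """Read a previously saved message history back into a list of dicts."""
    with open(filename, encoding="utf-8") as f:
        return json.load(f)
```

Because the on-disk format is plain JSON, saved conversations can be inspected or edited by hand before reloading.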
💬 You: Analyze project in directory /home/pasha/src/ZenStates-Linux Describe it to me.
🤖 Agent: Okay, let's tackle this user's request. They want me to analyze the project in
the directory /home/pasha/src/ZenStates-Linux, read all relevant source code and
configuration files, and describe it to them. The tools available are
list_directory, read_file, and write_file.
...
🔧 Executing tools...
Tool list_directory result: Contents of directory '/home/pasha/src/ZenStates-Linux'
...
🤖 Agent: The directory contains several files: .gitignore, LICENSE, README.md,
disable_c6, togglecode.py, zenstates.py, zenstates.service, and a .git folder.
...
🔧 Executing tools...
Tool read_file result: Contents of file '/home/pasha/src/ZenStates-Linux/README.md'
...
🤖 Agent: The **ZenStates-Linux** project is a collection of tools for advanced power state
management on Ryzen processors and compatible motherboards. Here's the analysis of its
components:
### 🔧 **Key Tools & Functionality**
#### 1. **`zenstates.py` (P-State Control)**
- **Purpose**: Dynamically adjust Ryzen processor P-States (performance states) to balance
power efficiency and performance.
- **Requirements**:
- Root access.
- `msr` kernel module loaded (`modprobe msr`).
- **Features**:
- List all P-States (`-l`).
- Set specific P-State (`-p`).
- Enable/disable P-State (`--enable`, `--disable`).
- Adjust FID/DID/VID (microarchitecture parameters via MSR).
- Enable/disable C6 state (`--c6-enable`, `--c6-disable`).
...
Extend functionality by editing tools.py:

- Add new tools (e.g., search_web, calculate)
- Define execute_tool() logic for each function
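The extension pattern above can be sketched as a registry of callables plus a dispatcher; the `calculate` tool here is a toy illustration of the kind of tool the README suggests, not part of the project:

```python
def calculate(expression: str) -> str:
    """Toy example tool: evaluate a simple arithmetic expression."""
    # restrict eval to arithmetic characters for safety in this sketch
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported characters in expression")
    return str(eval(expression))

# registry mapping tool names (as the model calls them) to Python callables
TOOLS = {"calculate": calculate}

def execute_tool(name, arguments):
    """Look up a registered tool and call it with the model-supplied arguments."""
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](**arguments)
```

New tools slot in by adding one function and one registry entry, which keeps the dispatcher unchanged.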
Enable debug mode by setting the DEBUG environment variable; output is saved to app.log:

```
DEBUG=1 python cli.py
```

This project is licensed under the MIT License. See the LICENSE file for details.
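A sketch of DEBUG-driven logging setup, assuming the DEBUG environment variable and app.log target described above (the exact setup in cli.py may differ):

```python
import logging
import os

def setup_logging():
    """Log to app.log; verbose when DEBUG=1 is set in the environment."""
    level = logging.DEBUG if os.environ.get("DEBUG") == "1" else logging.INFO
    logging.basicConfig(
        filename="app.log",
        level=level,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    return level
```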
- The application relies on a local Ollama server. Ensure it's running before starting the CLI.
- If the qwen3:8b model is unavailable, the app will auto-select a compatible deepseek model.
- Conversation files are saved in JSON format for easy editing.
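The fallback behavior noted above could be implemented roughly as follows; this is a guess at the selection logic, with a hypothetical `pick_model` helper:

```python
def pick_model(installed, preferred="qwen3:8b"):
    """Return the preferred model if installed, else the first deepseek variant."""
    if preferred in installed:
        return preferred
    for name in installed:
        if "deepseek" in name:
            return name
    raise RuntimeError("no compatible model found; run `ollama pull qwen3:8b`")
```

In the real app the `installed` list would come from querying the Ollama server for available models.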
- Fork the repository
- Create a feature branch
- Submit a pull request with clear documentation of your changes
- Ollama: https://ollama.com
- Python: https://python.org