
🚀 LLM Agent Chat CLI

📌 Overview

This is an experimental project whose primary goal is learning to build tools for LLMs. Maybe it will become a DIY tool, handy for surviving these interesting times.
It is a command-line interface (CLI) application that interacts with a local Ollama model (e.g., qwen3:8b, gpt-oss:20b, huihui_ai/jan-nano-abliterated:4b, ...) to provide an AI chat experience. It supports streaming responses, tool execution, and conversation history management via /save and /load commands. The application is written in Python, runs in a virtual environment, and includes logging.
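
For reference, streaming a response from a local Ollama server with the ollama Python package looks roughly like this (a minimal sketch, not the project's actual cli.py):

```python
from ollama import Client

# Connect to a local Ollama server (default port is 11434).
client = Client(host="http://127.0.0.1:11434")

# Request a streamed chat completion and print tokens as they arrive.
stream = client.chat(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```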

🧰 Features

  • Interactive CLI for natural language conversations
  • Model interaction with Ollama
  • Tool execution during conversations
  • Conversation history management
  • Debug mode for verbose logging
  • Customizable toolset

📦 Requirements

Install dependencies via pip:

pip install -r requirements.txt

Dependencies:

  • ollama (for model interaction)

📦 Installation

  1. Clone the repository:

     git clone <repository-url>
     cd <project-folder>
  2. Set up the virtual environment:

     Linux/macOS:

     python -m venv .venv
     source .venv/bin/activate

     Windows:

     python -m venv .venv
     .venv\Scripts\activate
  3. Install dependencies:

     pip install -r requirements.txt
  4. Ensure Ollama is running:

     ollama serve
  5. Pull the model (if not already present):

     ollama pull qwen3:8b

🚀 Usage

Run the CLI with optional parameters:

python cli.py --model <model-name> --host <host:port>

Replace <model-name> with your desired model (e.g., gpt-oss:20b) and <host:port> with the Ollama server address (e.g., 192.168.1.42:11434). If not specified, defaults are used: --model qwen3:8b and --host 127.0.0.1:11434.
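
For example, to run gpt-oss:20b against a remote Ollama server:

python cli.py --model gpt-oss:20b --host 192.168.1.42:11434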

📝 Available Commands

  • /clear: Clear conversation history
  • /exit: Exit the application
  • /help: Show this help message
  • /save <filename>: Save the conversation to a file (e.g., chat.json)
  • /load <filename>: Load a saved conversation
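
A dispatcher for these slash commands might look roughly like the following (a sketch with hypothetical names, not the actual cli.py):

```python
import json

def handle_command(line: str, messages: list) -> bool:
    """Process a slash command; return False when the app should exit."""
    cmd, _, arg = line.strip().partition(" ")
    if cmd == "/clear":
        messages.clear()                # drop conversation history
    elif cmd == "/save" and arg:
        with open(arg, "w") as f:       # e.g., /save chat.json
            json.dump(messages, f, indent=2)
    elif cmd == "/load" and arg:
        with open(arg) as f:
            messages[:] = json.load(f)  # replace history in place
    elif cmd == "/help":
        print("/clear /exit /help /save <file> /load <file>")
    elif cmd == "/exit":
        return False
    return True
```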

🔄 Example Interaction

💬 You: Analyze project in directory /home/pasha/src/ZenStates-Linux Describe it to me.
🤖 Agent: Okay, let's tackle this user's request. They want me to analyze the project in 
the directory /home/pasha/src/ZenStates-Linux, read all relevant source code and
configuration files, and describe it to them. The tools available are
list_directory, read_file, and write_file.
...
🔧 Executing tools...
Tool list_directory result: Contents of directory '/home/pasha/src/ZenStates-Linux'
...
🤖 Agent: The directory contains several files: .gitignore, LICENSE, README.md,
disable_c6, togglecode.py, zenstates.py, zenstates.service, and a .git folder.
...
🔧 Executing tools...
Tool read_file result: Contents of file '/home/pasha/src/ZenStates-Linux/README.md'
...
🤖 Agent: The **ZenStates-Linux** project is a collection of tools for advanced power state
management on Ryzen processors and compatible motherboards. Here's the analysis of its
components:
### 🔧 **Key Tools & Functionality**

#### 1. **`zenstates.py` (P-State Control)**
- **Purpose**: Dynamically adjust Ryzen processor P-States (performance states) to balance
power efficiency and performance.
- **Requirements**:
  - Root access.
  - `msr` kernel module loaded (`modprobe msr`).
- **Features**:
  - List all P-States (`-l`).
  - Set specific P-State (`-p`).
  - Enable/disable P-State (`--enable`, `--disable`).
  - Adjust FID/DID/VID (microarchitecture parameters via MSR).
  - Enable/disable C6 state (`--c6-enable`, `--c6-disable`).
...

🛠️ Customization

🔧 Tools

Extend functionality by editing tools.py:

  • Add new tools (e.g., search_web, calculate)
  • Define execute_tool() logic for each function (see the sketch below)
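
For instance, adding a calculate tool might look like this (a sketch: the tool schema follows Ollama's function-calling format, but the actual layout of tools.py and execute_tool() may differ):

```python
import ast
import operator

# Tool schema advertised to the model (Ollama function-calling format).
CALCULATE_TOOL = {
    "type": "function",
    "function": {
        "name": "calculate",
        "description": "Evaluate a simple arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {"type": "string", "description": "e.g. '2 + 2 * 3'"},
            },
            "required": ["expression"],
        },
    },
}

# Safe arithmetic evaluator: walks the AST instead of calling eval().
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def execute_tool(name: str, args: dict) -> str:
    if name == "calculate":
        return str(_eval(ast.parse(args["expression"], mode="eval").body))
    raise ValueError(f"Unknown tool: {name}")
```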

🧪 Debugging

Enable debug mode by setting the DEBUG environment variable; output is saved to app.log:

DEBUG=1 python cli.py
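
Internally, the setup presumably resembles the following (a sketch; the project's actual logging configuration may differ):

```python
import logging
import os

# Verbose logging when DEBUG is set; all output lands in app.log.
level = logging.DEBUG if os.environ.get("DEBUG") else logging.INFO
logging.basicConfig(
    filename="app.log",
    level=level,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
```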

📜 License

This project is licensed under the MIT License. See the LICENSE file for details.

📌 Notes

  • The application relies on a local Ollama server. Ensure it's running before starting the CLI.
  • If the qwen3:8b model is unavailable, the app will auto-select a compatible deepseek model.
  • Conversation files are saved in JSON format for easy editing (see the example below)
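
A saved conversation presumably follows the standard Ollama chat-message format (an assumed example; the actual fields may differ):

```json
[
  {"role": "user", "content": "Hello!"},
  {"role": "assistant", "content": "Hi! How can I help you today?"}
]
```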

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Submit a pull request with clear documentation of your changes
