# CHACE

**CHamal's AutoComplete Engine** (pronounced /tʃeɪs/, "chase")

> [!WARNING]
> CHACE is still in early development, so bugs and breaking changes should be expected.
## Overview
CHACE is a Rust-based engine designed for controlled AI-assisted code completion. Traditional code completion tools like GitHub Copilot, or agents like Cursor, can easily hallucinate on large codebases when the context they receive is misleading or excessive. To mitigate this, CHACE:
- Targets function declarations with empty bodies at the cursor position
- Extracts the function declaration and documentation (docstrings)
- Sends only that minimal context to the LLM
- Retrieves only the function implementation from the LLM for optimal token efficiency
This approach keeps the AI focused on the specific task, reduces token usage, and produces more precise, predictable results.
## Inspiration
The idea is heavily inspired by ThePrimeagen's new approach to AI-assisted coding. While his implementation (not yet open-sourced) is built with Lua and requires Opencode to work, I took a different architectural approach: a standalone Rust binary that operates independently and can be integrated into any editor through plugins. This design keeps the core application editor-agnostic, lightweight, and easy to extend to other development environments.
## Architecture
CHACE runs as a Unix socket server (`/tmp/chace.sock`) that accepts JSON requests containing source code and a cursor position. The engine:
- Parses the source code using Tree-sitter
- Locates empty functions at the cursor
- Sends function declarations to the configured LLM backend
- Returns the generated function body with precise byte offsets
## Supported LLM Backends

- Google Gemini (`gemini-2.5-flash`)
- Groq (`gpt-oss-20b`)
## Language Support

Currently supports:

- Rust
- TypeScript / TSX (TypeScript XML)
- JavaScript / JSX (JavaScript XML)
## Installation

Install via Cargo:

```sh
cargo install chace
```
## Configuration

Set the required environment variables:

```sh
export GEMINI_API_KEY="your-gemini-api-key"
export GROQ_API_KEY="your-groq-api-key"
```
## Usage

### Running the Server

```sh
chace
```

The server listens on `/tmp/chace.sock` and handles concurrent connections.
### Request Format

Send JSON-encoded requests via the Unix socket:

```json
{
  "source_code": "fn add(a: i32, b: i32) -> i32 {\n\n}",
  "cursor_byte": 35,
  "backend": "Gemini",
  "context_snippets": ["struct User {\n id: u32,\n name: String\n}"]
}
```
Optional Fields:

- `context_snippets` (array of strings): additional code snippets to provide context for better code generation
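For example, a request with the optional `context_snippets` field can be built with the `serde_json` crate (a sketch; the field names are taken from the schema above):

```rust
// Sketch: build a request object matching the schema above with serde_json.
use serde_json::json;

fn main() {
    let request = json!({
        "source_code": "fn add(a: i32, b: i32) -> i32 {\n\n}",
        "cursor_byte": 35,
        "backend": "Gemini",
        // Optional: extra snippets the LLM can use as context.
        "context_snippets": ["struct User {\n id: u32,\n name: String\n}"]
    });
    // The protocol is line-delimited, so serialize to a single line.
    println!("{}", request);
}
```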
### Response Format

```json
{
  "start_byte": 35,
  "end_byte": 36,
  "body": " a + b",
  "usage": {
    "prompt_tokens": 300,
    "completion_tokens": 1200,
    "total_tokens": 1500
  },
  "error": null
}
```
Optional Fields:

- `error` (string or null): error message if the request failed; `null` on success
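A plugin applies a response by splicing `body` into the original buffer between `start_byte` and `end_byte`. A minimal sketch (the offsets below are computed for this particular string, not taken from the example response):

```rust
// Sketch: apply a completion by replacing the byte span the server returned.
fn apply_completion(source: &mut String, start: usize, end: usize, body: &str) {
    // start/end index into the original source_code that was sent.
    source.replace_range(start..end, body);
}

fn main() {
    let mut source = String::from("fn add(a: i32, b: i32) -> i32 {\n\n}");
    // In this string, the empty body is bytes 31..33 (the two newlines).
    apply_completion(&mut source, 31, 33, "\n    a + b\n");
    assert_eq!(source, "fn add(a: i32, b: i32) -> i32 {\n    a + b\n}");
}
```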
## IDE Integration

CHACE is designed to be integrated with IDEs via plugins. See `chace.nvim` for reference.
## Protocol
CHACE uses a line-delimited JSON protocol via a Unix socket:
- Each request is a single JSON object terminated by a newline
- Each response is a single JSON object terminated by a newline
- Multiple requests can be sent over the same connection
- Connections are handled asynchronously
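Putting these rules together, a minimal client looks something like this (a sketch using only the standard library; a real plugin would build the JSON with a serializer):

```rust
// Sketch: send one request over /tmp/chace.sock and read one response line.
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;

fn main() -> std::io::Result<()> {
    let stream = UnixStream::connect("/tmp/chace.sock")?;

    // One request = one JSON object on a single line, terminated by '\n'.
    let request = r#"{"source_code":"fn add(a: i32, b: i32) -> i32 {\n\n}","cursor_byte":35,"backend":"Gemini"}"#;
    let mut writer = stream.try_clone()?;
    writer.write_all(request.as_bytes())?;
    writer.write_all(b"\n")?;

    // One response = one JSON line back on the same connection.
    let mut line = String::new();
    BufReader::new(&stream).read_line(&mut line)?;
    println!("response: {line}");
    Ok(())
}
```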
## Development

### Build from Source

```sh
git clone https://github.com/chamal1120/chace.git
cd chace
cargo build --release
```
### Adding Language Support

To add support for a new language:

- Add the Tree-sitter grammar to `Cargo.toml`
- Create a new backend in `src/languages/`
- Implement the required methods of the `LanguageStandard` trait (see the sketch after this list)
- Return `FunctionInfo` objects with the extracted function data
- Update the request handler in `main.rs`
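The exact trait surface lives in `src/languages/`; as a rough illustration only (the method and field names below are hypothetical, check the source for the real ones):

```rust
// Hypothetical sketch: illustrates the shape of a language backend.
// The real LanguageStandard trait and FunctionInfo struct may differ.
pub struct FunctionInfo {
    pub declaration: String,    // signature + docstring sent to the LLM
    pub body_start_byte: usize, // where the generated body gets spliced in
    pub body_end_byte: usize,
}

pub trait LanguageStandard {
    /// Parse the source and return the empty function at the cursor, if any.
    fn empty_function_at(&self, source: &str, cursor_byte: usize) -> Option<FunctionInfo>;
}
```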
### Adding LLM Backends

To add a new LLM provider:

- Create a new module in `src/ai/`
- Implement the `LLMBackend` trait (see the sketch below)
- Add initialization in `main.rs`
- Update the backend selection logic
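Again as a hypothetical sketch (the real `LLMBackend` trait in `src/ai/` may differ):

```rust
// Hypothetical sketch: a provider only has to turn a declaration (plus
// optional context snippets) into a generated function body.
pub trait LLMBackend {
    fn generate_body(
        &self,
        declaration: &str,
        context_snippets: &[String],
    ) -> Result<String, Box<dyn std::error::Error>>;
}
```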
### Testing

Refer to the tests in the repository; run them with `cargo test`.
## License

MIT License. See LICENSE for details.