# gh_models 🧠
gh_models is a Rust client for accessing GitHub-hosted AI models via the https://models.github.ai API. It provides a simple interface for chat-based completions, similar to OpenAI’s API, but powered by GitHub’s model infrastructure.
## ✨ Features
- Chat completion support for GitHub-hosted models (e.g. `openai/gpt-4o`)
- Easy authentication via a GitHub personal access token (PAT)
- Async-ready with `tokio`
- Clean and ergonomic API
## 🚀 Getting Started
### 1. Install

Add to your `Cargo.toml`:

```toml
[dependencies]
gh_models = "0.2.0"
```
### 2. Authenticate

Set your GitHub token as an environment variable:

```sh
export GITHUB_TOKEN=your_personal_access_token
```
> 🔐 Your token must have the **Models** permission enabled. You can generate a PAT in your GitHub settings; see "Managing your personal access tokens" in the GitHub docs.
### Basic Example (Single‑Turn)
```rust
use gh_models::{GHModels, types::ChatMessage};
use std::env;

#[tokio::main]
async fn main() {
    let token = env::var("GITHUB_TOKEN").expect("Missing GITHUB_TOKEN");
    let client = GHModels::new(token);

    let messages = vec![
        ChatMessage {
            role: "system".into(),
            content: "You are a helpful assistant.".into(),
        },
        ChatMessage {
            role: "user".into(),
            content: "What is the capital of France?".into(),
        },
    ];

    let response = client
        .chat_completion("openai/gpt-4o", &messages, 1.0, 4096, 1.0)
        .await
        .unwrap();

    println!("{}", response.choices[0].message.content);
}
```
Run it:

```sh
cargo run --example simple_chat
```
### Multi‑Turn Chat Example (REPL)
This example shows how to maintain conversation history and interact with the model in a loop.
```rust
use gh_models::{GHModels, types::ChatMessage};
use std::env;
use std::io::{self, Write};

#[tokio::main]
async fn main() {
    let token = env::var("GITHUB_TOKEN").expect("Missing GITHUB_TOKEN");
    let client = GHModels::new(token);

    // Start with a system prompt
    let mut messages = vec![ChatMessage {
        role: "system".into(),
        content: "You are a helpful assistant.".into(),
    }];

    loop {
        print!("You: ");
        io::stdout().flush().unwrap();

        let mut input = String::new();
        io::stdin().read_line(&mut input).unwrap();
        let input = input.trim().to_string();

        if input == "exit" {
            break;
        }

        // Add user message
        messages.push(ChatMessage {
            role: "user".into(),
            content: input.clone(),
        });

        // Call the model
        let response = client
            .chat_completion("openai/gpt-4o", &messages, 1.0, 4096, 1.0)
            .await
            .unwrap();

        let reply = response.choices[0].message.content.clone();
        println!("Assistant: {}", reply);

        // Add assistant reply to history
        messages.push(ChatMessage {
            role: "assistant".into(),
            content: reply,
        });
    }
}
```
## 📚 API Overview
### `GHModels::new(token: String)`

Creates a new client using your GitHub token.

### `chat_completion(...)`

Sends a chat request to the model endpoint.

Parameters:

- `model`: Model name (e.g. `"openai/gpt-4o"`)
- `messages`: A slice of `ChatMessage` (`&[ChatMessage]`)
- `temperature`: Sampling temperature
- `max_tokens`: Maximum number of output tokens
- `top_p`: Nucleus sampling parameter
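Wiring these parameters together, a single call might look like the sketch below. The `ask` helper is hypothetical, the parameter values are illustrative rather than recommendations, and it assumes `chat_completion` borrows the client (the REPL example calls it repeatedly in a loop, which suggests it does):

```rust
use gh_models::{GHModels, types::ChatMessage};

// Hypothetical helper: one-shot prompt using the positional argument
// order from the examples above: (model, messages, temperature,
// max_tokens, top_p).
async fn ask(client: &GHModels, prompt: &str) -> String {
    let messages = vec![ChatMessage {
        role: "user".into(),
        content: prompt.into(),
    }];

    client
        .chat_completion("openai/gpt-4o", &messages, 0.7, 512, 1.0)
        .await
        .expect("chat request failed")
        .choices[0]
        .message
        .content
        .clone()
}
```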
## 🛠️ Development
Clone the repo and run the examples locally:

```sh
git clone https://github.com/pjdur/gh_models
cd gh_models
cargo run --example simple_chat
cargo run --example multi_turn
```
## 📄 License
MIT © Pjdur
## 🤝 Contributing
Pull requests welcome!
If you’d like to add streaming support, better error handling, or model introspection, feel free to open an issue or PR.
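On the error-handling point: even today, callers can avoid panics by matching on the returned `Result` instead of unwrapping. A minimal sketch, assuming only that the error type implements `Debug` (which the `.unwrap()` calls above already imply):

```rust
use gh_models::{GHModels, types::ChatMessage};
use std::env;

#[tokio::main]
async fn main() {
    let token = env::var("GITHUB_TOKEN").expect("Missing GITHUB_TOKEN");
    let client = GHModels::new(token);

    let messages = vec![ChatMessage {
        role: "user".into(),
        content: "Say hello.".into(),
    }];

    // Match on the Result so network or auth failures print a readable
    // message instead of panicking.
    match client
        .chat_completion("openai/gpt-4o", &messages, 1.0, 4096, 1.0)
        .await
    {
        Ok(response) => println!("{}", response.choices[0].message.content),
        Err(e) => eprintln!("chat_completion failed: {e:?}"),
    }
}
```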