SmartPrompt is a Ruby gem that provides an elegant domain-specific language (DSL) for building intelligent applications with Large Language Models (LLMs). It lets Ruby programs interact with a variety of LLM providers through one unified interface while keeping your code clean, composable, and highly customizable.
- OpenAI API Compatible: Full support for OpenAI GPT models and compatible APIs
- Llama.cpp Integration: Direct integration with local Llama.cpp servers
- Extensible Adapters: Easy-to-extend adapter system for new LLM providers
- Unified Interface: Same API regardless of the underlying LLM provider
- Worker-based Tasks: Define reusable workers for specific AI tasks
- Template System: ERB-based prompt templates with parameter injection
- Conversation Management: Built-in conversation history and context management
- Streaming Support: Real-time response streaming for better user experience
- Tool Calling: Native support for function calling and tool integration
- Retry Logic: Robust error handling with configurable retry mechanisms
- Embeddings: Text embedding generation for semantic search and RAG applications
- Configuration-driven: YAML-based configuration for easy deployment management
- Comprehensive Logging: Detailed logging for debugging and monitoring
- Error Handling: Graceful error handling with custom exception types
- Performance Optimized: Efficient resource usage and response caching
- Thread Safe: Safe for concurrent usage in multi-threaded applications
Add to your Gemfile:

```ruby
gem 'smart_prompt'
```

Then execute:

```sh
$ bundle install
```

Or install directly:

```sh
$ gem install smart_prompt
```

Create a YAML configuration file (config/smart_prompt.yml):
```yaml
# Adapter definitions
adapters:
  openai: OpenAIAdapter

# LLM configurations
llms:
  SiliconFlow:
    adapter: openai
    url: https://api.siliconflow.cn/v1/
    api_key: ENV["APIKey"]
    default_model: Qwen/Qwen2.5-7B-Instruct
  llamacpp:
    adapter: openai
    url: http://localhost:8080/
  ollama:
    adapter: openai
    url: http://localhost:11434/
    default_model: deepseek-r1
  deepseek:
    adapter: openai
    url: https://api.deepseek.com
    api_key: ENV["DSKEY"]
    default_model: deepseek-reasoner

# Default settings
default_llm: SiliconFlow
template_path: "./templates"
worker_path: "./workers"
logger_file: "./logs/smart_prompt.log"
```
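The api_key values reference environment variables, so export the matching names before starting your application (the variable names below are taken from the sample config above):

```sh
export APIKey="your-siliconflow-key"
export DSKEY="your-deepseek-key"
```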
Create template files in your templates/ directory. For example, templates/chat.erb:

```erb
You are a helpful assistant. Please respond to the following question:
Question: <%= question %>
Context: <%= context || "No additional context provided" %>
```
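Templates are plain ERB, so loops and conditionals work too. Here is a hypothetical templates/summarize.erb (not part of the gem; shown only to illustrate iterating over a list parameter):

```erb
Summarize the following <%= documents.size %> documents as <%= style || "bullet points" %>:
<% documents.each_with_index do |doc, i| %>
Document <%= i + 1 %>: <%= doc %>
<% end %>
```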
Create worker files in your workers/ directory. For example, workers/chat_worker.rb:

```ruby
SmartPrompt.define_worker :chat_assistant do
  # Use a specific LLM
  use "SiliconFlow"
  model "deepseek-ai/DeepSeek-V3"

  # Set system message
  sys_msg("You are a helpful AI assistant.", params)

  # Use template with parameters
  prompt(:chat, {
    question: params[:question],
    context: params[:context]
  })

  # Send message and return response
  send_msg
end
```

With configuration, templates, and workers in place, call the worker from your application:

```ruby
require 'smart_prompt'

# Initialize engine with config
engine = SmartPrompt::Engine.new('config/smart_prompt.yml')

# Execute worker
result = engine.call_worker(:chat_assistant, {
  question: "What is machine learning?",
  context: "We're discussing AI technologies"
})
puts result
```
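Provider calls can fail transiently (timeouts, rate limits). Beyond the gem's built-in retry logic, you can guard calls yourself. A minimal sketch — it rescues StandardError because the gem's custom exception classes aren't listed here; substitute the specific types once you know them:

```ruby
begin
  attempts ||= 0
  result = engine.call_worker(:chat_assistant, { question: "What is machine learning?" })
rescue StandardError => e
  attempts += 1
  retry if attempts < 3  # crude backoff-free retry; tune for your workload
  warn "LLM call failed after #{attempts} attempts: #{e.message}"
end
```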
Streaming delivers responses incrementally for a better user experience:

```ruby
# Define streaming worker
SmartPrompt.define_worker :streaming_chat do
  use "deepseek"
  model "deepseek-chat"
  sys_msg("You are a helpful assistant.")
  prompt(params[:message])
  send_msg
end

# Use with streaming
engine.call_worker_by_stream(:streaming_chat, {
  message: "Tell me a story"
}) do |chunk, bytesize|
  print chunk.dig("choices", 0, "delta", "content")
end
```
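Chunks follow the OpenAI-style delta format shown above, so you can accumulate the full reply while printing it. The nil guard below is for chunks that carry no content, such as a final stop chunk (an assumption worth verifying against your provider):

```ruby
full_reply = +""  # mutable string buffer

engine.call_worker_by_stream(:streaming_chat, { message: "Tell me a story" }) do |chunk, _bytesize|
  piece = chunk.dig("choices", 0, "delta", "content")
  next unless piece

  print piece
  full_reply << piece
end
```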
Tool calling is supported natively; a worker can declare tools and pass them along with the request:

```ruby
# Define worker with tools
SmartPrompt.define_worker :assistant_with_tools do
  use "SiliconFlow"
  model "Qwen/Qwen3-235B-A22B"

  tools = [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get weather information for a location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state"
            }
          },
          required: ["location"]
        }
      }
    }
  ]

  sys_msg("You can help with weather queries using available tools.", params)
  prompt(params[:message])
  # Note: Hash#merge returns a new hash without mutating params; if the DSL
  # expects the tools to land in params, merge! may be needed here.
  params.merge(tools: tools)
  send_msg
end
```
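The example stops at `send_msg`; what comes back when the model decides to call a tool depends on the adapter. Assuming an OpenAI-style response hash with a "tool_calls" array (an assumption to verify against your adapter), dispatching to a local Ruby method could look like this:

```ruby
require 'json'

# Hypothetical local implementation of the declared tool
def get_weather(location:)
  "It is sunny in #{location}"  # stand-in for a real weather lookup
end

response = engine.call_worker(:assistant_with_tools, { message: "Weather in Boston, MA?" })

# Extract tool calls if the raw response follows the OpenAI chat-completions shape
tool_calls = response.dig("choices", 0, "message", "tool_calls") if response.is_a?(Hash)
Array(tool_calls).each do |call|
  next unless call.dig("function", "name") == "get_weather"

  args = JSON.parse(call.dig("function", "arguments"))
  puts get_weather(location: args["location"])
end
```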
use "deepseek"
model "deepseek-chat"
sys_msg("You are a helpful assistant that remembers conversation context.")
prompt(params[:message], with_history: true)
send_msg
endSmartPrompt.define_worker :text_embedder do
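A usage sketch, assuming (as the feature list suggests) that history accumulates on the engine between calls — verify the exact scoping against the gem's conversation management:

```ruby
engine.call_worker(:conversational_chat, { message: "My name is Ada." })
reply = engine.call_worker(:conversational_chat, { message: "What is my name?" })
puts reply  # should mention "Ada" if the history was carried over
```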
use "SiliconFlow"
model "BAAI/bge-m3"
prompt params[:text]
embeddings(params[:dimensions] || 1024)
end
# Usage
embeddings = engine.call_worker(:text_embedder, {
text: "Convert this text to embeddings",
dimensions: 1024
})SmartPrompt follows a modular architecture:
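A returned vector can then be compared against stored document vectors. A minimal cosine-similarity sketch in plain Ruby — it assumes the worker returns the embedding as an array of floats, which is an assumption to verify:

```ruby
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

query_vec = engine.call_worker(:text_embedder, { text: "What is machine learning?" })
doc_vec   = engine.call_worker(:text_embedder, { text: "Machine learning basics" })
puts cosine_similarity(query_vec, doc_vec)  # closer to 1.0 means more similar
```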
SmartPrompt follows a modular architecture:

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Application   │     │   SmartPrompt    │     │  LLM Provider   │
│                 │────►│      Engine      │────►│   (OpenAI/      │
│                 │     │                  │     │   Llama.cpp)    │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                        ┌────────┼────────┐
                        │        │        │
                    ┌───▼───┐ ┌──▼──┐ ┌───▼────┐
                    │Workers│ │Conv.│ │Template│
                    │       │ │Mgmt │ │ System │
                    └───────┘ └─────┘ └────────┘
```
- Engine: Central orchestrator managing configuration, adapters, and workers
- Workers: Reusable task definitions with embedded business logic
- Conversation: Context and message history management
- Adapters: LLM provider integrations (OpenAI, Llama.cpp, etc.)
- Templates: ERB-based prompt template system
Configuration reference. Adapter configuration:

```yaml
adapters:
  openai: "OpenAIAdapter"  # For OpenAI API
```

LLM configuration:

```yaml
llms:
  model_name:
    adapter: "adapter_name"
    api_key: "your_api_key"  # Can use ENV['KEY_NAME']
    url: "https://api.url"
    model: "model_identifier"
    temperature: 0.7
    # Additional provider-specific options
```

Global settings:

```yaml
template_path: "./templates"   # Directory for .erb templates
worker_path: "./workers"       # Directory for worker definitions
logger_file: "./logs/app.log"  # Log file location
```

Run the test suite:
```sh
bundle exec rake test
```

For development, you can use the console:

```sh
bundle exec bin/console
```

In a Rails application, wrap the engine in a small service object:

```ruby
# config/initializers/smart_prompt.rb
class SmartPromptService
  def self.engine
    @engine ||= SmartPrompt::Engine.new(
      Rails.root.join('config', 'smart_prompt.yml')
    )
  end

  def self.chat(message, context: nil)
    engine.call_worker(:chat_assistant, {
      question: message,
      context: context
    })
  end
end

# In your controller
class ChatController < ApplicationController
  def create
    response = SmartPromptService.chat(
      params[:message],
      context: session[:conversation_context]
    )
    render json: { response: response }
  end
end
```

For background processing, call workers from an ActiveJob:

```ruby
class LLMProcessingJob < ApplicationJob
  def perform(task_type, parameters)
    engine = SmartPrompt::Engine.new('config/smart_prompt.yml')
    result = engine.call_worker(task_type.to_sym, parameters)

    # Process result...
    NotificationService.send_completion(result)
  end
end
```
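Enqueue it like any other ActiveJob (the worker name and payload here are illustrative):

```ruby
LLMProcessingJob.perform_later("chat_assistant", { "question" => "Summarize today's tickets" })
```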
Typical use cases:

- Chatbots and Conversational AI: Build sophisticated chatbots with context awareness
- Content Generation: Automated content creation with template-driven prompts
- Code Analysis: AI-powered code review and documentation generation
- Customer Support: Intelligent ticket routing and response suggestions
- Data Processing: LLM-powered data extraction and transformation
- Educational Tools: AI tutors and learning assistance systems
Planned for future releases:

- Additional LLM provider adapters (Anthropic Claude, Google PaLM)
- Visual prompt builder and management interface
- Enhanced caching and performance optimizations
- Integration with vector databases for RAG applications
- Built-in evaluation and testing framework for prompts
- Distributed worker execution support
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -am 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE.txt file for details.
- Built with ❤️ by the SmartPrompt team
- Inspired by the need for elegant LLM integration in Ruby applications
- Thanks to all contributors and the Ruby community
- Documentation
- Issue Tracker
- Discussions
- Email: [email protected]
SmartPrompt - Making LLM integration in Ruby applications simple, powerful, and elegant.