Releases: crmne/ruby_llm
1.8.0
RubyLLM 1.8.0: Video Support & Content Moderation 🎥🛡️
Major feature release bringing video file support for multimodal models and content moderation capabilities to ensure safer AI interactions.
🎥 Video File Support
Full video file support for models with video capabilities:
# Local video files
chat = RubyLLM.chat(model: "gemini-2.5-flash")
response = chat.ask("What happens in this video?", with: "video.mp4")
# Remote video URLs (with or without extensions)
response = chat.ask("Describe this video", with: "https://example.com/video")
# Multiple attachments including video
response = chat.ask("Compare these", with: ["image.jpg", "video.mp4"])
Features:
- Automatic MIME type detection for video formats
- Support for remote videos without file extensions
- Seamless integration with existing attachment system
- Full support for Gemini and VertexAI video-capable models
🛡️ Content Moderation
New content moderation API to identify potentially harmful content before sending to LLMs:
# Basic moderation
result = RubyLLM.moderate("User input text")
puts result.flagged? # => true/false
puts result.flagged_categories # => ["harassment", "hate"]
# Integration pattern - screen before chat
def safe_chat(user_input)
moderation = RubyLLM.moderate(user_input)
return "Content not allowed" if moderation.flagged?
RubyLLM.chat.ask(user_input)
end
# Check specific categories
result = RubyLLM.moderate("Some text")
puts result.category_scores["harassment"] # => 0.0234
puts result.category_scores["violence"] # => 0.0012
Features:
- Detects sexual, hate, harassment, violence, self-harm, and other harmful content
- Convenience methods: flagged?, flagged_categories, category_scores
- Currently supports OpenAI's moderation API
- Extensible architecture for future providers
- Configurable default model (defaults to omni-moderation-latest; see the sketch below)
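For the default moderation model, here is a minimal configuration sketch. Note that the config key name (default_moderation_model) is an assumption modelled on RubyLLM's other default_* settings, not something confirmed by these notes:
# Hedged sketch: set the default moderation model globally (key name assumed)
RubyLLM.configure do |config|
  config.default_moderation_model = "omni-moderation-latest"
end
result = RubyLLM.moderate("Some user input")
result.flagged? # => false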
🐛 Bug Fixes
Rails Inflection Issue
- Fixed critical bug where Rails apps using an Llm module/namespace would break due to inflection conflicts; RubyLLM now properly isolates its inflections
Migration Foreign Key Errors
- Fixed install generator creating migrations with foreign key references to non-existent tables
- Migrations now create tables first, then add references in the correct order (see the sketch below)
- Prevents "relation does not exist" errors in PostgreSQL and other databases
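As a rough illustration of the new ordering (hypothetical migration names, not the generator's literal output):
# Each table is created before anything references it
class CreateChats < ActiveRecord::Migration[7.1]
  def change
    create_table :chats do |t|
      t.timestamps
    end
  end
end
class CreateMessages < ActiveRecord::Migration[7.1]
  def change
    create_table :messages do |t|
      t.references :chat, null: false, foreign_key: true # :chats already exists
      t.timestamps
    end
  end
end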
Model Registry Improvements
- Fixed Models.resolve instance method delegation
- Fixed helper methods to return all models supporting specific modalities: image_models, audio_models, and embedding_models now correctly include all capable models (see the sketch below)
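A quick hedged sketch of those registry helpers (return values depend on your configured providers; the IDs shown are illustrative):
RubyLLM.models.image_models.map(&:id)     # e.g. ["gpt-image-1", ...]
RubyLLM.models.audio_models.count
RubyLLM.models.embedding_models.map(&:id) # e.g. ["text-embedding-3-small", "mistral-embed", ...]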
📚 Documentation
- Added comprehensive moderation guide with Rails integration examples
- Updated video support documentation with examples
- Clarified version requirements in documentation
Installation
gem 'ruby_llm', '1.8.0'
Upgrading from 1.7.x
bundle update ruby_llm
All changes are backward compatible. New features are opt-in.
Merged PRs
- Fix create_table migrations to prevent foreign key errors (#409) by @matiasmoya in #411
- Fix: Add resolve method delegation from Models instance to class by @kieranklaassen in #407
- Models helps should return all supporting modalities by @dacamp in #408
- Add Content Moderation Feature by @iraszl in #383
- Add video file support by @altxtech in #405 and @arnodirlam in #260
New Contributors
- @matiasmoya made their first contribution in #411
- @dacamp made their first contribution in #408
- @iraszl made their first contribution in #383
- @altxtech made their first contribution in #405
- @arnodirlam made their first contribution in #260
Full Changelog: 1.7.1...1.8.0
1.7.1
RubyLLM 1.7.1: Generator Fixes & Enhanced Upgrades 🔧
Bug fixes and improvements for Rails generators, with special focus on namespaced models and automatic migration of existing apps.
🐛 Critical Fixes
Namespaced Model Support
The generators now properly handle namespaced models throughout:
# Now works correctly with namespaced models
rails g ruby_llm:install chat:LLM::Chat message:LLM::Message model:LLM::Model
rails g ruby_llm:upgrade_to_v1_7 chat:Assistant::Chat message:Assistant::Message
Fixed issues:
- Invalid table names like :assistant/chats → :assistant_chats ✅
- Model migration failures with namespaced models ✅
- Foreign key migrations now handle custom table names correctly ✅
- Namespace modules automatically created with table_name_prefix ✅
✨ Automatic acts_as API Migration
The upgrade generator now automatically converts from the old acts_as API to the new one:
# BEFORE running upgrade generator (OLD API)
class Conversation < ApplicationRecord
acts_as_chat message_class: 'ChatMessage', tool_call_class: 'AIToolCall'
end
class ChatMessage < ApplicationRecord
acts_as_message chat_class: 'Conversation', chat_foreign_key: 'conversation_id'
end
# AFTER running upgrade generator (NEW API)
class Conversation < ApplicationRecord
acts_as_chat messages: :chat_messages, message_class: 'ChatMessage', model: :model
end
class ChatMessage < ApplicationRecord
acts_as_message chat: :conversation, chat_class: 'Conversation', tool_calls: :ai_tool_calls, tool_call_class: 'AIToolCall', model: :model
end
The generator:
- Converts from the old *_class parameters to the new association-based parameters
- Adds the new model association to all models
- Preserves custom class names and associations
- Handles both simple and complex/namespaced models
No manual changes needed - the generator handles the complete API migration automatically! 🎉
🏗️ Generator Architecture Improvements
DRY Generator Code
Created a shared GeneratorHelpers module to eliminate duplication between generators:
- Shared acts_as_* declaration logic
- Common database detection methods
- Unified namespace handling
- Consistent table name generation
Better Rails Conventions
- Generators reorganized into proper subdirectories
- Private methods moved to conventional location
- Follows Rails generator best practices
- Cleaner, more maintainable code
🚨 Troubleshooting Helper
Added clear troubleshooting for the most common upgrade issue:
# If you see: "undefined local variable or method 'acts_as_model'"
# Add this to config/application.rb BEFORE your Application class:
RubyLLM.configure do |config|
config.use_new_acts_as = true
end
module YourApp
class Application < Rails::Application
# ...
end
end
The upgrade generator now shows this warning proactively and documentation includes a dedicated troubleshooting section.
🔄 Migration Improvements
- Fixed instance variable usage in migration templates
- Better handling of existing Model tables during upgrade
- Initializer creation if missing during upgrade
- Simplified upgrade instructions pointing to migration guide
Installation
gem 'ruby_llm', '1.7.1'
Upgrading from 1.7.0
Just update your gem - all fixes are backward compatible:
bundle update ruby_llm
Upgrading from 1.6.x
Use the improved upgrade generator:
rails generate ruby_llm:upgrade_to_v1_7
rails db:migrate
The generator now handles everything automatically, including updating your model files!
Merged PRs
- Fix namespaced model table names in upgrade generator by @willcosgrove in #398
- Fix namespaced models in Model migration and foreign key migrations by @willcosgrove in #399
New Contributors
- @willcosgrove made their first contribution in #398
Full Changelog: 1.7.0...1.7.1
1.7.0
RubyLLM 1.7: Rails Revolution & Vertex AI 🚀
Major Rails integration overhaul bringing database-backed models, UI generators, and a more intuitive acts_as API. Plus Google Cloud Vertex AI support, regional AWS Bedrock, and streamlined installation!
🌟 Google Cloud Vertex AI Support
Full Vertex AI provider integration with dynamic model discovery:
# Add to your Gemfile:
gem "googleauth" # Required for Vertex AI authentication
# Configure Vertex AI:
RubyLLM.configure do |config|
config.vertexai_project_id = "your-project"
config.vertexai_location = "us-central1"
end
# Access Gemini and other Google models through Vertex AI
chat = RubyLLM.chat(model: "gemini-2.5-pro", provider: :vertexai)
response = chat.ask("What can you do?")
Features:
- Dynamic model fetching from Vertex AI API with pagination
- Automatic discovery of Gemini foundation models
- Metadata enrichment from Parsera API
- Full chat and embeddings support
- Seamless integration with existing Gemini provider
- Uses Application Default Credentials (ADC) for authentication
🎉 New Rails-Like acts_as API
The Rails integration gets a massive upgrade with a more intuitive, Rails-like API:
# OLD way (still works, deprecated in v2.0)
class Chat < ApplicationRecord
acts_as_chat message_class: 'Message', tool_call_class: 'ToolCall'
end
# NEW way - use association names as primary parameters!
class Chat < ApplicationRecord
acts_as_chat messages: :messages, model: :model
end
class Message < ApplicationRecord
acts_as_message chat: :chat, tool_calls: :tool_calls, model: :model
end
Two-Command Upgrade
Existing apps can upgrade seamlessly:
# Step 1: Run the upgrade generator
rails generate ruby_llm:upgrade_to_v1_7
# Step 2: Run migrations
rails db:migrate
That's it! The upgrade generator:
- Creates the models table if needed
- Automatically adds config.use_new_acts_as = true to your initializer
- Migrates your existing data to use foreign keys
- Preserves all your data (old string columns renamed to model_id_string)
🖥️ Complete Chat UI Generator
Build a full chat interface with one command:
# Generate complete chat UI with Turbo streaming
rails generate ruby_llm:chat_ui
This creates:
- Controllers: Chat and message controllers with Rails best practices
- Views: Clean HTML views for chat list, creation, and messaging
- Models page: Browse available AI models
- Turbo Streams: Real-time message updates
- Background job: Streaming AI responses
- Model selector: Choose models in chat creation
The UI is intentionally simple and clean - perfect for customization!
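For orientation, here is a hypothetical sketch of the streaming background-job pattern the generated code follows (class, target, and partial names are illustrative, not the generator's literal output):
class ChatResponseJob < ApplicationJob
  queue_as :default
  def perform(chat_id, content)
    chat = Chat.find(chat_id)
    # Stream each chunk to the UI as it arrives
    chat.ask(content) do |chunk|
      Turbo::StreamsChannel.broadcast_append_to(
        chat,
        target: "messages",
        partial: "messages/chunk",
        locals: { chunk: chunk.content }
      )
    end
  end
end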
💾 Database-Backed Model Registry
Models are now first-class ActiveRecord objects with rich metadata:
# Chat.create! has the same interface as RubyLLM.chat (PORO)
chat = Chat.create! # Uses default model from config
chat = Chat.create!(model: "gpt-4o-mini") # Specify model
chat = Chat.create!(model: "claude-3-5-haiku", provider: "bedrock") # Cross-provider
chat = Chat.create!(
model: "experimental-llm-v2",
provider: "openrouter",
assume_model_exists: true # Creates Model record if not found
)
# Access rich model metadata through associations
chat.model.context_window # => 128000
chat.model.capabilities # => ["streaming", "function_calling", "structured_output"]
chat.model.pricing["text_tokens"]["standard"]["input_per_million"] # => 2.5
# Works with Model objects too
model = Model.find_by(model_id: "gpt-4o")
chat = Chat.create!(model: model)
# Refresh models from provider APIs
Model.refresh! # Populates/updates models table from all configured providers
The install generator creates a Model model by default:
# Custom model names supported
rails g ruby_llm:install chat:Discussion message:Comment model:LLModel
🌍 AWS Bedrock Regional Support
Cross-region inference now works correctly in all AWS regions:
# EU regions now work!
RubyLLM.configure do |config|
config.bedrock_region = "eu-west-3"
end
# Automatically uses correct region prefix:
# - EU: eu.anthropic.claude-3-sonnet...
# - US: us.anthropic.claude-3-sonnet...
# - AP: ap.anthropic.claude-3-sonnet...
# - CA: ca.anthropic.claude-3-sonnet...
Thanks to @elthariel for the contribution! (#338)
🎵 MP3 Audio Support Fixed
OpenAI's Whisper API now correctly handles MP3 files:
# Previously failed with MIME type errors
chat.add_attachment("audio.mp3")
response = chat.ask("Transcribe this audio") # Now works!
The fix properly converts the audio/mpeg MIME type to the mp3 format string OpenAI expects. (#390)
🚀 Performance & Developer Experience
Simplified Installation
The post-install message is now concise and helpful, pointing to docs instead of overwhelming with text.
Better Generator Experience
All generators now support consistent interfaces:
# All use the same pattern
rails g ruby_llm:install chat:Chat message:Message
rails g ruby_llm:upgrade_to_v1_7 chat:Chat message:Message
rails g ruby_llm:chat_ui
ActiveStorage Integration
The install generator now automatically:
- Installs ActiveStorage if not present
- Configures RubyLLM for attachment support
- Ensures smooth multimodal experiences out of the box
🔧 Fixes & Improvements
Provider Enhancements
- Local provider models: Models.refresh! now supports Ollama and GPUStack with proper capability mapping
- Provider architecture: Providers no longer call RubyLLM.models.find internally (#366)
- Tool calling: Fixed OpenAI tool calls with missing function.arguments (#385, thanks @elthariel!)
- Streaming callbacks: on_new_message now fires before the API request, not after the first chunk (#367); see the sketch below
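A hedged sketch of the new callback ordering using the chat callbacks (output shown is illustrative):
chat = RubyLLM.chat
chat.on_new_message { puts "assistant message started (before the API request)" }
    .on_end_message { |message| puts "done: #{message.output_tokens} output tokens" }
chat.ask("Hello!")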
Documentation & Testing
- Documentation variables: Model names now use variables for easier updates
- IRB compatibility: #instance_variables method now public for the ls command (#374, thanks @matijs!)
- Test improvements: Fixed CI issues with acts_as modules and database initialization
- VCR enhancements: Better VertexAI recording and cassette management
Breaking Changes
with_params Behavior
with_params now takes precedence over internal defaults, allowing full control:
# You can now override ANY parameter
chat.with_params(max_tokens: 100) # This now works!
chat.with_params(tools: [web_search_tool]) # Provider-specific features
Set RUBYLLM_DEBUG=true to see exactly what's being sent to the API.
Installation
gem 'ruby_llm', '1.7.0'
Upgrading from 1.6.x
- Update your Gemfile
- Run bundle update ruby_llm
- Run rails generate ruby_llm:upgrade_to_v1_7
- Run rails db:migrate
- Enjoy the new features! 🎉
Full backward compatibility maintained - the old acts_as API continues working with a deprecation warning.
Merged PRs
- Add missing code block ending in the docs by @AlexVPopov in #368
- Make overrides of #instance_variables method public by @mvz in #374
- Handle missing function.arguments in OpenAI tool_calls by @elthariel in #385
- Inference regions by @ESegundoRolon in #338
New Contributors
- @AlexVPopov made their first contribution in #368
- @mvz made their first contribution in #374
- @elthariel made their first contribution in #385
- @ESegundoRolon made their first contribution in #338
Full Changelog: 1.6.4...1.7.0
1.6.4
RubyLLM 1.6.4: Multimodal Tools & Better Schemas 🖼️
Maintenance release bringing multimodal tool responses, improved rake tasks, and important fixes for Gemini schema conversion. Plus better documentation and developer experience!
🖼️ Tools Can Now Return Files and Images
Tools can now return rich content with attachments, not just text! Perfect for screenshot tools, document generators, and visual analyzers:
class ScreenshotTool < RubyLLM::Tool
description "Takes a screenshot and returns it"
param :url, desc: "URL to screenshot"
def execute(url:)
screenshot_path = capture_screenshot(url) # Your screenshot logic
# Return a Content object with text and attachments
RubyLLM::Content.new(
"Screenshot of #{url} captured successfully",
[screenshot_path] # Can be file path, StringIO, or ActiveStorage blob
)
end
end
# The LLM can now see and analyze the screenshot
chat = RubyLLM.chat.with_tool(ScreenshotTool)
response = chat.ask("Take a screenshot of ruby-lang.org and describe what you see")
This opens up powerful workflows:
- Visual debugging: Screenshot tools that capture and analyze UI states
- Document generation: Tools that create PDFs and return them for review
- Data visualization: Generate charts and have the LLM interpret them
- Multi-step workflows: Chain tools that produce and consume visual content
Works with all providers that support multimodal content.
🔧 Fixed: Gemini Schema Conversion
Gemini's structured output was not preserving all schema fields, and integer schemas were being converted to number. The conversion logic now correctly handles:
# Preserve description
schema = {
type: 'object',
description: 'An object',
properties: {
example: {
type: "string",
description: "a brief description about the person's time at the conference"
}
},
required: ['example']
}
# Define schema with both number and integer types
schema = {
type: 'object',
properties: {
number1: {
type: 'number',
},
number2: {
type: 'integer',
}
}
}
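For context, a hedged sketch of how such a schema is typically used with structured output via with_schema (the parsed Hash return shape is assumed here):
chat = RubyLLM.chat(model: 'gemini-2.5-flash')
response = chat.with_schema(schema).ask("Fill in the fields for this person")
response.content # => a Hash matching the schema above (return shape assumed)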
Also added tests covering simple and complex schemas, nested objects and arrays, all constraint attributes, nullable fields, descriptions, and property ordering for objects.
Thanks to @BrianBorge for reporting and working on the initial PR.
🛠️ Developer Experience: Improved Rake Tasks
Consolidated Model Management
All model-related tasks are now streamlined and better organized:
# Default task now runs overcommit hooks + model updates
bundle exec rake
# Update models, generate docs, and create aliases in one command
bundle exec rake models
# Individual tasks still available
bundle exec rake models:update # Fetch latest models from providers
bundle exec rake models:docs # Generate model documentation
bundle exec rake models:aliases # Generate model aliases
The tasks have been refactored from 3 separate files into a single, well-organized models.rake file following Rails conventions.
Release Preparation
New comprehensive release preparation task:
# Prepare for release: refresh cassettes, run hooks, update models
bundle exec rake release:prepare
This task:
- Automatically refreshes stale VCR cassettes (>1 day old)
- Runs overcommit hooks for code quality
- Updates models, docs, and aliases
- Ensures everything is ready for a clean release
Cassette Management
# Verify cassettes are fresh
bundle exec rake release:verify_cassettes
# Refresh stale cassettes automatically
bundle exec rake release:refresh_stale_cassettes
📚 Documentation Updates
- Redirect fix: /installation now properly redirects to /getting-started
- Badge refresh: README badges updated to bust GitHub's cache
- Async pattern fix: Corrected supervisor pattern example in agentic workflows guide to avoid "Cannot wait on own fiber!" errors
🧹 Additional Updates
- Appraisal gemfiles updated: All Rails version test matrices refreshed
- Test coverage: New specs for multimodal tool responses
- Provider compatibility: Verified with latest API versions
Installation
gem 'ruby_llm', '1.6.4'
Full backward compatibility maintained. The multimodal tool support is opt-in - existing tools continue working as before.
Full Changelog: 1.6.3...1.6.4
1.6.3
RubyLLM 1.6.3: Smarter Defaults & Easier Contributing 🎯
Maintenance release improving model temperature defaults and making it easier to contribute to the project.
🌡️ Models Now Use Their Own Temperature Defaults
We were overriding model defaults with temperature: 0.7 everywhere. But different models have different optimal defaults - why fight them?
# Before 1.6.3: Always forced temperature to 0.7
chat = RubyLLM.chat(model: 'claude-3-5-haiku') # temperature: 0.7
# Now: Models use their native defaults
chat = RubyLLM.chat(model: 'claude-3-5-haiku') # temperature: (model's default)
chat = RubyLLM.chat(model: 'gpt-4.1-nano') # temperature: (model's default)
# You can still override when needed
chat = RubyLLM.chat(model: 'claude-3-5-haiku', temperature: 0.9)
Each provider knows the best default for their models. OpenAI might default to 1.0, Anthropic to something else - we now respect those choices. Fixes #349.
🤝 Easier Contributing: Overcommit Improvements
The RakeTarget pre-commit hook was making it difficult for contributors to submit PRs. As noted in discussion #335, the hook would run models:update, which deleted models if you didn't have API keys for all providers!
# Removed from .overcommit.yml:
RakeTarget:
enabled: true
command: ['bundle', 'exec', 'rake']
targets: ['models:update', 'models:docs', 'aliases:generate']
Now contributors can:
- Submit PRs without having all provider API keys
- Make targeted changes without unintended side effects
- Focus on their contributions without fighting the tooling
The model registry is now maintained centrally rather than requiring every contributor to have complete API access.
🧹 Additional Updates
- Test cassettes refreshed: All VCR cassettes updated for reliable testing
- Model registry updated: Latest models from all providers
Installation
gem 'ruby_llm', '1.6.3'
Full backward compatibility maintained. The temperature change means your models might behave slightly differently (using their native defaults), but you can always explicitly set temperature if you need the old behavior.
Full Changelog: 1.6.2...1.6.3
1.6.2
RubyLLM 1.6.2: Thinking Tokens & Performance 🧠
Quick maintenance release fixing Gemini's thinking token counting and bringing performance improvements. Plus we're removing capability gatekeeping - trust providers to know what they can do!
🧮 Fixed: Gemini Thinking Token Counting
Gemini 2.5 with thinking mode wasn't counting tokens correctly, leading to incorrect billing calculations:
# Before: Only counted candidatesTokenCount (109 tokens)
# Actual API response had:
# candidatesTokenCount: 109
# thoughtsTokenCount: 443
# => Should be 552 total!
# Now: Correctly sums both token types
chat = RubyLLM.chat(model: 'gemini-2.5-flash')
response = chat.ask('What is 2+2? Think step by step.')
response.output_tokens # => 552 (correctly summed)
This aligns with how all providers bill thinking/reasoning tokens - they're all output tokens. Fixes #346.
🚫 Capability Gatekeeping Removed
We were pre-checking if models support certain features before attempting to use them. But sometimes pre-emptive checks were getting in the way:
# Before 1.6.2: Pre-checked capabilities before attempting
chat.with_tool(MyTool) # => UnsupportedFunctionsError (without trying)
# Now: Let the provider handle it
chat.with_tool(MyTool) # Works if supported, provider errors if not
Why this approach is better:
- Direct feedback - Get the actual provider error, not our pre-emptive block
- Immediate support - New models and features work as soon as providers ship them
- Custom models - Fine-tuned and custom models aren't artificially limited
- Simpler flow - One less layer of validation between you and the provider
The provider knows what it can do. If it works, great! If not, you'll get a clear error from the source.
Same philosophy applies to structured output (with_schema).
⚡ Performance Improvements
Thanks to @tagliala for introducing RuboCop Performance (#316), bringing multiple optimizations:
- More efficient string operations
- Better collection handling
- Optimized method calls
- Reduced object allocations
Every little bit helps when you're streaming thousands of tokens!
🐛 Additional Fixes
- Logging cleanup: Removed unnecessary "assuming model exists" debug logging after capability gatekeeping removal
- Test improvements: Real API tests for token counting verification
Installation
gem 'ruby_llm', '1.6.2'
Full backward compatibility maintained. If you're using Gemini with thinking mode, this update is recommended for accurate token counting.
Full Changelog: 1.6.1...1.6.2
1.6.1
RubyLLM 1.6.1: Tool Choice Freedom and Error Recovery 🛠️
Quick maintenance release with important fixes for tool calling and error recovery. Shipped three days after 1.6.0 because why make you wait?
🎉 Milestone: 2,700+ GitHub stars and 1.9 million downloads! Thank you to our amazing community.
🔧 OpenAI Tool Choice Flexibility
OpenAI defaults to 'auto' when tools are present. We were hardcoding it, blocking your overrides:
# Before: Couldn't override tool_choice
chat.with_params(tool_choice: 'required') # Ignored!
# Now: Your choice wins
chat.with_params(tool_choice: 'required') # Works as expected
chat.with_params(tool_choice: 'none') # Disable tools temporarily
chat.with_params(tool_choice: { type: 'function', function: { name: 'specific_tool' }}) # Force specific tool
Thanks to @imtyM for catching this and fixing it in #336.
🔄 Orphaned Tool Message Cleanup
Rate limits can interrupt tool execution mid-flow, leaving orphaned tool result messages. This caused 'tool_use without tool_result' errors on retry:
# The problem flow:
# 1. Tool executes successfully
# 2. Tool result message saved
# 3. Rate limit hits before assistant response
# 4. Retry fails: orphaned tool result confuses the API
# Now: Automatic cleanup on error
chat.ask("Use this tool") # Rate limit? No problem. Orphaned messages cleaned up.
The fix handles both orphaned tool calls and tool results, ensuring clean conversation state after errors.
🔄 Tool Switching Mid-Conversation
New replace: true parameter lets you completely switch or remove tool contexts during a conversation:
# Start with search tools
chat = RubyLLM.chat.with_tools([SearchTool, WikipediaTool])
chat.ask("Research Ruby's history")
# Switch to code tools for implementation
chat.with_tools([CodeWriterTool, TestRunnerTool], replace: true)
chat.ask("Now implement a Ruby parser") # Only code tools available
# Remove all tools for pure conversation
chat.with_tools(nil, replace: true)
chat.ask("Explain your implementation choices") # No tools, just reasoning
# Add review tools when needed
chat.with_tools([LinterTool, SecurityScanTool], replace: true)
chat.ask("Review the code for issues") # Only review tools available
Perfect for multi-phase workflows where different stages need different capabilities - or no tools at all.
🐛 Additional Fixes
- JRuby compatibility: Fixed test mocks for Ruby 3.1 compatibility
- Documentation: Fixed code example in tool documentation (thanks @tpaulshippy)
- Models update: Latest model registry from all providers
- GPT-5 temperature: Fixed unsupported temperature parameter for reasoning models (#339)
Installation
gem 'ruby_llm', '1.6.1'
Full backward compatibility maintained. If you're using OpenAI tools or seeing rate limit errors, this update is recommended.
Merged PRs
- Fix small bug in doc by @tpaulshippy in #340
- Fix: remove tool_choice: auto param for open ai by @imtyM in #336
Full Changelog: 1.6.0...1.6.1
1.6.0
RubyLLM 1.6.0: Custom Headers, Tool Control, and Provider Classes 🚀
Unlock provider beta features, build sophisticated agent systems, and enjoy a cleaner architecture. Plus a complete documentation overhaul with dark mode!
✨ Custom HTTP Headers
Access cutting-edge provider features that were previously off-limits:
# Enable Anthropic's beta streaming features
chat = RubyLLM.chat(model: 'claude-3-5-sonnet')
.with_headers('anthropic-beta' => 'fine-grained-tool-streaming-2025-05-14')
# Works with method chaining
response = chat.with_temperature(0.5)
.with_headers('X-Custom-Feature' => 'enabled')
.ask("Stream me some tool calls")
Your headers can't override authentication - provider security always wins. As it should be.
🛑 Tool Halt: Skip LLM Commentary (When Needed)
For rare cases where you need to skip the LLM's helpful summaries, tools can now halt continuation:
class SaveFileTool < RubyLLM::Tool
description "Save content to a file"
param :path, desc: "File path"
param :content, desc: "File content"
def execute(path:, content:)
File.write(path, content)
halt "Saved to #{path}" # Skip the "I've successfully saved..." commentary
end
end
Note: Sub-agents work perfectly without halt! The LLM's summaries are usually helpful:
class SpecialistAgent < RubyLLM::Tool
description "Delegate to specialist for complex questions"
param :question, desc: "The question to ask"
def execute(question:)
expert = RubyLLM.chat(model: 'claude-3-opus')
expert.ask(question).content # Works great without halt
# The router will summarize the expert's response naturally
end
end
Only use halt when you specifically need to bypass the LLM's continuation.
🎯 New Models & Providers
Latest Models
- Opus 4.1 - Anthropic's most capable model now available
- GPT-OSS and GPT-5 series - OpenAI's new lineup
- Switched to gpt-image-1 as the default image generation model (see the sketch below)
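A small hedged sketch of the new default in action via RubyLLM.paint (the override shown is illustrative):
image = RubyLLM.paint("a watercolor of Mount Fuji at dawn")                    # uses gpt-image-1 by default
image = RubyLLM.paint("a watercolor of Mount Fuji at dawn", model: "dall-e-3") # still overridable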
OpenAI-Compatible Server Support
Need to connect to a server that still uses the traditional 'system' role?
RubyLLM.configure do |config|
config.openai_use_system_role = true # Use 'system' role for compatibility
config.openai_api_base = "http://your-server:8080/v1"
end
By default, RubyLLM uses 'developer' role (matching OpenAI's current API). Enable this for older servers.
🏗️ Provider Architecture Overhaul
Providers are now classes, not modules. What this means for you:
- No more credential hell - Ollama users don't need OpenAI keys
- Per-instance configuration - Each provider manages its own state
- Cleaner codebase - No global state pollution
# Before: "Missing configuration for OpenAI" even if you only use Ollama
# Now: Just works
chat = RubyLLM.chat(model: 'llama3:8b', provider: :ollama) # No OpenAI key required
🚂 Tool Callbacks
We have a new callback for tool results. Works great even in Rails!
class Chat < ApplicationRecord
  acts_as_chat
end
# All chat methods are available on the record
chat = Chat.first
chat.with_headers(...)
chat.on_tool_call { |call| ... }
chat.on_tool_result { |result| ... } # New callback!
📚 Documentation Overhaul
Complete documentation reorganization with:
- Dark mode that follows your system preferences
- New Agentic Workflows guide for building intelligent systems
- Four clear sections: Getting Started, Core Features, Advanced, Reference
- Tool limiting patterns for controlling AI behavior
Check it out at rubyllm.com
🔍 Better Debugging
New stream debug mode for when things go sideways:
# Via environment variable
RUBYLLM_STREAM_DEBUG=true rails server
# Or in config
RubyLLM.configure do |config|
config.log_stream_debug = true
end
Shows every chunk, accumulator state, and parsing decision. Invaluable for debugging streaming issues.
🐛 Bug Fixes
- JRuby fixes
- Rails callback chaining fixes
- Anthropic models no longer incorrectly claim structured output support
- Test suite improved with proper provider limitation handling
- Documentation site Open Graph images fixed
Installation
gem 'ruby_llm', '1.6.0'
Full backward compatibility. Your existing code continues to work. New features are opt-in.
Merged PRs
- Wire up on_tool_call when using acts_as_chat rails integration by @agarcher in #318
- Switch default image generation model to gpt-image-1 by @tpaulshippy in #321
- Update which OpenAI models are considered "reasoning" by @gjtorikian in #334
New Contributors
- @agarcher made their first contribution in #318
- @gjtorikian made their first contribution in #334
Full Changelog: 1.5.1...1.6.0
1.5.1
RubyLLM 1.5.1: Model Registry Validation and Fixes 🛠️
A quick patch release introducing schema validation for our model registry and fixing configuration issues.
📋 Model Schema Validation
To prevent future model configuration errors, we've introduced models_schema.json that validates our model registry. This ensures consistency across all provider model definitions and caught the issues fixed in this release.
🐛 Bug Fixes
Model Capabilities Format
Fixed incorrect capabilities format for Mistral models that was caught by our new schema validation. The capabilities field was incorrectly set as a Hash instead of an Array:
# Before: capabilities: { "chat" => true }
# After: capabilities: ["chat"]
Image Generation Models
Google's Imagen models disappeared from Parsera (our model data source), causing them to lose their proper output modality. We've fixed this by explicitly setting their output modality to image.
🔧 Infrastructure Updates
- JRuby CI: Updated to JRuby 10.0.1.0 for better compatibility
- Appraisal: Automated generation of Appraisal gemfiles for cleaner multi-Rails version testing
Installation
gem 'ruby_llm', '1.5.1'
This is a patch release with full backward compatibility. If you're using Mistral models or Google's Imagen, this update is recommended.
Full Changelog: 1.5.0...1.5.1
1.5.0
RubyLLM 1.5.0: Mistral, Perplexity, and Generator Fixes 🚀
Two new providers join the family, bringing 68 new models and specialized capabilities. Plus critical bug fixes for Rails users.
🌟 New Providers
Mistral AI
Full support for Mistral's model lineup - 63 models including their latest releases:
RubyLLM.configure do |config|
config.mistral_api_key = ENV['MISTRAL_API_KEY']
end
# Chat with their flagship model
chat = RubyLLM.chat(model: 'mistral-large-latest')
# Use the efficient Ministral models
small_chat = RubyLLM.chat(model: 'ministral-8b-latest')
# Vision with Pixtral
vision = RubyLLM.chat(model: 'pixtral-12b-latest')
vision.ask("What's in this image?", with: "path/to/image.jpg")
# Embeddings
embed = RubyLLM.embed("Ruby metaprogramming", model: 'mistral-embed')
Streaming, tools, and all standard RubyLLM features work seamlessly.
Perplexity
Real-time web search meets LLMs with Perplexity's Sonar models:
RubyLLM.configure do |config|
config.perplexity_api_key = ENV['PERPLEXITY_API_KEY']
end
# Get current information with web search
chat = RubyLLM.chat(model: 'sonar-pro')
response = chat.ask("What are the latest Ruby 3.4 features?")
# Searches the web and returns current information
# Deep research mode for comprehensive answers
researcher = RubyLLM.chat(model: 'sonar-deep-research')
answer = researcher.ask("Compare Ruby's fiber scheduler implementations")
Choose from Sonar, Sonar Pro, Sonar Reasoning, and Deep Research models based on your needs.
🐛 Critical Rails Generator Fixes
Migration Order
The Rails generator now creates migrations in the correct dependency order:
- Chats
- Messages (depends on Chats)
- Tool Calls (depends on Messages)
No more foreign key errors during rails db:migrate.
PostgreSQL Detection
Fixed namespace collision with RubyLLM::ActiveRecord that prevented proper database detection. PostgreSQL users now correctly get jsonb columns instead of json.
📚 Documentation & Quality
- Rails guide enhanced with instant message display patterns using Action Cable
- Comprehensive feature list added to README
- Available models guide promoted to top-level navigation
- Model pricing and capabilities updated across all providers
- Fixed broken links after documentation reorganization
Installation
gem 'ruby_llm', '1.5.0'
Full backward compatibility maintained. Your existing code continues to work.
Full Changelog: 1.4.0...1.5.0