Tags: lpalbou/AbstractCore
Release 2.2.3: Fix [all] Extra for Complete Installation

🔧 Critical Installation Fix: Fixed abstractcore[all] so that it installs all modules, including development dependencies.

🎯 What Changed:
• The [all] extra now includes 12 dependency groups (it previously omitted dev, test, and docs)
• Complete coverage: all providers, all features, and all development tools
• pip install abstractcore[all] can now be used confidently for a complete installation

📦 Complete Dependencies Now Included:
• All providers (6): openai, anthropic, ollama, lmstudio, huggingface, mlx
• All features (3): embeddings, processing, server
• All development (3): dev (pytest, black, mypy, ruff), test (pytest-cov, responses), docs (mkdocs)

✅ Verification:
• All 12 referenced extras exist and are properly defined
• No circular dependencies or missing references
• Configuration tested and verified

🚀 Impact:
• No more partial installations or missing development tools
• A single command installs the complete AbstractCore environment
• Development-ready installation with testing and documentation tools
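An aggregate extra like [all] is typically declared with self-referencing extras in pyproject.toml. The sketch below is a hedged illustration of that technique using the group names from the release notes; the version pins and exact layout of the project's real pyproject.toml may differ:

```toml
[project]
name = "abstractcore"

[project.optional-dependencies]
# Individual groups (illustrative pins only; the real file defines all 12 groups)
openai = ["openai>=1.0"]
dev = ["pytest", "black", "mypy", "ruff"]
test = ["pytest-cov", "responses"]
docs = ["mkdocs"]

# The [all] extra aggregates every group by referencing the package's own
# extras, so `pip install abstractcore[all]` resolves all of them, dev
# tooling included.
all = [
    "abstractcore[openai,anthropic,ollama,lmstudio,huggingface,mlx]",
    "abstractcore[embeddings,processing,server]",
    "abstractcore[dev,test,docs]",
]
```

The self-reference pattern keeps [all] in sync by construction: adding a dependency to any named group automatically flows into [all] without editing two lists.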
Release 2.2.2: LLM-as-a-Judge Production Implementation

🎯 Major Addition: BasicJudge, a production-ready objective evaluator
• Structured assessments with 9 evaluation criteria
• Multiple-file support with global assessment synthesis
• Enhanced assessment structure with judge summary and source reference
• CLI with a simple `judge` command (console script entry point)
• Comprehensive documentation and real-world examples

🚀 Ready for Production:
• Context overflow prevention and error handling
• Chain-of-thought reasoning with consistent scoring (temperature 0.1)
• Pydantic integration with FeedbackRetry validation
• Full backward compatibility maintained
• Complete documentation in docs/basic-judge.md

📊 Assessment Features:
• 1-5 scoring scale with clear definitions
• Global assessment appears first for multi-file evaluations
• Optional --exclude-global flag for the original behavior
• Custom criteria support and reference-based evaluation
• JSON, plain text, and YAML output formats

🔧 Technical Excellence:
• Sequential file processing to avoid context overflow
• Graceful fallbacks and comprehensive error handling
• Production-grade architecture following AbstractCore patterns
• Extensive testing and documentation coverage
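The structured assessment described above (per-criterion 1-5 scores, reasoning, and a judge summary) can be sketched as a small data model. This is a hypothetical, stdlib-only illustration of the shape of such an assessment; the actual BasicJudge uses Pydantic models, and its field names and criteria may differ:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CriterionScore:
    """One evaluation criterion rated on the 1-5 scale from the release notes."""
    criterion: str
    score: int       # 1 (poor) .. 5 (excellent)
    reasoning: str   # chain-of-thought justification behind the score

    def __post_init__(self) -> None:
        # Enforce the 1-5 scale at construction time, analogous to
        # Pydantic field validation in the real implementation.
        if not 1 <= self.score <= 5:
            raise ValueError(f"score must be 1-5, got {self.score}")

@dataclass
class Assessment:
    """A per-source assessment: criterion scores plus an overall summary."""
    source: str                                   # file or document being judged
    scores: List[CriterionScore] = field(default_factory=list)
    summary: str = ""                             # judge's overall summary

    @property
    def overall(self) -> float:
        """Mean of criterion scores: the simplest possible aggregate."""
        return sum(s.score for s in self.scores) / len(self.scores)
```

For example, `Assessment("report.md", [CriterionScore("clarity", 4, "..."), CriterionScore("accuracy", 5, "...")])` yields an overall score of 4.5, and constructing a `CriterionScore` with a score of 6 raises `ValueError`.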
Version 2.1.0: Add vector embeddings support

🎯 Features:
• Vector embeddings with SOTA models (EmbeddingGemma, Granite, etc.)
• Smart caching and ONNX optimization
• Semantic search and RAG capabilities
• Event system integration
• Production-ready performance

🔧 Technical:
• Complete embeddings module in abstractllm/embeddings/
• 16 tests run against real models
• Documentation and examples
• Zero breaking changes

🚀 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
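Semantic search over embeddings reduces to nearest-neighbour lookup by cosine similarity, with a cache so repeated texts are not re-embedded. The sketch below is a stdlib-only illustration of that cache-then-search pattern, not the module's real API; `embed_fn` stands in for an actual embedding model (the real module wraps SOTA models with ONNX optimization):

```python
import math
from typing import Callable, List, Sequence, Tuple

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class EmbeddingIndex:
    """Tiny in-memory index illustrating caching plus semantic search."""

    def __init__(self, embed_fn: Callable[[str], List[float]]):
        self.embed_fn = embed_fn          # text -> vector; a real model in practice
        self.cache: dict = {}             # text -> vector, avoids re-embedding
        self.docs: List[Tuple[str, List[float]]] = []

    def _embed(self, text: str) -> List[float]:
        if text not in self.cache:
            self.cache[text] = self.embed_fn(text)
        return self.cache[text]

    def add(self, text: str) -> None:
        self.docs.append((text, self._embed(text)))

    def search(self, query: str, top_k: int = 3) -> List[str]:
        """Return the top_k stored texts most similar to the query."""
        qv = self._embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]
```

In a RAG pipeline the same two steps apply: embed the corpus once (cached), then at query time embed the question and retrieve the top-k most similar chunks as context.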