
Tags: blueman82/ai-counsel


v1.3.2

Modify .gitignore

• Tool: Edit
• Timestamp: 2025-10-22 19:37:10

v1.3.1

fix: make all file paths absolute for portable MCP server usage

- Fix decision graph database paths to resolve from ai-counsel directory when relative
- Fix transcript directory paths to resolve from ai-counsel directory when relative
- Pass server directory through engine to transcript manager
- Enables server to work from any working directory via MCP client

Fixes a user-reported issue where counsel only worked when executed from the ai-counsel directory.
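A minimal sketch of the path-resolution approach this commit describes, assuming relative paths are anchored to the server's own directory rather than the process's working directory (the names `SERVER_DIR` and `resolve_path` are illustrative, not the project's actual identifiers):

```python
from pathlib import Path

# Directory containing the MCP server code (e.g. the ai-counsel checkout).
SERVER_DIR = Path(__file__).resolve().parent

def resolve_path(path_str: str) -> Path:
    """Resolve a configured path. Absolute paths pass through unchanged;
    relative paths are anchored to the server directory, so the server
    behaves the same regardless of the MCP client's working directory."""
    path = Path(path_str)
    return path if path.is_absolute() else SERVER_DIR / path

# Relative config values now resolve consistently from any cwd.
db_path = resolve_path("decision_graph.db")
transcripts_dir = resolve_path("transcripts")
```

With this pattern, a database path or transcript directory configured as a bare relative name still lands next to the server code when an MCP client launches the server from an arbitrary directory.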

v1.3.0

chore: remove development files from git tracking

Remove internal development files that should be excluded per .gitignore:
- CACHE_IMPLEMENTATION_SUMMARY.md
- PHASE_5_COMPLETION_SUMMARY.md
- TASK_20_SUMMARY.md
- decision_graph.db (user-generated, not shared)
- tests/e2e/convergence_test_results.md

The files remain on disk locally but no longer clutter the repository,
giving users a cleaner experience on git clone/pull.

🤖 Generated with Claude Code

v1.2.0

docs: add real-world deliberation examples

Add comprehensive guide with 6 production deliberation examples:
- Event sourcing for audit trails
- Microservices vs monolith architecture decisions
- Framework selection (React vs Vue)
- Database indexing strategies
- Testing framework migration
- API rate limiting design

Includes performance metrics, model combination recommendations, and
cost/benefit analysis for each deliberation scenario. Supports community
discussions on use cases and best practices.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

v1.1.0

feat: retrospective announcement of local model support (v1.1.0)

Add comprehensive documentation and marketing for zero-cost, privacy-preserving
local model inference capabilities (Ollama, LM Studio, llamacpp).

Changes:
✓ Create /docs/http-adapters/ directory with 5 comprehensive guides:
  - intro.md: Value proposition, cost analysis, architecture
  - ollama-quickstart.md: 5-min setup guide
  - lmstudio-quickstart.md: GUI-based alternative
  - llamacpp-quickstart.md: Maximum performance setup
  - openrouter-guide.md: Cloud alternative for comparison

✓ Add demo_local_models.py: Working example showing local + cloud mixing
✓ Update README.md:
  - Add 2 feature bullets (💰 zero-cost, 🔐 privacy)
  - Add "Local Model Inference" subsection with cost comparison
  - Link to new documentation

✓ Enhance CHANGELOG.md v1.1.0 entry:
  - Expand from generic to comprehensive feature description
  - Highlight zero-cost and privacy benefits
  - Add cost analysis and configuration examples
  - Include performance metrics

✓ Update .gitignore:
  - Add exceptions for /docs/http-adapters/ (new user-facing docs)
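A sketch of what the .gitignore exception could look like, assuming docs/ was previously ignored wholesale (the exact patterns in the repository may differ; note that git cannot re-include files whose parent directory is excluded, hence the paired `!` rules):

```gitignore
# Internal docs stay ignored by default:
docs/*
# Re-include the new user-facing adapter guides:
!docs/http-adapters/
!docs/http-adapters/**
```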

Key Value Propositions Highlighted:
  • 67-97% cost savings vs cloud APIs
  • Complete data privacy (no external calls)
  • No rate limits or quota exhaustion
  • HIPAA/SOC2 compliance (self-hosted)
  • Hybrid deployment options for cost optimization

This retrospective announcement follows the same 6-layer strategy as the
Decision Graph Memory feature, ensuring consistent discoverability.

🤖 Generated with Claude Code