An AI-powered music theory and guitar learning platform built with .NET 9, React, and cutting-edge AI technologies.
See AGENTS.md for complete setup and development guidelines.
# Set up the development environment
pwsh Scripts/setup-dev-environment.ps1

# Start all services with the Aspire dashboard
pwsh Scripts/start-all.ps1 -Dashboard

# Run tests
pwsh Scripts/run-all-tests.ps1

- AGENTS.md - Repository guidelines, project structure, and development workflow
- AI Future Roadmap - Vision for AI-powered features and multimodal learning
- AI-Ready API Implementation - API design principles for AI agents
- AI Music Generation Services - Music generation and synthesis capabilities
- ChatGPT-LLMs for Music Generation - LLM integration for music theory
- Music Theory Engine - Comprehensive chord, scale, and progression analysis
- AI Chatbot - Interactive music theory assistant powered by GPT-4
- Semantic Search - Vector-based chord and scale discovery with MongoDB
- Real-Time Analysis - Hand pose detection and guitar technique coaching (planned)
- Voice Integration - Voice-enabled tutoring and commands (planned)
- Monadic APIs - Type-safe error handling with Option/Result/Try patterns (sketched below)
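For context, here is a minimal sketch of the Option/Result style, assuming a hypothetical `Result<T>` and `ChordParser`; the actual abstractions in GA.Business.Core may differ in names and shape:

```csharp
using System;

// Hypothetical Result<T> illustrating the monadic error-handling style.
// The real types in GA.Business.Core may look different.
public readonly struct Result<T>
{
    private readonly T? _value;

    public string? Error { get; }
    public bool IsSuccess => Error is null;

    private Result(T? value, string? error)
    {
        _value = value;
        Error = error;
    }

    public static Result<T> Success(T value) => new(value, null);
    public static Result<T> Failure(string error) => new(default, error);

    // Map chains computations without throwing; a failure short-circuits.
    public Result<TOut> Map<TOut>(Func<T, TOut> map) =>
        IsSuccess ? Result<TOut>.Success(map(_value!)) : Result<TOut>.Failure(Error!);

    // Match forces callers to handle both outcomes explicitly.
    public TOut Match<TOut>(Func<T, TOut> onSuccess, Func<string, TOut> onFailure) =>
        IsSuccess ? onSuccess(_value!) : onFailure(Error!);
}

// Hypothetical usage: parsing a chord symbol returns a Result instead of throwing.
public static class ChordParser
{
    public static Result<string[]> Parse(string symbol) =>
        string.IsNullOrWhiteSpace(symbol)
            ? Result<string[]>.Failure("Empty chord symbol")
            : Result<string[]>.Success(symbol == "Cmaj7"
                ? new[] { "C", "E", "G", "B" }
                : new[] { symbol }); // placeholder: real parsing lives in the music theory engine
}
```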
ga/
├── Apps/ # Runtime applications
│ ├── ga-server/GaApi/ # Main REST/GraphQL API
│ ├── GuitarAlchemistChatbot/ # AI chatbot service
│ └── ga-client/ # React frontend
├── Common/ # Core libraries
│ ├── GA.Business.Core/ # Business logic
│ ├── GA.MusicTheory.DSL/ # Music theory domain
│ └── GA.Data.MongoDB/ # Data access
├── Tests/ # Test suites
├── docs/ # Documentation
└── Scripts/ # Build and deployment scripts
Guitar Alchemist is evolving into a comprehensive AI-powered music learning platform. See our *AI Future Roadmap* for details on:
- Phase 2 (Next 3-6 months): Real-time multimodal intelligence with Vision Agents, OpenVoice v2, and SpeechBrain
- Phase 3 (6-12 months): Audio analysis, generative music AI, and collaborative jamming
- Phase 4 (12+ months): Tutorial generation and adaptive learning platform
- .NET 9 - Backend services and APIs
- React + TypeScript - Frontend UI
- MongoDB - Database with vector search
- Aspire - Cloud-native orchestration (a wiring sketch follows this list)
- OpenAI GPT-4 - AI chatbot and semantic analysis
- Semantic Kernel - AI orchestration
- Python/FastAPI - AI microservices (pose detection, sound synthesis)
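To show how these pieces could fit together, here is a rough Aspire AppHost sketch; the resource names, `Projects.*` references, and npm path are assumptions for illustration, not the repository's actual wiring:

```csharp
// Hypothetical Aspire AppHost (assumes Aspire.Hosting.MongoDB and Aspire.Hosting.NodeJs).
var builder = DistributedApplication.CreateBuilder(args);

// MongoDB instance holding the chord/scale data used by vector search.
var mongo = builder.AddMongoDB("mongo");

// Main REST/GraphQL API with a reference to MongoDB.
var api = builder.AddProject<Projects.GaApi>("ga-api")
                 .WithReference(mongo);

// AI chatbot service, reading the same database and calling the API.
builder.AddProject<Projects.GuitarAlchemistChatbot>("chatbot")
       .WithReference(mongo)
       .WithReference(api);

// React front end run through its npm start script, pointed at the API.
builder.AddNpmApp("ga-client", "../Apps/ga-client")
       .WithReference(api)
       .WithHttpEndpoint(env: "PORT");

builder.Build().Run();
```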
- Prerequisites: .NET 9 SDK, Node.js 20+, MongoDB, Docker
- Setup: Run pwsh Scripts/setup-dev-environment.ps1
- Development: Run pwsh Scripts/start-all.ps1 -Dashboard
- Access:
  - API: https://localhost:7001
  - Aspire Dashboard: https://localhost:15001
  - Chatbot: https://localhost:7002
# Run all tests
dotnet test AllProjects.sln

# Backend only
pwsh Scripts/run-all-tests.ps1 -BackendOnly

# Playwright UI tests
pwsh Scripts/run-all-tests.ps1 -PlaywrightOnly

See AGENTS.md for:
- Code style guidelines
- Commit conventions
- Testing requirements
- Pull request process
[Add license information here]
- Aspire Dashboard - Service monitoring
- Jaeger - Distributed tracing (see the tracing sketch below)
- Mongo Express - Database UI
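The traces shown in Jaeger and the Aspire dashboard typically come from OpenTelemetry's OTLP exporter. Below is a generic sketch of that wiring in a minimal ASP.NET Core service; the service name and package set are assumptions, not the repository's actual ServiceDefaults configuration:

```csharp
// Generic OpenTelemetry tracing setup (assumes OpenTelemetry.Extensions.Hosting,
// OpenTelemetry.Instrumentation.AspNetCore/Http, and the OTLP exporter packages).
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("ga-api")) // hypothetical service name
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // spans for incoming HTTP requests
        .AddHttpClientInstrumentation()   // spans for outgoing calls (e.g. OpenAI)
        .AddOtlpExporter());              // OTLP endpoint consumed by Jaeger / Aspire

var app = builder.Build();
app.MapGet("/health", () => "ok"); // trivial endpoint so the sketch runs end to end
app.Run();
```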