A sophisticated Gradio-based application that automatically discovers, categorizes, and launches AI/ML projects, with AI-generated launch commands. Features comprehensive project analysis using Qwen models, automatic environment detection, persistent caching, background scanning, and one-click project launching.
- 🔧 AI-Powered Launch Commands: Uses Qwen models to intelligently analyze projects and generate optimal launch commands
- 🔍 Automatic Project Discovery: Scans specified directories for AI/ML projects with smart detection
- 🐍 Environment Detection: Automatically detects Python environments (conda, venv, poetry, pipenv)
- 🤖 Dual AI Analysis: Uses both Ollama (Granite) and Qwen models for comprehensive project understanding
- 🎨 Visual Project Icons: Generates unique colored icons for each project
- 🚀 One-Click Launching: Launches projects in their proper environments with AI-generated commands
- 📱 Modern Web Interface: Responsive Gradio interface with multiple views
- 💾 Persistent Storage: SQLite database stores all project metadata, launch commands, and analysis
- 🔄 Background Scanning: Continuous monitoring for new projects and changes
- ⚡ Incremental Updates: Only processes new or changed projects for efficiency
- 🏷️ Smart Caching: Preserves AI-generated descriptions and launch commands
- 🛠️ Custom Launcher Creation: Automatically creates editable custom launcher scripts
- 🌐 API Server: RESTful API for external integrations
- 📊 Database Management UI: Comprehensive database viewer and management interface
- Intelligent Analysis: Qwen models analyze project structure, dependencies, and documentation
- Multi-Option Analysis: Provides primary launch method with alternatives when available
- Custom Launcher Scripts: Automatically creates editable bash scripts for complex projects
- Confidence Scoring: AI provides confidence levels for launch command recommendations
- Fallback Systems: Multiple fallback mechanisms ensure every project gets a launch method
- User Override: Easy custom launcher editing with template generation
- Tabbed Interface: Integrated App List, Database Management, and Settings in one application
- API Integration: Full RESTful API for programmatic access
- Database UI: Comprehensive project database management and visualization
- Command Line Options: Flexible startup configuration with verbose logging
- Custom Launchers Directory: User-editable launcher scripts in `custom_launchers/`
- Force Re-analysis: Re-run AI analysis for improved launch commands
- Project Status Tracking: Comprehensive status and health monitoring
- Launch History: Track successful launches and common patterns
The application uses a modular architecture with clear separation of concerns:
- `launcher.py` - Main unified launcher with database integration, AI features, and tabbed interface
- `qwen_launch_analyzer.py` - AI-powered launch command generation using Qwen models
- `database_ui.py` - Comprehensive database management interface
- `launch_api_server.py` - RESTful API server for external integrations
- `settings_ui.py` - Settings configuration interface
- `project_database.py` - SQLite database management and operations
- `background_scanner.py` - Background scanning and incremental updates
- `launcher_ui.py` - Core UI components and project management logic
- `project_scanner.py` - Project discovery and identification
- `environment_detector.py` - Python environment detection
- `ollama_summarizer.py` - AI-powered project descriptions (Granite models)
- `icon_generator.py` - Visual icon generation
- `logger.py` - Comprehensive logging system
- `launch.py` - Launch utilities and fallback methods
- Python 3.8+
- Ollama installed and running (for project descriptions)
- Ollama with Qwen models (for launch command generation):
  - `qwen3:8b` (primary, fast analysis)
  - `qwen3:14b` (advanced analysis for complex projects)
- Granite models (for descriptions):
  - `granite3.1-dense:8b`
  - `granite-code:8b`
- Linux environment with `gnome-terminal` (for launching)
```
gradio>=5.38.1
Pillow>=9.0.0
pathlib  # part of the standard library since Python 3.4; this pin can be removed
pandas>=1.3.0
```
1. Clone and Setup

   ```bash
   git clone <repository-url>
   cd Launcher
   pip install -r requirements.txt
   ```

2. Install Required AI Models

   ```bash
   # Qwen models for launch command generation
   ollama pull qwen3:8b
   ollama pull qwen3:14b

   # Granite models for project descriptions
   ollama pull granite3.1-dense:8b
   ollama pull granite-code:8b
   ```

3. Create Configuration: create a `config.json` file (ignored by git for privacy):

   ```json
   {
     "index_directories": [
       "/path/to/your/ai/projects",
       "/another/project/directory"
     ]
   }
   ```

4. Launch the Application

   ```bash
   ./launcher.sh
   ```

   Or run directly with options:

   ```bash
   python3 launcher.py             # Default: unified interface on port 7870
   python3 launcher.py --port 8080 # Custom port
   python3 launcher.py --verbose   # Verbose logging
   python3 launcher.py --no-api    # Disable API server
   ```

5. Access the Interface
   - Main Interface: http://localhost:7870 (App List, Database, Settings tabs)
   - API Server: http://localhost:7871 (if enabled)
- Configuration: Configure directories in the Settings tab if not done during setup
- Initial Discovery: Comprehensive scan of your configured directories
- Database Creation: Creates `projects.db` with all discovered projects
- AI Analysis: Both Qwen and Granite models analyze projects in the background
- Custom Launcher Generation: AI creates custom launcher scripts for each project
- Immediate Use: Projects are immediately launchable with AI-generated commands
The unified interface provides three main tabs:
- 📱 App List: Browse and launch your projects with search and filtering
- 🗄️ Database: Comprehensive database management and inspection tools
- ⚙️ Settings: Configure directories, manage settings, and view system status
The system uses Qwen models to intelligently analyze each project:
- Structure Analysis: Examines files, dependencies, and project layout
- Documentation Review: Reads README files and comments for launch hints
- Pattern Recognition: Identifies common project types and frameworks
- Command Generation: Creates optimal launch commands with confidence scores
- Custom Launcher Creation: Generates editable bash scripts in `custom_launchers/`
- Alternative Options: Provides backup launch methods when multiple options exist
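When no AI model is available, pattern recognition falls back to simple heuristics. A minimal sketch of the idea, guessing a launch command from well-known marker files (the mapping below is illustrative, not the launcher's actual rules):

```python
# Heuristic fallback: guess a launch command from marker files.
from pathlib import Path
from typing import Optional

MARKERS = [
    ("app.py", "python3 app.py"),
    ("main.py", "python3 main.py"),
    ("manage.py", "python3 manage.py runserver"),        # Django-style project
    ("streamlit_app.py", "streamlit run streamlit_app.py"),
]

def heuristic_launch_command(project_dir: str) -> Optional[str]:
    """Return a best-guess launch command, or None if nothing matches."""
    root = Path(project_dir)
    for filename, command in MARKERS:
        if (root / filename).exists():
            return command
    return None
```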
Every project gets a custom launcher script:
- Location: `custom_launchers/<project-name>.sh`
- Editable: Fully customizable by users
- AI-Generated: Initially populated with AI analysis
- Environment Variables: Includes guidance for env var configuration
- Executable: Ready to run with proper permissions
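As a rough illustration of how such a script could be generated and made executable (the template the launcher actually writes may differ):

```python
# Sketch: write a custom launcher template and mark it executable.
import stat
from pathlib import Path

TEMPLATE = """#!/bin/bash
# Custom launcher for {name} (AI-generated; edit freely)
# Export any environment variables your project needs here, e.g.:
# export MY_API_KEY=...
cd "{project_dir}"
{command}
"""

def write_custom_launcher(name, project_dir, command, out_dir="custom_launchers"):
    Path(out_dir).mkdir(exist_ok=True)
    script = Path(out_dir) / f"{name}.sh"
    script.write_text(TEMPLATE.format(name=name, project_dir=project_dir,
                                      command=command))
    # Set the execute bits so the script is ready to run
    script.chmod(script.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP)
    return script
```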
- Force Re-analysis: Trigger new AI analysis for improved launch commands
- Mark as Dirty: Queue projects for re-processing
- Manual Scanning: Immediate directory scans for new projects
- Database Management: Full CRUD operations via database UI
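The "mark as dirty" operation amounts to a single SQL update. A sketch using the stdlib `sqlite3` module, with the `dirty_flag` column name taken from the queries elsewhere in this README (the rest of the schema is an assumption):

```python
# Queue every project for re-analysis on the next scanner pass.
import sqlite3

def mark_all_dirty(db_path="projects.db"):
    """Set dirty_flag on all projects; returns the number queued."""
    with sqlite3.connect(db_path) as conn:  # commits on success
        cur = conn.execute("UPDATE projects SET dirty_flag = 1")
        return cur.rowcount
```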
Create config.json in the project root:
```json
{
  "index_directories": [
    "/media/user/AI/projects",
    "/home/user/git/ai-projects",
    "/opt/ml-projects"
  ]
}
```
- Location: `projects.db` (SQLite)
- Auto-creation: Database created on first run
- Schema: Projects, scan sessions, launch analytics, and metadata
- Backup: Recommended to back up `projects.db` periodically
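Reading this configuration takes only a few lines of standard-library Python. A minimal sketch (the launcher's actual loader may validate more than shown here):

```python
# Load the configured index directories, skipping any that don't exist.
import json
from pathlib import Path

def load_index_directories(config_path="config.json"):
    if not Path(config_path).exists():
        return []  # fall back to configuring directories in the Settings tab
    with open(config_path) as f:
        config = json.load(f)
    return [d for d in config.get("index_directories", []) if Path(d).is_dir()]
```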
The system automatically uses available models:
- Qwen Models: Primary for launch command generation
- Granite Models: Secondary for project descriptions
- Fallback: Heuristic analysis if AI models unavailable
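The selection order above can be sketched as a small helper. Listing installed models (e.g. via `ollama list`) is left to the caller; this only applies the preference order, with model names taken from the Requirements section:

```python
# Pick the first available Qwen model, or signal heuristic fallback.
PREFERRED_QWEN = ["qwen3:8b", "qwen3:14b"]

def select_launch_model(installed):
    """Return the first preferred Qwen model present in `installed`,
    or None to indicate heuristic analysis should be used instead."""
    for model in PREFERRED_QWEN:
        if model in installed:
            return model
    return None
```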
- Quick Scan: Every 3 minutes (new projects)
- Full Scan: Every 60 minutes (comprehensive verification)
- AI Re-analysis: Every 24 hours (refresh launch commands)
- Manual Triggers: Available via UI buttons
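The schedule above boils down to a timed loop. A simplified sketch (the real `background_scanner.py` may structure this differently, e.g. with separate threads per interval):

```python
# Run quick scans on a short interval and full scans on a long one,
# until stop_event is set.
import threading
import time

QUICK_SCAN_INTERVAL = 3 * 60   # seconds: look for new projects
FULL_SCAN_INTERVAL = 60 * 60   # seconds: comprehensive verification

def scan_loop(quick_scan, full_scan, stop_event,
              quick_interval=QUICK_SCAN_INTERVAL,
              full_interval=FULL_SCAN_INTERVAL):
    last_full = time.monotonic()
    while not stop_event.wait(quick_interval):  # wakes promptly on stop
        quick_scan()
        if time.monotonic() - last_full >= full_interval:
            full_scan()
            last_full = time.monotonic()
```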
The integrated API server runs on port 7871 by default:

```bash
# Example API calls
curl http://localhost:7871/api/projects        # List all projects
curl http://localhost:7871/api/projects/scan   # Trigger scan
curl -X POST http://localhost:7871/api/projects/launch \
  -H "Content-Type: application/json" \
  -d '{"project_path": "/path/to/project"}'    # Launch project
```
```bash
# View all projects
sqlite3 projects.db "SELECT name, launch_command, launch_confidence FROM projects;"

# Force re-analysis
sqlite3 projects.db "UPDATE projects SET dirty_flag = 1;"

# View launch analytics
sqlite3 projects.db "SELECT * FROM scan_sessions ORDER BY start_time DESC LIMIT 10;"

# Custom launcher usage
sqlite3 projects.db "SELECT name, launch_type FROM projects WHERE launch_type = 'custom_launcher';"
```
```bash
# List all custom launchers
ls custom_launchers/

# Edit a custom launcher
nano custom_launchers/my-project.sh

# Test a custom launcher
./custom_launchers/my-project.sh

# Force regeneration of a custom launcher:
# use the "Force Re-analyze" button in the UI
```
- Application Log: `logs/ai_launcher.log`
- Ollama Transactions: `logs/ollama_transactions.log`
- API Access Log: `logs/api_access.log`
- Launch History: Tracked in database
```bash
# Test Qwen launch analyzer (if available)
python3 -c "from qwen_launch_analyzer import QwenLaunchAnalyzer; print('OK')"

# Test database operations
python3 -c "from project_database import db; print(db.get_stats())"

# Test project scanning
python3 -c "from project_scanner import ProjectScanner; print('OK')"

# Test icon generation
python3 icon_generator.py
```

```bash
# View database structure
sqlite3 projects.db ".schema"

# Check project count
sqlite3 projects.db "SELECT COUNT(*) FROM projects;"

# View recent AI analysis
sqlite3 projects.db "SELECT name, launch_analysis_method, launch_confidence FROM projects WHERE launch_analyzed_at > strftime('%s', 'now', '-1 day');"
```

```
Launcher/
├── launcher.py              # Main unified launcher (App List + Database + Settings)
├── qwen_launch_analyzer.py  # AI launch command generation
├── database_ui.py           # Database management interface
├── launch_api_server.py     # RESTful API server
├── settings_ui.py           # Settings configuration interface
├── project_database.py      # Database management and operations
├── background_scanner.py    # Background scanning and updates
├── launcher_ui.py           # Core UI components
├── project_scanner.py       # Project discovery logic
├── environment_detector.py  # Environment detection
├── ollama_summarizer.py     # AI project descriptions
├── icon_generator.py        # Icon generation
├── logger.py                # Comprehensive logging
├── launch.py                # Launch utilities
├── launcher.sh              # Launch script
├── config.json              # Configuration (git-ignored)
├── requirements.txt         # Python dependencies
├── projects.db              # SQLite database (created at runtime)
├── custom_launchers/        # Custom launcher scripts (git-ignored)
│   ├── project1.sh
│   ├── project2.sh
│   └── ...
├── logs/                    # Log files (git-ignored)
│   ├── ai_launcher.log
│   ├── ollama_transactions.log
│   └── api_access.log
└── README.md                # This file
```
Each project receives intelligent analysis:
- Multi-Model Analysis: Qwen models analyze structure and documentation
- Confidence Scoring: AI provides 0.0-1.0 confidence ratings
- Alternative Methods: Multiple launch options when available
- Custom Script Generation: Editable bash scripts for complex setups
- Environment Integration: Automatic environment activation
- Pattern Recognition: Recognizes common frameworks and tools
Each project displays:
- Unique Icon: Generated with consistent colors
- Environment Badge: Shows detected Python environment
- Status Indicators: Up-to-date status, git repository, AI confidence
- Launch Information: AI-generated command and confidence level
- Custom Launcher: Indicates if custom script is available
- Last Analysis: Timestamp of AI analysis
The launcher automatically:
- Detects Environment: Identifies conda, venv, poetry, pipenv
- Activates Environment: Sets up environment before launching
- Finds Entry Points: Identifies main scripts and executables
- Handles Complex Projects: Supports nested structures and frameworks
- Creates Custom Scripts: Generates bash scripts for complex setups
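Environment detection is largely a matter of checking for marker files. A minimal sketch of the idea (the real `environment_detector.py` may check additional signals such as interpreter paths or conda metadata):

```python
# Detect a project's Python environment type from marker files.
from pathlib import Path

def detect_environment(project_dir):
    root = Path(project_dir)
    if (root / "environment.yml").exists():
        return "conda"
    if (root / "poetry.lock").exists():
        return "poetry"
    if (root / "Pipfile").exists():
        return "pipenv"
    for venv_name in ("venv", ".venv"):
        if (root / venv_name / "pyvenv.cfg").exists():
            return "venv"
    return None  # no recognizable Python environment
```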
The unified launcher provides a single interface with multiple tabs:
| Interface Component | Port | Description |
|---|---|---|
| Main Interface | 7870 | Unified tabbed interface (App List + Database + Settings) |
| API Server | 7871 | RESTful API for external integrations |
| Feature | Simple Launch (`launch.py`) | Unified Launcher (`launcher.py`) |
|---|---|---|
| AI Launch Commands | Fallback only | Qwen-powered |
| Project Discovery | Manual scan | Background + Manual |
| Database Storage | None | SQLite with history |
| Custom Launchers | None | Auto-generated |
| API Server | None | Integrated |
| Multiple Interfaces | No | Tabbed (App List + Database + Settings) |
| Session Persistence | None | Full persistence |
| Port | 7860 | 7870-7871 |
- AI Models Not Available: Install Qwen and Granite models via Ollama
- Custom Launchers Not Working: Check permissions: `chmod +x custom_launchers/*.sh`
- Database Errors: Delete `projects.db` to reset (requires full rescan)
- Background Scanner Issues: Check logs for threading or permission errors
- Launch Command Failures: Edit custom launcher scripts in `custom_launchers/`
- Low Confidence: Try "Force Re-analyze" or edit the custom launcher script
- Wrong Commands: Edit the generated script in `custom_launchers/`
- Missing Models: Ensure Qwen models are installed: `ollama list`
- Timeout Errors: Check the Ollama service: `ollama serve`
- Large Directories: Consider excluding non-project subdirectories
- AI Performance: Ensure sufficient RAM for Qwen models (8GB+ recommended)
- Database Size: Regularly clean up old scan sessions
- Custom Launchers: Use custom scripts for consistently problematic projects
Enable detailed logging:
```bash
# Verbose logging
python3 launcher.py --verbose

# View real-time logs
tail -f logs/ai_launcher.log

# View AI model interactions
tail -f logs/ollama_transactions.log

# Database inspection
sqlite3 projects.db ".tables"
```
- Follow the cursor rules for project structure
- Keep UI components in feature modules (`*_ui.py`)
- Maintain separation between core logic and UI
- Add comprehensive logging for new features
- Test AI integrations with fallback mechanisms
- Update database schema carefully (consider migrations)
- Document new API endpoints
This project is open source. See LICENSE file for details.
- Built with Gradio for responsive web interfaces
- Uses Ollama for local AI inference
- Powered by Qwen models for intelligent launch command generation
- Enhanced with IBM Granite models for project descriptions
- SQLite for efficient local storage and persistence
- Threading and async patterns for responsive background processing