# labAgent

A multi-agent system for laboratory automation and research, specializing in automated research paper monitoring, device control, and AI-powered analysis for condensed matter physics labs.
## Installation

```bash
# Clone the repository
git clone https://github.com/caidish/labAgent.git
cd labAgent

# Install dependencies
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env
```

Edit the `.env` file with your API keys:
```bash
# Required for ArXiv Daily Updates and MCP Tools
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-5

# Optional
GOOGLE_API_KEY=your_google_api_key_here
GEMINI_MODEL=gemini-2.0-flash-exp
DEBUG=false
LOG_LEVEL=INFO
```

## Usage

```bash
# Web Interface (Recommended)
streamlit run lab_agent/web/app.py

# Command Line Interface
python -m lab_agent.main
```

## Features

- Automated Scraping: Fetches latest papers from arXiv cond-mat/new
- AI-Powered Scoring: GPT-4 rates paper relevance (1-3 priority levels)
- Smart Filtering: Focuses on 2D materials, graphene, TMDs, quantum devices
- Beautiful Reports: Generates HTML reports with priority sections
- Web Interface: Manual trigger, report viewing, and management
- No Database: Simple file-based storage
Usage: Navigate to the "ArXiv Daily" tab → Click "Generate Daily Report"
- MCP Server for Microscope Control: Nikon glovebox microscope automation
- Deep Learning Tool Integration: Automated flake analysis and scoring
- Multi-Agent Coordination: OpenAI and Google Gemini integration
- Real-time Communication: WebSocket support for agent coordination
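As a rough illustration of the 1-3 priority levels mentioned above, a purely keyword-based fallback scorer might look like the sketch below. This is hypothetical: the actual package scores papers with GPT, and the keyword set, function name, and thresholds here are invented for illustration.

```python
# Hypothetical keyword-based fallback scorer; the real pipeline uses GPT.
# Priority meaning assumed: 1 = high relevance, 3 = low relevance.
KEYWORDS = {"graphene", "2d materials", "tmd", "quantum devices",
            "van der waals heterostructures"}


def score_paper(title: str, abstract: str) -> int:
    """Return a 1-3 priority from simple keyword matching."""
    text = f"{title} {abstract}".lower()
    hits = sum(1 for kw in KEYWORDS if kw in text)
    if hits >= 2:
        return 1  # highly relevant
    if hits == 1:
        return 2  # possibly relevant
    return 3      # low priority


print(score_paper("Twisted bilayer graphene",
                  "Moire flat bands in 2D materials"))  # → 1
```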
## Project Structure

```
labAgent/
├── lab_agent/                        # Main package
│   ├── agents/                       # Agent implementations
│   │   ├── base_agent.py             # Abstract base class
│   │   └── arxiv_daily_agent.py      # ArXiv monitoring agent
│   ├── tools/                        # Agent capabilities
│   │   ├── arxiv_daily_scraper.py    # Web scraping for ArXiv
│   │   ├── paper_scorer.py           # GPT-4 paper scoring
│   │   ├── daily_report_generator.py # HTML/JSON reports
│   │   ├── web_scraper.py            # General web scraping
│   │   └── arxiv_parser.py           # ArXiv API integration
│   ├── utils/                        # Utilities
│   │   ├── config.py                 # Environment configuration
│   │   └── logger.py                 # Logging setup
│   ├── config/                       # Configuration files
│   │   ├── interestKeywords.txt      # Research interest keywords
│   │   └── promptArxivRecommender.txt # GPT scoring prompts
│   └── web/                          # Streamlit interface
│       └── app.py                    # Main web application
├── requirements.txt                  # Python dependencies
├── setup.py                          # Package configuration
├── .env.example                      # Environment template
├── CLAUDE.md                         # Project context for Claude Code
├── TESTING_GUIDE.md                  # Comprehensive testing protocol
└── README.md                         # This file
```
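The `agents/` layout above implies a small shared interface in `base_agent.py`. A hedged sketch of what that abstract class might look like, reusing the `initialize`/`process_task` method names from the programmatic usage example later in this README (everything else is assumed):

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseAgent(ABC):
    """Sketch of an abstract agent; the real base_agent.py may differ."""

    def __init__(self, config: Dict[str, Any]):
        self.config = config

    async def initialize(self) -> None:
        """Set up tools, API clients, etc. Override as needed."""

    @abstractmethod
    async def process_task(self, task: Dict[str, Any]) -> Dict[str, Any]:
        """Handle one task dict and return a result dict."""


class EchoAgent(BaseAgent):
    """Toy subclass to show the contract."""

    async def process_task(self, task):
        return {"handled": task["type"]}


result = asyncio.run(EchoAgent({}).process_task({"type": "ping"}))
print(result)  # {'handled': 'ping'}
```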
## Testing

Follow the comprehensive testing guide in `TESTING_GUIDE.md`:

```bash
# Quick test - verify system works
streamlit run lab_agent/web/app.py
# → Go to ArXiv Daily tab → Generate Daily Report

# Full testing protocol
cat TESTING_GUIDE.md
```

## Customization

Edit `lab_agent/config/interestKeywords.txt` to customize paper filtering:
```
# 2D Materials & Graphene
2D materials
graphene
monolayer graphene
transition metal dichalcogenides
van der Waals heterostructures
# ... add your research areas
```
Modify `lab_agent/config/promptArxivRecommender.txt` for different evaluation criteria.
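The keyword file format shown above uses `#` for comment lines. A small loader for that format could look like this sketch (the function name is hypothetical, not the package's actual API):

```python
import io


def load_keywords(fp) -> list:
    """Read one keyword per line, skipping blank lines and '#' comments."""
    keywords = []
    for line in fp:
        line = line.strip()
        if line and not line.startswith("#"):
            keywords.append(line)
    return keywords


# Works on any file-like object; StringIO used here for a self-contained demo.
sample = io.StringIO("# 2D Materials & Graphene\n2D materials\ngraphene\n\n")
print(load_keywords(sample))  # ['2D materials', 'graphene']
```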
## Programmatic Usage

```python
import asyncio

from lab_agent.agents.arxiv_daily_agent import ArxivDailyAgent


async def main():
    # Initialize agent
    agent = ArxivDailyAgent({'reports_dir': './reports'})
    await agent.initialize()

    # Generate report
    task = {
        'type': 'generate_daily_report',
        'url': 'https://arxiv.org/list/cond-mat/new'
    }
    result = await agent.process_task(task)
    print(f"Generated report with {result['total_papers']} papers")


asyncio.run(main())
```

## Web Interface

- Start Application: `streamlit run lab_agent/web/app.py`
- Navigate: Go to the "ArXiv Daily" tab
- Generate: Click "Generate Daily Report"
- View: Expand the report section to see results
- Manage: Use "Clear All Reports" to clean up
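For unattended daily runs, the async API above could be driven by a simple scheduler loop. This is a generic sketch, not a feature of the package; the job function and interval are placeholders (a real deployment might use `interval_s=86400` for daily runs):

```python
import asyncio


async def run_periodically(job, interval_s: float, max_runs: int) -> int:
    """Await `job` every `interval_s` seconds, up to `max_runs` times."""
    runs = 0
    for _ in range(max_runs):
        await job()
        runs += 1
        await asyncio.sleep(interval_s)
    return runs


async def fake_report_job():
    # Stand-in for agent.process_task(...)
    print("report generated")


# Short interval only so the demo finishes quickly.
count = asyncio.run(run_periodically(fake_report_job, 0.01, 3))
print(count)  # 3
```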
## Installing as a Package

```bash
pip install -e .

# Use console entry points
lab-agent      # CLI version
lab-agent-web  # Web version
```

## Design Principles

- Async-first: All agents built on asyncio
- Modular Design: Separate agents, tools, and utilities
- Configuration-driven: Environment-based settings
- Tool-based: Agents use composable tools for capabilities
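"Configuration-driven" here means settings come from environment variables. A minimal sketch of how `utils/config.py` might read them, using the variable names from the `.env` template above (the `Config` shape and defaults are assumptions):

```python
import os
from dataclasses import dataclass


@dataclass
class Config:
    openai_api_key: str
    openai_model: str
    debug: bool
    log_level: str


def load_config() -> Config:
    """Build a Config from environment variables with fallback defaults."""
    return Config(
        openai_api_key=os.getenv("OPENAI_API_KEY", ""),
        openai_model=os.getenv("OPENAI_MODEL", "gpt-5"),
        debug=os.getenv("DEBUG", "false").lower() == "true",
        log_level=os.getenv("LOG_LEVEL", "INFO"),
    )


os.environ["DEBUG"] = "true"
os.environ["LOG_LEVEL"] = "DEBUG"
cfg = load_config()
print(cfg.debug, cfg.log_level)  # True DEBUG
```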
## Troubleshooting

**"ArXiv agent not available"**
- Check the OpenAI API key in the `.env` file
- Verify the API key is valid and has credits

**"No papers found"**
- Check your internet connection
- Verify the arXiv website is accessible

**Slow GPT scoring**
- Normal for 20+ papers (includes rate limiting)
- Consider upgrading to a higher-tier OpenAI plan

**Permission errors**
- Ensure write permissions in the project directory
- Check that the `./reports/` directory can be created
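Most of the issues above are easier to diagnose with verbose logs. As a hedged sketch, `utils/logger.py` might wire the `LOG_LEVEL` environment variable into Python's standard logging roughly like this (the exact behavior and function name are assumptions):

```python
import logging
import os


def setup_logger(name: str = "lab_agent") -> logging.Logger:
    """Configure a logger whose level follows the LOG_LEVEL env var."""
    level_name = os.getenv("LOG_LEVEL", "INFO").upper()
    logger = logging.getLogger(name)
    logger.setLevel(getattr(logging, level_name, logging.INFO))
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s: %(message)s"))
        logger.addHandler(handler)
    return logger


os.environ["LOG_LEVEL"] = "DEBUG"
log = setup_logger()
log.debug("detailed trace enabled")
```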
Enable detailed logging:

```bash
# In .env file
DEBUG=true
LOG_LEVEL=DEBUG
```

## Roadmap

- Milestone 0: ArXiv Daily Update System
- Milestone 1: MCP Server for Nikon Microscope
- Milestone 2: Deep Learning Tool Integration
- Milestone 3: Multi-Agent SDK Wiring
- General scaling framework and best practices
- Advanced automation capabilities
## License

MIT License - see the LICENSE file for details.
## Contributing

- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
## Support

- Issues: GitHub Issues
- Documentation: See `CLAUDE.md` for detailed project context
- Testing: Follow `TESTING_GUIDE.md` for validation
Built for condensed matter physics research labs.