OJU is a lightweight, extensible framework for building and managing AI agents powered by large language models (LLMs) from providers including OpenAI, Anthropic, and Google (Gemini).
- Multi-Provider Support: Seamlessly switch between different LLM providers (OpenAI, Anthropic, Gemini)
- Structured Prompts: Organize and manage your prompts in a clean, file-based system
- Simple API: Easy-to-use interface for interacting with different LLM providers
- Extensible: Add support for new LLM providers with minimal code
- Type Hints: Full type annotations for better development experience
- Comprehensive Testing: Thorough test coverage for reliable operation
- Documentation: Extensive documentation with examples
- Custom System Prompts: Override default prompts on the fly
- Error Handling: Robust error handling with meaningful messages
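The extensibility point above can be pictured as a small provider registry. The sketch below is purely illustrative of that pattern; `BaseProvider`, `register_provider`, and `EchoProvider` are hypothetical names, not OJU's actual internals.

```python
# Illustrative provider-registry pattern (NOT OJU's real API):
# providers register themselves under a name, and a dispatcher
# looks them up at call time.

PROVIDERS = {}

def register_provider(name):
    """Class decorator that adds a provider class to the registry."""
    def wrap(cls):
        PROVIDERS[name] = cls
        return cls
    return wrap

class BaseProvider:
    """Minimal interface a new provider would implement."""
    def complete(self, model: str, prompt: str) -> str:
        raise NotImplementedError

@register_provider("echo")
class EchoProvider(BaseProvider):
    """Toy provider that echoes the prompt back, for demonstration."""
    def complete(self, model: str, prompt: str) -> str:
        return f"[{model}] {prompt}"

def run(provider: str, model: str, prompt: str) -> str:
    """Dispatch a prompt to the named provider."""
    return PROVIDERS[provider]().complete(model, prompt)

print(run("echo", "test-model", "hello"))  # → [test-model] hello
```

With this shape, adding a provider is one decorated class, which is the kind of "minimal code" extension the feature list refers to.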
Install OJU using pip:

```bash
pip install oju
```

For development:

```bash
git clone https://github.com/ojasaklechat41/oju.git
cd oju
pip install -e ".[dev]"
```

Requirements:

- Python 3.10 or higher
- API keys for your preferred LLM providers

Set your API keys as environment variables or in a .env file:

```bash
# .env file
OPENAI_API_KEY='your-openai-key'
ANTHROPIC_API_KEY='your-anthropic-key'
GOOGLE_API_KEY='your-google-key'
```

```python
from oju.agent import Agent

# Initialize the agent with OpenAI
response = Agent(
    agent_name="agent_name",
    model="gpt-4-turbo",
    provider="openai",
    api_key="your-api-key",  # or use environment variables
    prompt_input="What's the weather like today?",
    custom_system_prompt="You are a helpful weather assistant."  # Optional override
)
print(response)
```

```python
import os
from oju.agent import Agent

# Using environment variables
response = Agent(
    agent_name="agent_name",
    model="claude-3-opus-20240229",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    provider="claude",
    prompt_input="Tell me a joke about programming"
)
print(response)
```

Example output:

```
Why do programmers prefer dark mode?
Because light attracts bugs!
```
| Provider | Supported Models |
|---|---|
| OpenAI | gpt-4, gpt-3.5-turbo, etc. |
| Anthropic | claude-3-opus-20240229, claude-3-sonnet-20240229, etc. |
| Google Gemini | gemini-pro, gemini-1.5-pro |
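The table above can be encoded as a simple lookup so that a provider/model pair is validated before any API call is made. `SUPPORTED_MODELS` and `validate_model` below are illustrative helpers, not part of OJU's API, and the model sets are deliberately partial (the table itself says "etc."):

```python
# Partial provider -> model lookup built from the table above.
# Illustrative only; not OJU's actual validation logic.
SUPPORTED_MODELS = {
    "openai": {"gpt-4", "gpt-3.5-turbo"},
    "claude": {"claude-3-opus-20240229", "claude-3-sonnet-20240229"},
    "gemini": {"gemini-pro", "gemini-1.5-pro"},
}

def validate_model(provider: str, model: str) -> None:
    """Fail fast with a clear message instead of erroring inside an API call."""
    if provider not in SUPPORTED_MODELS:
        raise ValueError(f"Unknown provider: {provider!r}")
    if model not in SUPPORTED_MODELS[provider]:
        raise ValueError(f"Model {model!r} is not known for provider {provider!r}")

validate_model("claude", "claude-3-opus-20240229")  # passes silently
```

Checking inputs up front like this keeps failures local and produces the kind of meaningful error messages the feature list promises.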
For better security and flexibility, manage your configuration using environment variables or a .env file:

```bash
# .env file
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GOOGLE_API_KEY=your_google_key_here
```

Handle different types of errors gracefully:
```python
from oju.agent import Agent

try:
    response = Agent(
        agent_name="expert",
        model="gpt-4",
        provider="openai",
        api_key="invalid_key",
        prompt_input="Your question here"
    )
    print(response)
except ValueError as e:
    print(f"Validation error: {e}")
except Exception as e:
    print(f"An error occurred: {e}")
```

Run multiple agents in parallel for complex workflows:
```python
from concurrent.futures import ThreadPoolExecutor
from oju.agent import Agent

def run_agent(config):
    return Agent(**config)

# Define agent configurations
agent_configs = [
    {
        'agent_name': 'researcher',
        'model': 'gpt-4',
        'provider': 'openai',
        'prompt_input': 'Research the latest AI trends',
        'temperature': 0.7
    },
    {
        'agent_name': 'summarizer',
        'model': 'claude-3-opus-20240229',
        'provider': 'claude',
        'prompt_input': 'Summarize this research',
        'temperature': 0.3
    },
    {
        'agent_name': 'critic',
        'model': 'gemini-pro',
        'provider': 'gemini',
        'prompt_input': 'Provide constructive criticism',
        'temperature': 0.5
    }
]

# Run agents in parallel
with ThreadPoolExecutor() as executor:
    results = list(executor.map(run_agent, agent_configs))

# Process results
for i, result in enumerate(results):
    print(f"Agent {i+1} output:")
    print(result)
    print("-" * 50)
```

Switch between different LLM providers with ease:
```python
import os
from oju.agent import Agent

# Using Anthropic Claude
claude_response = Agent(
    agent_name="expert",
    model="claude-3-opus-20240229",
    provider="claude",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    prompt_input="Explain quantum computing in simple terms"
)

# Using Google Gemini
gemini_response = Agent(
    agent_name="expert",
    model="gemini-pro",
    provider="gemini",
    api_key=os.getenv("GOOGLE_API_KEY"),
    prompt_input="What are the latest trends in AI?"
)
```

For better performance with large prompts or multiple requests:
```python
import os
from concurrent.futures import ThreadPoolExecutor
from oju.agent import Agent

def process_query(query):
    return Agent(
        agent_name="expert",
        model="gpt-4",
        provider="openai",
        api_key=os.getenv("OPENAI_API_KEY"),
        prompt_input=query
    )

queries = ["Query 1", "Query 2", "Query 3"]
with ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(process_query, queries))

for result in results:
    print(result)
```

We welcome contributions! If you're interested in contributing to OJU, please read our Contributing Guidelines.
- Python 3.10 or higher
- Poetry (recommended) or pip
- Git
- Fork and clone the repository:

  ```bash
  git clone https://github.com/your-username/oju.git
  cd oju
  ```

- Set up a virtual environment (recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  # Using pip
  pip install -e ".[dev]"

  # Or using Poetry
  poetry install
  ```
Run the test suite:

```bash
# Run all tests
pytest

# Run with coverage report
pytest --cov=oju --cov-report=term-missing

# Run a specific test file
pytest tests/test_agent.py -v

# Run tests with detailed output
pytest -v
```

We use several tools to maintain code quality:
```bash
# The linting and testing tools are installed with the dev extras
pip install -e ".[dev]"
```

Build the documentation locally:
```bash
cd docs
make html
open _build/html/index.html  # Open in browser
```

For test coverage:

```bash
pytest --cov=oju
```

- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes and commit: `git commit -m "Add your feature"`
- Push to the branch: `git push origin feature/your-feature`
- Open a Pull Request
- Update the version in `pyproject.toml`
- Update `CHANGELOG.md` with the new version
- Create a new release on GitHub
- Publish to PyPI:

  ```bash
  rm -rf dist/*
  python -m build
  twine upload dist/*
  ```
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
For detailed information on how to contribute, please see our Contributing Guide.
- Report bugs by opening a new issue
- Suggest new features or enhancements
- Submit pull requests for bug fixes or new features
- Improve documentation
- Add test cases
- Spread the word about OJU
Ojas Aklecha
- Twitter: @ojasaklecha
- Email: [email protected]
- GitHub: @ojasaklechayt
Project Link: https://github.com/ojasaklechat41/oju
- OpenAI for their amazing language models
- Anthropic for Claude models
- Google AI for Gemini models
- All the contributors who helped improve this project