Build generative agent simulations.
Miniverse is a Python-first simulation library inspired by Stanford's Generative Agents study. The goal is to give developers everything needed to build simulations with comparable fidelity: hard deterministic physics, partial observability, structured memories, and optional LLM cognition. It focuses on agent logic and state management rather than rendering.
You write the physics (SimulationRules), decide how much cognition stays deterministic versus LLM-driven, and the orchestrator composes those modules so you can run reproducible baselines or improvised experiments.
Alpha: APIs are still moving. Every breaking change is logged in `CHANGELOG.md` with context.
```bash
# clone the repository
git clone https://github.com/miniverse-ai/miniverse.git
cd miniverse

# install dependencies into a local uv environment
uv sync
```

Run examples with `uv run ...`. A PyPI package is not published yet.
Each example folder ships a README with prompts, flags, and debugging tips.
- `01_hello_world` - Single deterministic agent, minimal `SimulationRules`. Run: `uv run python -m examples.workshop.01_hello_world.run`
- `02_deterministic` - Multiple agents with threshold logic and resource coupling.
- `03_llm_single` - Swaps in `LLMExecutor`; requires `LLM_PROVIDER`, `LLM_MODEL`, and a provider API key.
- `04_team_chat` - Natural-language coordination via the `communication` field; memories capture transcripts.
- `05_stochastic` - Adds randomness to physics while LLM cognition adapts.
- `monte_carlo.py` - Batch runner executing many trials with different seeds, printing statistics (mean backlog, clearance rate, worst case).
`examples/workshop/run.py` ties these ideas together: a deterministic baseline by default, an `--llm` flag, cadence controls, optional world-engine calls, and per-tick analysis hooks.
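The batch-running idea behind `monte_carlo.py` can be sketched in plain Python. This is a toy stand-in, not the script itself: `run_trial` is a hypothetical single-run function that drains a task backlog with seeded randomness, and the summary statistics mirror the ones the script prints (mean backlog, clearance rate, worst case).

```python
import random
import statistics

def run_trial(seed: int, ticks: int = 20) -> int:
    """Toy stand-in for one simulation run: drain a task backlog
    with per-tick randomness and return the final backlog."""
    rng = random.Random(seed)  # seeded RNG makes each trial reproducible
    backlog = 12
    for _ in range(ticks):
        backlog = max(0, backlog - rng.choice([0, 1, 2]))
    return backlog

seeds = range(30)
finals = [run_trial(s) for s in seeds]
print("mean backlog:", statistics.mean(finals))
print("clearance rate:", sum(f == 0 for f in finals) / len(finals))
print("worst case:", max(finals))
```

Because each trial is keyed to its seed, any interesting outlier can be replayed exactly by rerunning that one seed.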
- Tier-2 grid world with deterministic movement and ASCII perception windows.
- Demonstrates how `customize_perception()` can inject readable summaries while keeping structured data intact.
- Run: `uv run python -m examples.snake.run --ticks 40`
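The "ASCII perception window" idea is straightforward to illustrate without the library. The helper below is a hypothetical sketch, not Miniverse's renderer: it clips a square window around an agent and marks out-of-bounds cells, which is the kind of readable summary a `customize_perception()` hook could inject alongside the structured state.

```python
def ascii_window(grid, center, radius=1):
    """Render the (2*radius+1)-square window of `grid` around `center`
    as ASCII; cells outside the grid show as '#'."""
    cy, cx = center
    rows = []
    for y in range(cy - radius, cy + radius + 1):
        row = []
        for x in range(cx - radius, cx + radius + 1):
            inside = 0 <= y < len(grid) and 0 <= x < len(grid[0])
            row.append(grid[y][x] if inside else "#")
        rows.append("".join(row))
    return "\n".join(rows)

grid = [
    "....",
    ".A..",  # A = this agent
    "..S.",  # S = snake segment
    "....",
]
print(ascii_window(grid, (1, 1)))  # 3x3 view centered on the agent
```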
- Recreates the Generative Agents Valentine's scenario with planning, execution, reflection, and memory streams.
- Debug flags (`DEBUG_LLM`, `DEBUG_MEMORY`, `MINIVERSE_VERBOSE`) surface prompts, memories, and world updates.
- Run:

```bash
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-5-nano
export OPENAI_API_KEY=sk-your-key
uv run python examples/smallville/valentines_party.py
```

The folder includes a notebook and notes detailing the replication.
- `SimulationRules.apply_tick()` updates deterministic physics (resource drains, stochastic events).
- `build_agent_perception()` enforces partial observability (own status, shared dashboards, broadcasts, direct messages, optional grid window).
- Executors (deterministic or `LLMExecutor`) return an `AgentAction` per agent.
- `SimulationRules.process_actions()` can resolve the world deterministically; otherwise the world-engine LLM processes the actions.
- Persistence stores the new `WorldState` and actions (`InMemoryPersistence`, `JsonPersistence`, or `PostgresPersistence`).
- `MemoryStrategy` records observations so agents have context on the next tick.
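The tick pipeline above can be sketched in plain Python. Every name here is a toy stand-in for the corresponding Miniverse stage (the real versions operate on typed `WorldState` and `AgentAction` objects, not dicts):

```python
def apply_tick(state):
    """Deterministic physics: drain the backlog by one each tick."""
    return {**state, "backlog": max(0, state["backlog"] - 1)}

def build_agent_perception(state, agent_id):
    """Partial observability: an agent sees its own id plus one shared metric."""
    return {"you": agent_id, "backlog": state["backlog"]}

def executor(perception):
    """Deterministic policy standing in for an executor."""
    return {"type": "work"} if perception["backlog"] > 0 else {"type": "rest"}

def process_actions(state, actions):
    """Resolve validated actions against the world."""
    done = sum(a["type"] == "work" for a in actions.values())
    return {**state, "backlog": max(0, state["backlog"] - done)}

state = {"backlog": 5}
history = []  # what persistence would record each tick
for tick in range(4):
    state = apply_tick(state)
    actions = {aid: executor(build_agent_perception(state, aid))
               for aid in ("ava", "ben")}
    state = process_actions(state, actions)
    history.append(state["backlog"])
print(history)  # → [2, 0, 0, 0]
```

The same loop shape holds when an `LLMExecutor` replaces the deterministic policy; only the executor stage changes.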
Hooks such as `customize_perception()`, `should_stop()`, cadence utilities, and tick listeners let you adjust behavior without editing the orchestrator.
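As one example of a hook, a stop condition can end a run early once the world reaches a terminal state. The signature below is hypothetical, but the shape matches the description: a predicate over state and tick that the loop consults each iteration.

```python
def should_stop(state, tick, max_ticks=50):
    # stop early once the backlog clears, or at the tick budget
    return state["backlog"] == 0 or tick >= max_ticks

state = {"backlog": 3}
tick = 0
while not should_stop(state, tick):
    state = {"backlog": state["backlog"] - 1}  # stand-in for one full tick
    tick += 1
print(tick)  # → 3: the run ends as soon as the backlog hits zero
```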
```python
from pathlib import Path

from miniverse import Orchestrator, SimulationRules, ScenarioLoader, build_default_cognition
from miniverse.schemas import AgentAction, WorldState

loader = ScenarioLoader(scenarios_dir=Path("examples/scenarios"))
world_state, profiles = loader.load("workshop_baseline")
agents = {profile.agent_id: profile for profile in profiles}

class WorkshopRules(SimulationRules):
    def apply_tick(self, state: WorldState, tick: int) -> WorldState:
        updated = state.model_copy(deep=True)
        backlog = updated.resources.get_metric("task_backlog", default=6, label="Task Backlog")
        backlog.value = max(0, int(backlog.value) - 1)
        return updated

    def validate_action(self, action: AgentAction, state: WorldState) -> bool:
        return True

world_prompt = "You are the world engine. Apply validated actions deterministically."
agent_prompts = {
    agent_id: f"You are {profile.name}, a {profile.role}. Return an AgentAction JSON."
    for agent_id, profile in agents.items()
}
cognition = {agent_id: build_default_cognition() for agent_id in agents}

orchestrator = Orchestrator(
    world_state=world_state,
    agents=agents,
    world_prompt=world_prompt,
    agent_prompts=agent_prompts,
    simulation_rules=WorkshopRules(),
    agent_cognition=cognition,
)

# run() is async; call it from an async context (e.g. via asyncio.run)
result = await orchestrator.run(num_ticks=10)
```

Swap agents to `LLMExecutor`, add planners (`LLMPlanner` or deterministic alternatives), and wire reflection engines when you need higher-fidelity cognition.
- Orchestrator (`miniverse/orchestrator.py`) - tick loop, prompt preflight, persistence/memory integration, cadence handling.
- Simulation rules (`miniverse/simulation_rules.py`) - deterministic physics, validation, optional deterministic `process_actions()`, lifecycle hooks.
- Perception (`miniverse/perception.py`) - Stanford-style partial observability with grid visibility and ASCII helpers.
- Cognition (`miniverse/cognition/`) - planners, executors, reflection engines, prompt rendering, cadence utilities, scratchpads.
- Memory (`miniverse/memory.py`) - FIFO default; extend for weighted or semantic retrieval.
- Persistence (`miniverse/persistence.py`) - async backends for state, actions, memories.
- Environment helpers (`miniverse/environment/`) - graph occupancy, BFS pathfinding, grid move validation, visibility rendering.
- LLM utilities (`miniverse/llm_calls.py`, `miniverse/llm_utils.py`) - structured calls with schema validation and retry feedback.
- Scenario loader (`miniverse/scenario.py`) - JSON-to-world-state converter.
- `DEBUG_LLM=1` prints prompts and responses.
- `DEBUG_MEMORY=1` logs memory writes and retrievals.
- Tick listeners (the `tick_listeners` argument on `Orchestrator`) let you stream metrics or run custom analysis.
- The Monte Carlo script shows how to batch runs with different seeds and summarize outcomes.
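A tick listener is just a callable the loop invokes after each tick. The sketch below is a plain-Python illustration of the pattern (the exact callback signature Miniverse passes to `tick_listeners` may differ): a closure that appends one metric reading per tick to a log you analyze afterwards.

```python
def make_metric_listener(log):
    """Build a listener that records (tick, backlog) pairs into `log`."""
    def listener(tick, state):
        log.append((tick, state["backlog"]))
    return listener

log = []
listener = make_metric_listener(log)

# stand-in tick loop: the orchestrator would call the listener for us
state = {"backlog": 3}
for tick in range(3):
    state = {"backlog": max(0, state["backlog"] - 1)}
    listener(tick, state)
print(log)  # → [(0, 2), (1, 1), (2, 0)]
```

Because the listener only observes state, it can stream metrics to a file or dashboard without touching simulation logic.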
- `docs/USAGE.md` - scenario authoring, cognition wiring, cadence decisions.
- `docs/PROMPTS.md` - renderer placeholders, action catalog formatting, template conventions.
- `docs/architecture/` - deep dives on cognition flow, environment tiers, persistence design.
- `docs/RESEARCH.md` - research notes referencing Stanford Generative Agents, Smallville-inspired studies, branching narrative systems, structured simulation-state management, and other sources catalogued in `docs/research/`.
- `ISSUES.md` - active investigations and roadmap items.
- Run `uv run pytest` before opening a PR.
- Keep changes scoped; include transcripts or logs when demonstrating new agent behavior.
- We welcome new scenarios, physics modules, cognition strategies, and tooling improvements.
- Creator: Kenneth / @local0ptimist
- Co-conspirators: GPT-5 Codex, Claude Code, and everyone building stranger simulations than we predicted.
- Research inspirations: detailed commentary lives in `docs/RESEARCH.md` and `docs/research/`.
MIT. Fork responsibly.