Safe Execution Framework for Autonomous Coding Agents
Stop AI agents from eating your infrastructure.
Note: This repository is currently a placeholder while the core SEFACA code is being developed in an incubator. The installation and usage instructions below represent the planned functionality.
```sh
curl -sSL https://sefaca.dev/install.sh | sh
```

```sh
sefaca run --context "[builder:ai:$USER@local(myapp:main)]" "python ai_agent.py"
```
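To drive the same planned CLI from a script, a thin wrapper might look like the sketch below. Everything except the documented `sefaca run --context` form is an assumption, including the function name and the chosen persona:

```python
# Hypothetical wrapper around the planned `sefaca run` CLI shown above.
import getpass
import subprocess

def run_agent(repo: str, branch: str, command: str) -> int:
    # Build the documented context string: [persona:agent:reviewer@env(repo:branch)]
    context = f"[builder:ai:{getpass.getuser()}@local({repo}:{branch})]"
    return subprocess.run(["sefaca", "run", "--context", context, command]).returncode

if __name__ == "__main__":
    run_agent("myapp", "main", "python ai_agent.py")
```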
- Execution Context: `[persona:agent:reviewer@env(repo:branch)]` - Complete tracking of every AI action
- Resource Limits: CPU, memory, and process constraints
- Pattern Detection: Blocks dangerous operations before execution (see the sketch after this list)
- Audit Trail: Every action logged and traceable
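None of the enforcement code is public yet, so the following is only a rough Python sketch of how resource limits plus pre-execution pattern detection can be combined on Linux. The limit values, blocked patterns, and function names are illustrative assumptions, not SEFACA's actual implementation:

```python
# Illustrative sketch only -- not the SEFACA implementation.
# Idea: refuse obviously dangerous commands up front, then run the agent's
# command as a child process with CPU, memory, and process limits applied.
import re
import resource
import subprocess

BLOCKED_PATTERNS = [  # hypothetical examples of "dangerous operations"
    r"rm\s+-rf\s+/",
    r"curl[^|]*\|\s*sh",
    r"mkfs",
]

def apply_limits():
    """Runs in the child process before exec; the values are arbitrary examples."""
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))                   # 60 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))  # 512 MiB address space
    resource.setrlimit(resource.RLIMIT_NPROC, (32, 32))                 # at most 32 processes

def safe_run(command: str) -> int:
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, command):
            raise PermissionError(f"blocked by pattern: {pattern}")
    return subprocess.run(command, shell=True, preexec_fn=apply_limits).returncode

if __name__ == "__main__":
    safe_run("python ai_agent.py")
```

A real framework would presumably also write each decision to the audit trail; this sketch only blocks or runs.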
SEFACA uses a structured context format to track and identify every AI action:
```
[persona:agent:reviewer@environment(repository:branch)]
```
- persona: The role or type of agent (e.g., `builder`, `reviewer`, `tester`)
- agent: The AI system identifier (e.g., `ai`, `gpt4`, `claude`)
- reviewer: The human or system reviewing the actions (e.g., `$USER`, `ci-bot`)
- environment: The execution environment (e.g., `local`, `staging`, `prod`)
- repository: The code repository being worked on
- branch: The git branch being modified
Example: `[builder:ai:jwalsh@local(myapp:feature-123)]`
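Because every action is keyed by this context string, it can help to see it broken into fields programmatically. The parser below is a sketch based only on the format documented above; the regex and the `ExecutionContext` class are illustrative assumptions, not part of SEFACA:

```python
# Sketch of parsing the planned context format:
#   [persona:agent:reviewer@environment(repository:branch)]
# The regex and class are illustrative, not the SEFACA API.
import re
from dataclasses import dataclass

CONTEXT_RE = re.compile(
    r"^\[(?P<persona>[^:]+):(?P<agent>[^:]+):(?P<reviewer>[^@]+)"
    r"@(?P<environment>[^(]+)\((?P<repository>[^:]+):(?P<branch>[^)]+)\)\]$"
)

@dataclass
class ExecutionContext:
    persona: str
    agent: str
    reviewer: str
    environment: str
    repository: str
    branch: str

def parse_context(raw: str) -> ExecutionContext:
    match = CONTEXT_RE.match(raw)
    if match is None:
        raise ValueError(f"not a valid execution context: {raw!r}")
    return ExecutionContext(**match.groupdict())

print(parse_context("[builder:ai:jwalsh@local(myapp:feature-123)]"))
```

This prints `ExecutionContext(persona='builder', agent='ai', reviewer='jwalsh', environment='local', repository='myapp', branch='feature-123')`.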
Documentation is being developed. See docs/README.md for planned content.
License: MIT
For more information, visit sefaca.dev