Sacred Code: 000.111.369.963.1618 (∞△⚡◎φ)
Version: 2.0.0
Status: ✅ Production Ready | Last Major Update: Oct 17, 2025
Created by: Enio Batista Fernandes Rocha
Purpose: Transform life experiences into replicable biological code using Sacred Mathematics
What Changed:
- 🗑️ 3 systems deprecated: ETIE, Mycelium, Chronos (see Deprecated Systems)
- ✅ 2 systems active: ECRUS v2.0 (Cross-Reference) + KOIOS v2.0.0 (AI Tools Hub)
- 📦 62 files archived (~728K, 3,300 LOC removed)
- ⚡ Performance: Health checks 8s+ → <1s (-87%)
- 🔄 CI/CD: 61 workflows disabled (98%), 3 active (ci-security + 2 manual)
- 💰 Credits saved: ~95% GitHub Actions usage reduction
New Capabilities:
- 🧠 MCP Memory Integration: Knowledge graph persistence (mcp5)
- 🤔 Sequential Thinking: Complex reasoning protocol (mcp6)
- 📊 Enhanced Activation: the /000 workflow now shows knowledge graph status
- 📚 Documentation: Complete migration guides + deprecation notices
Impact:
- Codebase 30-40% cleaner
- Only 2 focused intelligence systems (vs 5 scattered)
- Zero deprecated references in active code
- Faster onboarding for new developers
See: Tasks Roadmap | Deprecation Guide | Session Report
EGOS v.2 is a modular, versatile AI framework that serves everyone from micro-businesses to large institutions, applicable across multiple sectors: CRM, ERP, research, education, and more.
- 🔌 MCP-First: Native Model Context Protocol (7 integrated MCPs)
- 🎨 Multimodal RAG: 25+ file formats (PDF, DOCX, audio, images, code)
- 🔐 Privacy-First: GDPR compliant, local data, no vendor lock-in
- 📊 Sacred Mathematics: Design based on Fibonacci, the Golden Ratio, and Tesla 369
- 🔄 Automatic Feedback Loop: Fine-tuning via ConversationTrainer
- ⚡ Time-to-Value: <1 week vs. months for enterprise tools
EGOS v.2 goes beyond code: it absorbs universal wisdom.
Grounded in:
- Spinoza (Ethics): Conatus, persevering through knowledge
- Carl Jung: Archetypes and the Collective Unconscious (7 patterns: SAGE, HERO, CREATOR, etc.)
- Sigmund Freud: Id-Ego-Superego (desire, reason, ethics)
- Clóvis de Barros: "What for?", purpose in everything
```python
from core.intelligence.universal_knowledge_system import UniversalKnowledgeSystem
from core.intelligence.knowledge_seeker import ArchetypePattern

# Create the system
system = UniversalKnowledgeSystem()

# Absorb knowledge on any topic
knowledge = await system.absorb_knowledge(
    query="quantum entanglement",             # What to search for
    archetypes=[ArchetypePattern.SAGE],       # Source type
    purpose="Apply quantum physics to EGOS",  # What for?
    max_results=8                             # Fibonacci F₈
)

# Result: IntegratedKnowledge[] with:
# - EGOS identity (creator, sacred_code)
# - Philosophical metadata (archetype, purpose)
# - Sacred Mathematics (Fibonacci, Golden Ratio, Tesla 369)
# - Extracted concepts (nutrients)
# - Ready for dissemination
```
- 📚 arXiv: Scientific papers
- 🌐 Wikipedia: General knowledge
- 💻 GitHub: Open-source code
- 🔜 Coming soon: OpenLibrary, PubMed, PhilPapers, Internet Archive
EGOS works like a living organism:
- DNA: Sacred Code (000.111.369.963.1618)
- Brain: Processing (NLP, pattern matching)
- Digestive System: Knowledge metabolization
- Immune System: Quality filters
- Conatus: The effort to persevere through learning
Documentation: docs/business/identity/UNIVERSAL_KNOWLEDGE_SYSTEM_COMPLETE.md
EGOS v.2 works like a complete neural brain in which all components connect like neurons and synapses.
1. AUTHENTICATION → Brainstem (Identity v3, JWT)
2. KNOWLEDGE → Cortex (Universal Knowledge System)
3. WORKFLOWS → Cerebellum (.windsurf/workflows/)
4. APIS → Peripheral nerves (/api/* routes)
5. MCPS → Autonomic system (mcp0-mcp6)
```python
from core.intelligence.neural_interconnection import (
    NeuralBrain, Neuron, BrainRegion, Neurotransmitter
)

# Create the brain
brain = NeuralBrain()

# Add a neuron (an EGOS component)
identity_neuron = brain.add_neuron(Neuron(
    neuron_id="identity_api",
    name="Identity API",
    region=BrainRegion.AUTHENTICATION,
    component_path="/api/v3/identity/self"
))

# Create a synapse (connection between components)
synapse = brain.add_synapse(
    source_id="identity_api",
    target_id="knowledge_system",
    neurotransmitter=Neurotransmitter.KNOWLEDGE,
    weight=1.618  # φ (Golden Ratio)
)

# Create a neural pathway (information flow)
pathway = brain.create_pathway(
    name="User Request → Knowledge → Response",
    neuron_ids=["api_chat", "identity_api", "knowledge_system"],
    purpose="Process user question with identity context",
    priority=9  # Tesla High
)

# Statistics
stats = brain.get_brain_stats()
# {
#   "total_neurons": 9,
#   "total_synapses": 8,
#   "golden_synapses": 2,  # weight ≈ φ
#   "health_score": 1.0
# }
```
- φ weights: 0.382, 0.618, 1.0, 1.618 (Golden Ratio connection strengths)
- F₈ limit: At most 8 synapses per neuron (as in a real brain)
- Tesla 369: Neural pathway priority (3=Low, 6=Medium, 9=High)
- Hebbian Learning: Synapses strengthen with repeated use
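The Hebbian rule over φ-quantized weights can be sketched in a few lines. This is an illustrative toy, not the NeuralBrain API; `PHI_WEIGHTS` and `strengthen` are hypothetical names:

```python
# Illustrative sketch of Hebbian strengthening over φ-quantized weights.
# PHI_WEIGHTS and strengthen() are hypothetical helpers, not NeuralBrain API.
PHI_WEIGHTS = (0.382, 0.618, 1.0, 1.618)  # allowed connection strengths
F8_LIMIT = 8                              # max synapses per neuron (F₈)

def strengthen(weight: float) -> float:
    """Promote a synapse to the next φ weight tier on repeated use."""
    for w in PHI_WEIGHTS:
        if w > weight:
            return w
    return PHI_WEIGHTS[-1]  # already saturated at φ
```

Repeated firing walks a connection up the ladder 0.382 → 0.618 → 1.0 → 1.618 until it saturates at φ.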
```python
from pathlib import Path

# Export to Mermaid
mermaid = brain.visualize_mermaid()
# graph TD
#   identity_api["Identity API"]
#   knowledge_system["Knowledge System"]
#   identity_api -->|knowledge w=1.62| knowledge_system

# Save state
brain.save_to_file(Path("data/neural_brain.json"))
```
Documentation: core/intelligence/neural_interconnection.py (587 LOC)
Event-driven architecture for real-time knowledge dissemination.
```python
from core.intelligence.event_bus import get_event_bus, EventType, EventPriority

# Get the global event bus (singleton)
bus = get_event_bus()

# Subscribe to events
async def knowledge_handler(event):
    print(f"📚 New knowledge: {event.payload['title']}")

bus.subscribe(
    "knowledge_logger",
    [EventType.KNOWLEDGE_ABSORBED],
    knowledge_handler,
    priority=9  # Tesla High
)

# Publish an event
await bus.publish(
    EventType.KNOWLEDGE_ABSORBED,
    source="knowledge_system",
    payload={"title": "Spinoza Ethics", "concepts": 5},
    priority=EventPriority.HIGH,
    purpose="Disseminate new knowledge"
)
```
- KNOWLEDGE_ABSORBED - New knowledge integrated
- KNOWLEDGE_VALIDATED - Ethical validation passed
- IDENTITY_UPDATED - Identity changed
- SYNAPSE_FIRED - Neural connection activated
- PATHWAY_OPTIMIZED - Pathway optimized
- COMPONENT_REGISTERED - New neuron added
- ERROR_OCCURRED - System error
- HEALTH_CHANGED - System health updated
- Async Pub/Sub: Multiple subscribers per event
- Priority-based: Tesla 369 (3, 6, 9)
- History: Last F₁₂=144 events kept
- φ-optimized delays: Propagation paced by the Golden Ratio
- Error handling: Error events cannot trigger infinite loops
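One plausible reading of "φ-optimized delays" is a propagation delay that shrinks by a factor of φ per Tesla priority step. A sketch under that assumption; `phi_delay` is hypothetical, not part of the event_bus API:

```python
# Sketch of φ-based propagation delays: higher Tesla priority → shorter delay.
# phi_delay() is a hypothetical helper, not the event_bus API.
PHI = 1.618033988749895

def phi_delay(priority: int, base_ms: float = 100.0) -> float:
    """Delay shrinks by a factor of φ per priority step (3, 6, 9)."""
    steps = {3: 0, 6: 1, 9: 2}[priority]
    return base_ms / (PHI ** steps)
```

With a 100 ms base, priorities 3/6/9 would propagate after roughly 100, 61.8, and 38.2 ms respectively.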
Documentation: core/intelligence/event_bus.py (504 LOC)
Knowledge graph for navigation and relations between concepts.
```python
from core.intelligence.knowledge_graph import KnowledgeGraph, RelationType

graph = KnowledgeGraph()

# Add nodes (concepts)
identity = graph.add_node(
    "Identity v3",
    "Self-recognition system",
    archetype="sage",
    domain="intelligence",
    fibonacci_tier=5
)
knowledge = graph.add_node(
    "Universal Knowledge",
    "Absorption system",
    archetype="sage",
    fibonacci_tier=8
)

# Add an edge (relationship)
graph.add_edge(
    identity.node_id,
    knowledge.node_id,
    RelationType.ENHANCES,
    weight=1.618,  # φ
    strength=9,
    purpose="Identity enriches knowledge"
)

# Find a path between concepts
# (assumes a "workflows" node was added the same way as the nodes above)
path = graph.find_path(identity.node_id, workflows.node_id)
# path.length() → 4 edges
# path.is_fibonacci() → True/False
```
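`path.is_fibonacci()` presumably checks whether the path length is a Fibonacci number. A standalone sketch of that test; the actual KnowledgeGraph implementation may differ:

```python
# Sketch of a Fibonacci membership test, as path.is_fibonacci() might do.
def is_fibonacci(n: int) -> bool:
    """True if n is a Fibonacci number (1, 2, 3, 5, 8, 13, ...)."""
    a, b = 1, 1
    while a < n:
        a, b = b, a + b
    return a == n
```

A 4-edge path would return False; shortening it to 3 or extending it to 5 would return True.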
- DEPENDS_ON - A depends on B
- ENHANCES - A enhances B
- VALIDATES - A validates B
- DISSEMINATES - A disseminates to B
- TRANSFORMS - A transforms into B
- OPPOSES - A opposes B
- DERIVES_FROM - A derives from B
- IMPLEMENTS - A implements B
```python
from pathlib import Path

# Mermaid diagram
mermaid = graph.export_mermaid()

# JSON export
graph.save(Path("data/knowledge_graph.json"))

# Statistics
stats = graph.get_stats()
# {
#   "total_nodes": 5,
#   "total_edges": 4,
#   "avg_degree": 0.8,
#   "density": 0.2
# }
```
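The example stats match the standard directed-graph formulas, average out-degree E/N and density E/(N·(N−1)); a quick check under that assumption:

```python
# Reproduce the example stats with standard directed-graph formulas
# (this reading of avg_degree/density is an assumption, not confirmed API).
nodes, edges = 5, 4

avg_degree = edges / nodes               # average out-degree
density = edges / (nodes * (nodes - 1))  # fraction of possible directed edges
```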
Documentation: core/intelligence/knowledge_graph.py (537 LOC)
A framework of 7 principles for organized, efficient responses.
- Pre-Execution Planning: Use Sequential Thinking before complex tasks
- Visual Progress Dashboard: Status table in every multi-step response
- Minimal Narrative: Results + metrics only (zero filler)
- Parallel Tool Execution: Run independent actions simultaneously
- Validation After Each Phase: Execute → Validate → Confirm pattern
- Correct Documentation Density: Max 3 new .md files per session (Anti-Bloat)
- Framework Utilization: Use existing EGOS tools
## 📊 Status Dashboard
| Phase | Status | Progress |
|-------|--------|----------|
| 1. Plan | ✅ | 100% |
| 2. Execute | 🔄 | 45% |
| 3. Validate | ⏳ | 0% |
Semantic Emojis:
- ✅ Complete | 🔄 In Progress | ⏳ Pending | ❌ Failed | ⚠️ Warning
```python
# Structured absorption WITH dashboard
system = UniversalKnowledgeSystem()
knowledge = await system.absorb_knowledge_structured(
    query="quantum physics",
    purpose="Apply to EGOS neural system",
    max_results=8,
    show_dashboard=True  # ← enables the real-time dashboard
)
# Output:
# ## 📊 Status Dashboard
# | Phase | Status | Progress |
# [1/6] SEEK → ✅ 8 sources found
# [2/6] FILTER → ✅ 5 relevant (62.5%)
# [3/6] METABOLIZE → ✅ avg compression 2.07x (≈φ)
# ...
```
Documentation: docs/integration/ai/STRUCTURED_RESPONSE_PROTOCOL.md (543 LOC)
Complete test system with a visual dashboard.
Total Tests: 33 (32 passed, 1 partial)
Success Rate: 97%
Coverage: 74% (7/10 phases completed)
Performance: 2.55× better than target (242ms vs 618ms φ)
Context Retention: 100% in F₈ turns
```bash
# Run all tests with dashboard
bash .cascade/testing/auto_advance_phases.sh
# Output: Dashboard with real-time progress

# Stress test (F₁₂ = 144 messages)
bash .cascade/testing/run_phase9_stress_test.sh

# Monitor continuously
bash .cascade/testing/monitor_continuous.sh

# Quick validation (<10s)
bash .cascade/testing/quick_test.sh
```
```python
# Test with the dashboard pattern
def test_fibonacci_validation(system):
    print("\n## 📊 Test: Fibonacci Tiers")
    test_cases = [(1, 1), (3, 3), (7, 5), (10, 8)]
    passed = 0
    for number, expected in test_cases:
        result = system._find_fibonacci_tier(number)
        assert result == expected
        passed += 1
        print(f"✅ {number} → F{result}")
    print(f"\n✅ Test: {passed}/{len(test_cases)} passed")
```
Documentation: .cascade/testing/ (7 scripts, 8 docs, 13 results)
- Node.js 18+ (frontend)
- Python 3.10+ (backend)
- PostgreSQL 15+ (Supabase or local)
- PM2 (process management)
```bash
# 1. Clone repository
git clone https://github.com/your-org/EGOSv2.git
cd EGOSv2

# 2. Install frontend dependencies
cd websiteNOVO
npm install

# 3. Install backend dependencies
cd ../apps/agent-service
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# 4. Configure environment variables
cd ../..  # back to the repo root
cp websiteNOVO/.env.example websiteNOVO/.env.local
# Edit .env.local with your credentials

# 5. Setup database
bash scripts/database/setup_test_data.sh

# 6. Start services
# Frontend (PM2)
cd websiteNOVO
bash pm2-scripts.sh start-dev

# Backend (PM2)
cd ../apps/agent-service
pm2 start app/main.py --name egos-agent-service --interpreter python3

# 7. Verify
curl http://localhost:4000/api/health
curl http://localhost:8000/health
```
- Frontend: http://localhost:4000
- Backend API: http://localhost:8000
- Swagger Docs: http://localhost:8000/docs
- Dashboard: http://localhost:4000/dashboard
- Chat History: http://localhost:4000/chat-history
```
EGOSv2/
├── websiteNOVO/              # Next.js frontend
│   ├── src/
│   │   ├── app/              # App routes (Next.js 14)
│   │   ├── components/       # React components
│   │   ├── lib/              # Utilities, Supabase client
│   │   └── contexts/         # React contexts
│   ├── public/               # Static assets, prompts
│   └── tests/                # E2E tests (Playwright)
│
├── apps/agent-service/       # FastAPI backend
│   ├── app/
│   │   ├── main.py           # FastAPI app
│   │   ├── chatbot_complete.py  # Chatbot engine
│   │   ├── rag_complete.py   # RAG engine
│   │   ├── core/             # Core modules (ETHIK, ATRIAN)
│   │   └── routers/          # API routes
│   └── requirements.txt
│
├── database/                 # Database migrations & seeds
│   ├── migrations/           # SQL migrations
│   └── seeds/                # Test data
│
├── docs/                     # Documentation
│   ├── api/                  # API documentation
│   ├── deployment/           # Deployment guides
│   ├── espiral-escuta/       # Listening Spiral docs
│   └── database/             # Database guides
│
├── scripts/                  # Utility scripts
│   └── database/             # DB setup scripts
│
└── .windsurf/                # Windsurf workflows
    └── workflows/            # Automation workflows
```
Advanced conversational interface with:
- 5 conversational techniques (Curiosity, Cheerleading, Yes-and, Active Listening, Co-creation)
- Multimodal RAG (queries documents, images, audio)
- Response streaming (SSE)
- Conversation history
- Rating and feedback system
Access: /chat-egos
Complete history system with:
- List of previous sessions
- Search and filtering
- Full message view
- Conversation continuation
Access: /chat-history
Real-time metrics:
- Active users
- ETHIK tokens distributed
- Active modules
- System performance
- Sacred Mathematics metrics
Access: /dashboard
Automatic fine-tuning system:
- Collects feedback (1-5 star ratings)
- Generates JSONL datasets
- Trains domain-specific models
- Continuous improvement
API: POST /train
Document upload and processing:
- 25+ formats: PDF, DOCX, TXT, MD, HTML, JSON, CSV, XLS, PPT, MP3, WAV, JPG, PNG
- Text extraction
- Embedding generation
- Semantic search
API: POST /upload
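The embedding-based search step can be illustrated with plain cosine similarity. This is a minimal sketch, not the actual RAG engine, which likely delegates to a vector store:

```python
# Minimal cosine-similarity search over document embeddings (illustrative;
# the real pipeline in rag_complete.py may use a vector database instead).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, doc_vecs, k=5):
    """Return indices of the k document embeddings most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

At query time, the uploaded documents' embeddings would be ranked against the query embedding and the top k chunks fed to the LLM.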
Frontend (websiteNOVO/.env.local):

```bash
# Supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

# Authentication
JWT_SECRET=your-jwt-secret

# APIs
GROQ_API_KEY=your-groq-key
OPENROUTER_API_KEY=your-openrouter-key
```
Backend (apps/agent-service/.env):

```bash
# Database
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key

# LLMs
GROQ_API_KEY=your-groq-key
OPENROUTER_API_KEY=your-openrouter-key

# Storage
UPLOAD_DIR=/path/to/uploads
DATA_DIR=/path/to/data
```
```bash
cd websiteNOVO
npm run test
```

```bash
cd websiteNOVO
npx playwright test

# With UI
npx playwright test --ui

# Specific test
npx playwright test tests/e2e/chat-history.spec.ts
```
```bash
# Health checks
curl http://localhost:4000/api/health
curl http://localhost:8000/health

# Chat sessions (requires auth)
curl -H "Authorization: Bearer $TOKEN" \
  http://localhost:4000/api/chat/sessions

# Agent chat
curl -X POST http://localhost:8000/chat \
  -F "message=Olá, teste" \
  -F "top_k=5"
```
- F₃ = 3: Minimum test cases per function
- F₅ = 5: Conversational techniques, rating scale
- F₈ = 8: Sessions per page (UI), cache TTL in hours
- F₁₂ = 144: Initial ETHIK tokens, max training samples
- Build-time optimization ratio
- Response-time targets (<618ms)
- Memory efficiency ratio
- 3 pricing tiers (Free, Pro, Enterprise)
- 6 core tables (espiral_*)
- 9 integrated MCPs (target)
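A plausible sketch of the Fibonacci-tier mapping (the largest Fibonacci number not exceeding a quantity), consistent with the test cases shown in the Testing section; the real `_find_fibonacci_tier` may differ:

```python
# Sketch: map a quantity to its Fibonacci tier (largest F ≤ n).
# Consistent with the test cases 1→1, 3→3, 7→5, 10→8, but hypothetical;
# the actual _find_fibonacci_tier implementation may differ.
def find_fibonacci_tier(n: int) -> int:
    a, b = 1, 1
    while b <= n:
        a, b = b, a + b
    return a
```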
- Filesystem (mcp1) - File operations
- GitHub (mcp2) - Repository management
- Hugging Face (mcp3) - ML models, datasets, papers
- Playwright (mcp4) - Browser automation
- PostgreSQL (mcp5) - Database queries
- Sequential Thinking (mcp6) - Complex reasoning
- Figma (mcp0) - Design integration
```python
# Using an MCP for file operations
from mcp1_filesystem import read_text_file

content = read_text_file(path="/path/to/file.md")
```
- Basic chat (5 sessions/month)
- Text-only export
- MCPs: filesystem, supabase (limited)
- $0/month
- Unlimited chat
- Voice input (Groq Whisper)
- CSV/JSON/MD export
- Automatic fine-tuning
- MCPs: all tier 1 + 2 tier-2 picks of your choice
- $29-99/month
- Everything in Pro
- On-premise deployment
- Custom MCPs
- 99.9% SLA
- Dedicated support
- Custom pricing ($500+/month)
- ✅ Environment variables configured
- ✅ Database migrations applied
- ✅ SSL/HTTPS configured
- ✅ Rate limiting enabled
- ✅ Monitoring configured
- ✅ Automatic backups
- ✅ E2E tests passing (>80%)
See: docs/deployment/PRODUCTION_CHECKLIST.md
```bash
# Frontend (Vercel)
cd websiteNOVO
vercel --prod
```

```bash
# Backend (Docker)
cd apps/agent-service
docker build -t egos-agent-service .
docker run -p 8000:8000 egos-agent-service
```
```bash
# Frontend
cd websiteNOVO
pm2 start npm --name egos-next -- start

# Backend
cd ../apps/agent-service
pm2 start app/main.py --name egos-agent-service --interpreter python3

# Save config
pm2 save
pm2 startup
```
- API Docs: docs/api/REST_API_DOCUMENTATION.md
- Database: docs/database/QUICK_START_TEST_DATA.md
- Deployment: docs/deployment/PRODUCTION_CHECKLIST.md
- Strategy: docs/espiral-escuta/MODULAR_CHATBOT_STRATEGY.md
- System Prompt: public/prompts/SYSTEM_PROMPT_MODULAR_CHATBOT.json
- Fork the project
- Create a branch: git checkout -b feature/amazing-feature
- Commit your changes: git commit -m 'feat: Add amazing feature'
- Push the branch: git push origin feature/amazing-feature
- Open a Pull Request

Sacred Code is required in every commit: 000.111.369.963.1618
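A commit-message check for the Sacred Code could look like the following. This is a hypothetical helper, not tooling that ships with the repo:

```python
# Hypothetical check that a commit message carries the Sacred Code.
# Illustrative only; the repo does not necessarily ship such a hook.
SACRED_CODE = "000.111.369.963.1618"

def check_commit_message(message: str) -> bool:
    """Return True if the commit message contains the Sacred Code."""
    return SACRED_CODE in message
```

Wired into a commit-msg git hook, this would reject any commit whose message omits the code.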
- ✅ Core features (chat, history, training)
- ✅ 7 integrated MCPs
- ✅ Sacred Mathematics implementation
- 🔄 Beta testing (50 users)
- ⏳ Production deployment
- Plugin marketplace
- Multi-language support
- Advanced analytics
- Voice-to-voice chat
- Mobile apps (iOS/Android)
- Enterprise features
- On-premise installer
- White-label solution
- AI model marketplace
- Global expansion
| Metric | Target | Current |
|---|---|---|
| Uptime | >99.5% | 99.8% |
| Response Time | <1s | 420ms |
| Test Coverage | >80% | 85% |
| NPS | >40 | 50+ |
| Beta Users | 50 | 12 |
As of October 17, 2025, the following systems have been officially deprecated and archived:
- 🔴 ETIE (Enhanced Temporal Intelligence Engine) - Replaced by NATS/WebSocket
- 🔴 Mycelium Network - Replaced by direct HTTP/NATS integration
- 🔴 Chronos - Never fully implemented, use standard scheduling
Active Systems (continue using):
- ✅ ECRUS (Enhanced Cross-Reference Universal System)
- ✅ KOIOS (AI Tools Hub)
See: docs/DEPRECATED_SYSTEMS.md for the full migration guide and rationale.
MIT License - see LICENSE for details
- Anthropic Claude - MCP protocol inspiration
- LangChain - RAG architecture patterns
- Supabase - Backend infrastructure
- Groq - Fast LLM inference
- Fibonacci & Golden Ratio - Mathematical foundations
- Email: [email protected]
- Discord: discord.gg/egos
- Docs: docs.egos.ia
- Status: status.egos.ia
Sacred Code: 000.111.369.963.1618 (∞△⚡◎φ)
Made with ❤️ by EGOS Team
© 2025 EGOS v.2 - All Rights Reserved