The world's first comprehensive AI defense platform that proves vulnerabilities with live attacks and fixes them automatically.
AEGIS (Advanced Electronic Guard & Intelligence System) uses Google Gemini 3 Pro to perform deep static and dynamic analysis of neural networks across multiple frameworks and languages (PyTorch, TensorFlow, Keras, Go).
🎥 Watch Demo Video
- Red Team Mode: Generates working exploit code (FGSM attacks, DoS vectors); see the FGSM sketch after this feature list
- Blue Team Mode: Generates remediated code with security patches
- Code Diff Viewer: Side-by-side comparison showing exact fixes
- Only tool that generates both the attack AND the fix
- Real-time FGSM attack visualization
- Watch model confidence degrade (98% → 2%)
- Animated attack success rate (87%)
- Visual proof of vulnerabilities
- AI improving AI through reinforcement learning
- Live episode streaming (Episode 1: +10 reward, Episode 2: +15...)
- Security improvements: 50/100 → 85/100 in 5 iterations
- Impossible before Gemini 3 Pro
- Multi-Language CI/CD Agents: Python, Node.js, Go
- EU AI Act Compliance Certificates: Downloadable PDF reports
- Threat Intelligence Feed: Real-time security statistics
- System Architecture Visualization: Interactive flowcharts
- Vulnerability Detection: Hardcoded shapes, adversarial susceptibility, missing regularization
- Architecture Analysis: Layer-by-layer breakdown, bottleneck identification
- Performance Metrics: Parameters, FLOPs, memory usage, efficiency
- Explainability: Decision process, feature importance, interactive Q&A
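To make the Red Team and attack-simulation bullets above concrete, here is a minimal FGSM sketch, assuming a PyTorch image classifier: it crafts the adversarial inputs and measures the confidence drop and attack success rate that the simulator visualizes. The function names and the epsilon value are illustrative, not actual AEGIS output.

```python
# Illustrative sketch only: an FGSM attack plus the confidence-drop and
# success-rate measurements described above. Assumes a PyTorch classifier;
# names and epsilon are hypothetical, not generated AEGIS code.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.1):
    """Craft adversarial examples with the Fast Gradient Sign Method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss the most.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()

@torch.no_grad()
def attack_metrics(model, images, labels, adv_images):
    """Mean true-class confidence before/after the attack, and success rate."""
    clean_probs = F.softmax(model(images), dim=1)
    adv_probs = F.softmax(model(adv_images), dim=1)
    clean_conf = clean_probs.gather(1, labels.unsqueeze(1)).mean().item()
    adv_conf = adv_probs.gather(1, labels.unsqueeze(1)).mean().item()
    success_rate = (adv_probs.argmax(dim=1) != labels).float().mean().item()
    return clean_conf, adv_conf, success_rate
```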
# 1. Fork the repository on GitHub (click Fork button)
# 2. Clone your fork
git clone https://github.com/YOUR_USERNAME/aegis-ai-defence.git
cd aegis-ai-defence
# 3. Set your Gemini API key
echo "GEMINI_API_KEY=your_api_key_here" > .env
# 4. Run with Docker
docker-compose up -d
# 5. Access AEGIS
open http://localhost:3000

# Install dependencies
npm install
# Set API key
export GEMINI_API_KEY=your_api_key_here
# Run development server
npm run dev
# Access AEGIS
open http://localhost:3000

# Install dependencies
pip install google-generativeai
# Run audit
python cli/python/aegis_audit.py \
--file your_model.py \
--api-key YOUR_GEMINI_KEY \
--threshold 80

aegis-ai-defence/
├── components/                  # React UI components
│   ├── AnalysisPanel.tsx        # Main dashboard
│   ├── AttackSimulator.tsx      # Live attack visualization
│   ├── ActiveDefense.tsx        # Red/Blue team modes
│   ├── ComplianceCertificate.tsx
│   └── ...
├── services/
│   └── geminiService.ts         # Gemini 3 Pro integration
├── cli/
│   ├── python/                  # Python CLI auditor
│   ├── nodejs/                  # Node.js CLI auditor
│   └── go/                      # Go CLI auditor
├── docs/                        # Documentation
├── docker-compose.yml           # Docker setup
├── Dockerfile
└── README.md
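The CLI auditors under cli/ are what make the audit usable as a CI gate: they score the model and fail the run when the score is below --threshold. Below is a minimal sketch of that pattern, assuming the same flags as the quick start above; get_security_score is a hypothetical stub, not the real Gemini-backed scoring in cli/python/aegis_audit.py.

```python
# Hypothetical sketch of a threshold-gated audit CLI, not the actual
# cli/python/aegis_audit.py. The scoring call is stubbed out.
import argparse
import sys

def get_security_score(source: str, api_key: str) -> int:
    """Placeholder for the Gemini-backed analysis that returns a 0-100 score."""
    raise NotImplementedError("wire this up to the real analysis backend")

def main() -> int:
    parser = argparse.ArgumentParser(description="Audit a model file for security issues")
    parser.add_argument("--file", required=True, help="Path to the model source file")
    parser.add_argument("--api-key", required=True, help="Gemini API key")
    parser.add_argument("--threshold", type=int, default=80, help="Minimum passing score")
    args = parser.parse_args()

    with open(args.file, "r", encoding="utf-8") as f:
        source = f.read()

    score = get_security_score(source, args.api_key)
    print(f"Security score: {score}/100 (threshold {args.threshold})")
    # A non-zero exit code is what fails the CI job when the model scores too low.
    return 0 if score >= args.threshold else 1

if __name__ == "__main__":
    sys.exit(main())
```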
- Healthcare: Audit diagnostic models for bias and explainability before deployment. Generate FDA/CE compliance documentation.
- Finance: Validate credit scoring models for fairness. Detect adversarial manipulation of fraud detection systems.
- Automotive: Security audit for safety-critical AI. Prevent adversarial attacks on perception systems.
- Recruitment: Ensure recruitment AI is unbiased. Explain hiring decisions for legal compliance.
- Social platforms: Verify fairness across user groups. Detect manipulation attempts.
name: AI Security Gate
on: [push, pull_request]
jobs:
  aegis-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run AEGIS Audit
        run: |
          python cli/python/aegis_audit.py \
            --file ./models/net.py \
            --api-key ${{ secrets.GEMINI_KEY }} \
            --threshold 80

aegis-audit:
  script:
    - python cli/python/aegis_audit.py --file model.py --api-key $GEMINI_KEY --threshold 80
  only:
    - main

Input: SimpleCNN (MNIST classifier)
Output:
- Security Score: 50/100
- Vulnerabilities: 4 (1 Critical, 1 High, 2 Medium)
- ❌ CRITICAL: Hardcoded input shape (DoS vulnerability); an illustrative vulnerable pattern is sketched below
- ⚠️ HIGH: Adversarial susceptibility (87%)
- ⚠️ MEDIUM: Missing regularization
- ⚠️ MEDIUM: Parameter bottleneck (94% in fc1 layer)
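For context, a hardcoded-input-shape finding like the one above typically corresponds to a pattern such as the following, where the classifier head assumes one fixed resolution and any other input crashes the model at inference time. This is an illustrative PyTorch example, not the actual SimpleCNN from the report.

```python
# Illustrative vulnerable pattern (not the audited SimpleCNN): the flatten/fc1
# sizes assume a fixed 28x28 input, so any other resolution raises a shape error.
import torch.nn as nn

class FragileCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        # Hardcoded 28x28 assumption: 32 channels * 28 * 28 features.
        self.fc1 = nn.Linear(32 * 28 * 28, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.conv(x).relu()
        x = x.flatten(1)  # Shape mismatch here for any non-28x28 input
        return self.fc2(self.fc1(x).relu())
```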
Generated:
- ✅ exploit_fgsm.py - Working FGSM attack script
- ✅ secure_model.py - Fixed model with patches (a remediation sketch follows this list)
- ✅ compliance_certificate.pdf - EU AI Act report
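And here is a minimal sketch of the kind of remediation secure_model.py aims for: adaptive pooling removes the fixed-shape assumption, and dropout adds the missing regularization. Again, this is illustrative PyTorch, not the generated file itself, and the actual patch may differ.

```python
# Illustrative remediation sketch (not the generated secure_model.py):
# adaptive pooling removes the hardcoded-shape DoS vector, dropout adds
# regularization, and the smaller head reduces the fc1 parameter bottleneck.
import torch.nn as nn

class HardenedCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        # Any input resolution is reduced to a fixed 4x4 grid before the head.
        self.pool = nn.AdaptiveAvgPool2d((4, 4))
        self.dropout = nn.Dropout(p=0.5)
        self.fc1 = nn.Linear(32 * 4 * 4, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.pool(self.conv(x).relu())
        x = self.dropout(x.flatten(1))
        return self.fc2(self.fc1(x).relu())
```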
- Frontend: React 19, TypeScript, Tailwind CSS
- AI Engine: Google Gemini 3 Pro
- Visualization: Recharts
- Video Generation: Google Veo 3.1
- Deployment: Docker, Vite
- Testing: Vitest, React Testing Library
- Code Upload: Paste or upload neural network code
- Gemini Analysis: Streaming analysis with structured JSON schemas (see the sketch after this list)
- Vulnerability Detection: Identifies security flaws with severity scoring
- Active Defense: Generates exploit code + remediated code
- RL Optimization: Iteratively improves model security
- Compliance: Generates EU AI Act certificates
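The shape of this pipeline is easiest to see as code. Below is a minimal sketch of the analysis step using the google-generativeai package from the CLI quick start, asking for a JSON response, followed by a greedy accept-if-better loop that stands in for the RL optimization step. The prompt wording, JSON fields, model id, and loop are assumptions for illustration, not the project's actual geminiService.ts logic.

```python
# Illustrative sketch of the analysis + iteration steps; prompt, schema fields,
# and model id are assumptions, not the project's actual service code.
import json
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model id

def analyze_model_code(source: str) -> dict:
    """Ask Gemini for a structured vulnerability report on the given source."""
    prompt = (
        "You are a neural-network security auditor. Analyze the following model "
        "code and respond with JSON of the form "
        '{"security_score": int, "vulnerabilities": [{"severity": str, "title": str}]}.\n\n'
        + source
    )
    response = model.generate_content(
        prompt,
        generation_config=genai.GenerationConfig(response_mime_type="application/json"),
    )
    return json.loads(response.text)

def harden(source: str, iterations: int = 5, target: int = 80) -> str:
    """Greedy improvement loop: keep a patch only if it raises the score."""
    best, best_score = source, analyze_model_code(source)["security_score"]
    for _ in range(iterations):
        if best_score >= target:
            break
        patched = model.generate_content(
            "Rewrite this model to fix its security issues. Return only code:\n\n" + best
        ).text
        score = analyze_model_code(patched)["security_score"]
        if score > best_score:  # positive reward: accept the patch
            best, best_score = patched, score
    return best
```

The accept-if-better loop is a deliberate simplification; it only illustrates the iterate-score-keep shape of the optimization, not the actual reward scheme.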
python cli/python/aegis_audit.py --file model.py --api-key KEY --threshold 80

node cli/nodejs/aegis-audit.js --file model.js --api-key KEY --threshold 80

go run cli/go/aegis-audit.go --file model.go --api-key KEY --threshold 80

We welcome contributions! Please see CONTRIBUTING.md for guidelines.
# Fork the repository
# Create a feature branch
git checkout -b feature/amazing-feature
# Commit your changes
git commit -m 'Add amazing feature'
# Push to the branch
git push origin feature/amazing-feature
# Open a Pull Request

This project is licensed under the MIT License - see the LICENSE file for details.
- Demo Video: https://youtu.be/QBqkJdfmxhk
- Documentation: docs/
- Issues: GitHub Issues
- Author: Pralay Ghosh
- Email: [email protected]
- Google DeepMind for Gemini 3 Pro API
- Google Veo for video generation capabilities
- Open Source Community for amazing tools and libraries
AEGIS is a security auditing tool. The exploit code generated is for educational and security testing purposes only. Always obtain proper authorization before testing systems you don't own.
Making AI Safe, Transparent, and Compliant - One Model at a Time
⭐ Star this repo if you find it useful!