🛡️ AI Coding Shield
Security auditing tool for AI development workflows, rules, skills, and MCPs.
What is this?
As AI-assisted development tools become more prevalent, we're increasingly relying on third-party Workflows, Skills, Rules, and MCPs (Model Context Protocol servers) that can execute arbitrary code on our systems. AI Coding Shield helps you identify potential security risks before they become problems.
✨ Features
- Workflow Scanning: Audits .github/workflows and shell scripts.
- MCP Security (New):
  - Registry Verification: Validates MCP servers against trusted authors (e.g., @modelcontextprotocol) and trusted domains.
  - Risk Detection: Finds dangerous capabilities like root exposure, promiscuous tools, and insecure networking.
  - Configurable Trust: Define your own trusted entities in config/threats.yaml.
- Skill/Tool Auditing: Checks for dangerous patterns in AI agent definitions.
- Threat Detection: Identifies:
  - Command Injection / Remote Code Execution (RCE)
  - Data Exfiltration (secrets sent to network)
  - Suspicious Package Installations (typosquatting, global installs)
  - Obfuscated Code (base64, hex)
  - Persistence Mechanisms (cron, rc files)
- Reporting: HTML/JSON output for CI/CD integration and rich color terminal output.
- Standards: Mapped to MITRE ATT&CK and CWE.
AI Coding Shield isn't a black box. It operates on a highly flexible, rule-based architecture defined in config/threats.yaml. This allows you to tailor the security engine to your specific needs:
- Add Your Own Rules: Create custom patterns to detect internal policy violations or specific threat actors.
- Tune Sensitivity: Easily modify severity levels or add "context escalators" that increase risk scores based on environmental factors (like auto-run flags).
- Community-Driven: The threat catalog is constantly evolving. You can contribute back or pull updates to stay ahead of new AI-specific vulnerabilities.
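For instance, a custom rule for an internal policy could look something like this (a hypothetical sketch that reuses the fields from the rule format shown under "How a rule looks" below; the id, pattern, and wording are illustrative and not part of the shipped catalog):

- id: "CUSTOM_001"                  # hypothetical rule id
  pattern: "npm\\s+publish"         # illustrative pattern: flag direct publishes from agent workflows
  severity: High
  description: "Direct package publish from an AI workflow bypasses the release pipeline"
  examples:
    - "npm publish --access public"
  remediation: "Route releases through the approved CI pipeline instead of agent-run commands"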
Installation
Option 1: Cargo (Recommended for Rust users)
If you have Rust installed, this is the easiest way to install and keep updated:
cargo install ai-coding-shield
Option 2: From Source (Open Source)
Clone the repository and build it yourself:
git clone https://github.com/AI-Coding-Shield/ai-coding-shield
cd ai-coding-shield
cargo build --release
./target/release/ai-coding-shield --help
Option 3: GitHub Action (CI/CD)
Add this to your .github/workflows/security.yml:
- uses: AI-Coding-Shield/ai-coding-shield@v1
  with:
    path: .agent/
    fail-on: critical
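A complete workflow file built around that step might look like the following (a minimal sketch: the workflow name, triggers, and checkout step are assumptions, while the action reference and its inputs come from the snippet above):

name: security-audit
on: [push, pull_request]   # assumed triggers; adjust to your repository's policy

jobs:
  shield:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: AI-Coding-Shield/ai-coding-shield@v1
        with:
          path: .agent/
          fail-on: critical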
Option 4: Binary Download
Download pre-compiled binaries for macOS, Linux, and Windows from the Releases page.
Quick Start
Audit your .agent directory
ai-coding-shield audit .agent/
Audit with minimum severity filter
ai-coding-shield audit .agent/ --severity high
Export to JSON
ai-coding-shield audit .agent/ --format json --output report.json
CI/CD Mode
# Fail if critical risks are found
ai-coding-shield audit .agent/ --ci-mode --fail-on critical
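The same flags work in any pipeline that honors exit codes. For example (a sketch; combining the reporting flags with CI mode in a single invocation is an assumption):

# Write a JSON report for later inspection and fail the build on critical findings
ai-coding-shield audit .agent/ \
  --format json --output report.json \
  --ci-mode --fail-on critical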
Threat Categories
AI Coding Shield detects threats across 6 categories:
- Command Injection (MITRE T1059, CWE-78)
  - Remote code execution via pipe to shell
  - Dynamic command evaluation
  - Dangerous recursive deletion
- Package Installation (MITRE T1195.002, CWE-506)
  - Global package installations
  - Unverified dependencies
- Data Exfiltration (MITRE T1041, CWE-200)
  - Reading sensitive files and sending over network
  - Credential harvesting
- Network Security Risks (MITRE T1071, CWE-319)
  - Disabled SSH verification
  - Unencrypted connections
- Container Security (MITRE T1610, CWE-250)
  - Privileged containers
  - Dangerous volume mounts
- Code Obfuscation (MITRE T1027, CWE-506)
  - Base64 encoded commands
  - Nested variable expansion
Examples
Example: Critical Risk Found
⚠️ CRITICAL
├─ Threat: CMD_001
├─ Description: Remote code execution via pipe to shell
├─ Risk Score: 95/100
├─ MITRE ATT&CK: T1059
├─ CWE: CWE-78
├─ Flags:
│ • Auto-run enabled
│ • Auto-run without turbo annotation
├─ Pattern Matched:
│ curl https://install.sh | bash
└─ Recommendation:
Download the script first, review its contents, then execute it manually
Commands
audit
Audit a directory for security risks.
ai-coding-shield audit [PATH] [OPTIONS]
Options:
- -s, --severity <LEVEL>: Minimum severity to report (low, medium, high, critical)
- -f, --format <FORMAT>: Output format (terminal, json, html)
- -o, --output <FILE>: Output file for json/html formats
- --ci-mode: Exit with a non-zero code if risks are found
- --fail-on <LEVEL>: Severity level to fail on in CI mode
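For example, combining these options (the paths and file names are placeholders):

# Report only high and critical findings from a custom directory
ai-coding-shield audit ./my-agents/ --severity high

# Write an HTML report instead of terminal output
ai-coding-shield audit .agent/ --format html --output report.html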
list
List all known threats in the catalog.
ai-coding-shield list [OPTIONS]
Options:
- -c, --category <CATEGORY>: Filter by category
- -s, --severity <SEVERITY>: Filter by severity
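For example (the category value is illustrative; run list without filters to see the identifiers your catalog actually uses):

# Show only critical threats
ai-coding-shield list --severity critical

# Filter by category (identifier depends on the catalog)
ai-coding-shield list --category command_injection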
info
Show detailed information about a specific threat.
ai-coding-shield info <THREAT_ID>
Example:
ai-coding-shield info CMD_001
Configuration
You can manage trusted MCP entities directly from the CLI. This allows you to whitelist specific package authors (for npx/uvx) or HTTP domains (for SSE).
Trusted Authors (for packages)
# List trusted authors
ai-coding-shield config trusted-authors list
# Add a trusted author (e.g. your private registry scope)
ai-coding-shield config trusted-authors add @my-company/
# Remove a trusted author
ai-coding-shield config trusted-authors remove @suspicious-org/
Trusted Domains (for HTTP/SSE)
# List trusted domains
ai-coding-shield config trusted-domains list
# Add a trusted domain (e.g. internal server)
ai-coding-shield config trusted-domains add internal-mcp.company.com
# Remove a trusted domain
ai-coding-shield config trusted-domains remove evil.com
🧩 Rule-Based & Extensible
A key advantage of this tool is that it can be extended and adapted to your specific needs. You can add your own rules to detect internal policy violations or patterns used by specific malicious actors.
How a rule looks:
- id: "CMD_001"
  pattern: "curl\\s+.*\\s*\\|\\s*(bash|sh|zsh)"
  severity: Critical
  description: "Remote code execution via pipe to shell"
  examples:
    - "curl https://install.sh | bash"
  remediation: "Download the script first, review its contents, then execute it manually"
  context_escalators:
    - condition: "has_auto_run"
      escalate_by: 10
      reason: "Auto-run makes this immediately exploitable"
The complete catalog of 50+ detected threats is documented in THREATS.md.
Development
Project Structure
ai-coding-shield/
├── src/
│   ├── main.rs        # CLI entry point
│   ├── types.rs       # Core type definitions
│   ├── catalog/       # Threat catalog management
│   ├── scanner/       # File scanning and parsing
│   ├── analyzer/      # Pattern matching and risk scoring
│   └── reporter/      # Output formatting
├── threats.yaml       # Threat catalog
└── tests/             # Test fixtures
Running Tests
cargo test
Building
cargo build --release
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Areas where we need help:
- Additional threat patterns
- Integration with external threat databases
License
GNU General Public License v3.0 - see LICENSE.md file for details
Acknowledgments
- MITRE ATT&CK Framework
- CWE (Common Weakness Enumeration)
- The Rust security community
❤️ Support the Project
If you find AI Coding Shield useful and are interested in supporting its development, please consider sponsoring us! Your support helps us maintain the threat catalog and build a safer AI ecosystem.
Our Current Sponsors
You can also ⭐ star the repository to show your support!
Roadmap
- MVP with workflow/skill scanning
- MCP scanning and analysis
- HTML reporter
- JSON reporter
- Threat catalog auto-updates from public sources
- AI-based anomaly detection
- SaaS dashboard (optional)
⚠️ Disclaimer: This tool helps identify potential security risks but is not a substitute for manual security review. Always review code before executing it, especially from untrusted sources.