Thanks to visit codestin.com
Credit goes to github.com

Skip to content

Nameless8243/AI_Security

Repository files navigation

🧠 AI Security Research Vault

Cybersecurity Engineer → AI Security Architect

LinkedIn GitHub Obsidian License: MIT


🧩 A structured, bilingual AI Security learning system —
built to connect AI Safety, Risk Management, and Cybersecurity into one coherent framework.


⚙️ Powered By

  • 🧱 Obsidian 1.9+ for structure and MathJax rendering
  • 🤖 ChatGPT (GPT-5) for assisted content generation
  • 🧩 Markdown + MathJax for explainable documentation
  • 🔐 Security Frameworks: NIST AI RMF, ISO 42001, OWASP ML Top-10, MITRE ATLAS

🌍 Overview

This repository is a personal AI Security learning framework, designed to integrate topics like
AI Safety, Model Risk, Adversarial ML, Governance, PKI, and Cloud Security
into a single, structured, and explainable system.

It’s built as a research-grade learning environment — not a simple note collection,
but a living vault that evolves as I study and experiment.


💡 Purpose

  • 📚 To learn AI Security deeply and systematically (from trust foundations to GenAI threats)
  • 🤖 To connect Cloud Security, PKI, and AI Risk Management into a unified perspective
  • 🧩 To create bilingual educational material — English for clarity, Hungarian for reflection
  • ⚙️ To experiment with AI-assisted automation, Obsidian workflows, and self-improving vaults

🧭 Structure

00_Foundations/                       → Core AI Security concepts, trust, lifecycle, learning path
01_Attack_Taxonomy_and_Threats/       → Threat surfaces, MITRE ATLAS–style taxonomy, attack classes
02_Defenses_and_Mitigations/          → Preventive controls, hardening, robustness, privacy defenses
03_Attack_Detection_and_Response/     → Monitoring, anomaly detection, incident handling for AI systems
04_Secure_Deployment_and_Governance/  → Model release, signing, audit logging, compliance & governance
05_AI_Risk_Management_and_Assurance/  → NIST AI RMF, ISO 42001, risk registers, assurance & controls
06_AI_Safety_and_Ethical_Assurance/   → Fairness, explainability, human-in-the-loop, accountability
07_AI_Security_Automation_and_Metrics/→ Security-as-Code, Policy-as-Code, telemetry, KPIs, automation
08_Generative_AI_Security/            → RAG security, prompt injection, watermarking, GenAI supply chain
99_Glossary/                          → Cross-linked core concepts (EN/HU glossary)

Each section is bilingual (🇬🇧 English + 🇭🇺 Hungarian) and formatted for
Obsidian 1.9+ MathJax — including formulas, lineage links, and visual cues.


🔗 Standards and Frameworks

This Vault aligns with:

  • NIST AI RMF 1.0

  • ISO/IEC 42001:2023

  • EU AI Act – Risk-based Governance

  • MITRE ATLAS & OWASP ML Top 10

  • Zero-Trust for AI Systems


⚖️ Ethics & Compliance

This project contains only educational and research content.
No proprietary, confidential, or organization-specific data is included.
All examples and architectures are theoretical or publicly documented sources.


📈 Learning Roadmap

I’m currently transitioning from PKI & Cloud Security engineering
to a full AI Security & Governance specialization.

Planned progression:

  1. 🧩 Complete theory (AI Security Vault — current stage)

  2. ⚙️ Build lab environments (adversarial testing, model validation, RAG pipelines)

  3. 🧠 Apply for AI Security Engineer / Architect roles

  4. 🧾 Contribute to open-source frameworks and research


🚀 Next Steps

  • Add practical labs (adversarial attack demos, prompt injection tests)

  • Expand Automation & Metrics with CI/CD guardrails

  • Publish bilingual mini-guides for explainability & fairness


🏁 License

Released under the MIT License. See LICENSE file for details.

Copyright (c) 2025 Tibor Kalmár


Supplemental Notice — AI-Assisted Content Disclaimer This repository includes material developed with assistance from large language models (LLMs). All content has been reviewed and edited by the author, but may still contain factual or linguistic inaccuracies. The material is provided for educational and research purposes only and should not be relied upon as professional advice.


✨ Author

Tibor Kalmár — Cybersecurity Engineer (PKI & Cloud Security)
🔐 Transitioning into AI Security Architecture
🌐 GitHub: Nameless8243
💼 LinkedIn: Tibor Kalmár

About

No description, website, or topics provided.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published