🧩 A structured, bilingual AI Security learning system —
built to connect AI Safety, Risk Management, and Cybersecurity into one coherent framework.
- 🧱 Obsidian 1.9+ for structure and MathJax rendering
- 🤖 ChatGPT (GPT-5) for assisted content generation
- 🧩 Markdown + MathJax for explainable documentation
- 🔐 Security Frameworks: NIST AI RMF, ISO 42001, OWASP ML Top-10, MITRE ATLAS
This repository is a personal AI Security learning framework, designed to integrate topics like
AI Safety, Model Risk, Adversarial ML, Governance, PKI, and Cloud Security
into a single, structured, and explainable system.
It’s built as a research-grade learning environment — not a simple note collection,
but a living vault that evolves as I study and experiment.
- 📚 To learn AI Security deeply and systematically (from trust foundations to GenAI threats)
- 🤖 To connect Cloud Security, PKI, and AI Risk Management into a unified perspective
- 🧩 To create bilingual educational material — English for clarity, Hungarian for reflection
- ⚙️ To experiment with AI-assisted automation, Obsidian workflows, and self-improving vaults
00_Foundations/ → Core AI Security concepts, trust, lifecycle, learning path
01_Attack_Taxonomy_and_Threats/ → Threat surfaces, MITRE ATLAS–style taxonomy, attack classes
02_Defenses_and_Mitigations/ → Preventive controls, hardening, robustness, privacy defenses
03_Attack_Detection_and_Response/ → Monitoring, anomaly detection, incident handling for AI systems
04_Secure_Deployment_and_Governance/ → Model release, signing, audit logging, compliance & governance
05_AI_Risk_Management_and_Assurance/ → NIST AI RMF, ISO 42001, risk registers, assurance & controls
06_AI_Safety_and_Ethical_Assurance/ → Fairness, explainability, human-in-the-loop, accountability
07_AI_Security_Automation_and_Metrics/→ Security-as-Code, Policy-as-Code, telemetry, KPIs, automation
08_Generative_AI_Security/ → RAG security, prompt injection, watermarking, GenAI supply chain
99_Glossary/ → Cross-linked core concepts (EN/HU glossary)
Each section is bilingual (🇬🇧 English + 🇭🇺 Hungarian) and formatted for
Obsidian 1.9+ MathJax — including formulas, lineage links, and visual cues.
This Vault aligns with:
-
NIST AI RMF 1.0
-
ISO/IEC 42001:2023
-
EU AI Act – Risk-based Governance
-
MITRE ATLAS & OWASP ML Top 10
-
Zero-Trust for AI Systems
This project contains only educational and research content.
No proprietary, confidential, or organization-specific data is included.
All examples and architectures are theoretical or publicly documented sources.
I’m currently transitioning from PKI & Cloud Security engineering
to a full AI Security & Governance specialization.
Planned progression:
-
🧩 Complete theory (AI Security Vault — current stage)
-
⚙️ Build lab environments (adversarial testing, model validation, RAG pipelines)
-
🧠 Apply for AI Security Engineer / Architect roles
-
🧾 Contribute to open-source frameworks and research
-
Add practical labs (adversarial attack demos, prompt injection tests)
-
Expand Automation & Metrics with CI/CD guardrails
-
Publish bilingual mini-guides for explainability & fairness
Released under the MIT License. See LICENSE file for details.
Copyright (c) 2025 Tibor Kalmár
Supplemental Notice — AI-Assisted Content Disclaimer This repository includes material developed with assistance from large language models (LLMs). All content has been reviewed and edited by the author, but may still contain factual or linguistic inaccuracies. The material is provided for educational and research purposes only and should not be relied upon as professional advice.
Tibor Kalmár — Cybersecurity Engineer (PKI & Cloud Security)
🔐 Transitioning into AI Security Architecture
🌐 GitHub: Nameless8243
💼 LinkedIn: Tibor Kalmár