Course Syllabus: AI Security Issues
Course Code: 02-24-03306
Course Title: AI Security Issues
Instructor: Dr. Mahmoud Ghanem
Email: [email protected]
Credit Hours: 3
Course Description
This course explores the intersection of artificial intelligence (AI) and cybersecurity,
focusing on the unique security challenges posed by AI systems. Students will learn
about cybersecurity fundamentals, the impact of AI on cybersecurity, vulnerabilities in AI
systems (e.g., trapdoors and adversarial attacks), and the use of AI for smart attacks. The
course combines theoretical knowledge with hands-on labs and case studies to prepare
students for real-world AI security challenges.
Course Objectives
By the end of this course, students will:
1. Understand the fundamentals of cybersecurity and its relevance to AI systems.
2. Analyze the effects of AI on cybersecurity, including both defensive and offensive
applications.
3. Identify vulnerabilities in AI systems, such as trapdoors and adversarial attacks.
4. Explore how AI can be used to conduct smart attacks and develop
countermeasures.
5. Apply AI security concepts to real-world scenarios through projects and case
studies.
Prerequisites
02-24-00109: Introduction to Artificial Intelligence
Textbooks and Resources
1. Recommended Textbook:
   - "AI in Cybersecurity" by Leslie F. Sikos.
2. Additional Resources:
   - "Adversarial Machine Learning" by Yevgeniy Vorobeychik and Murat Kantarcioglu.
   - Research papers and case studies provided by the instructor.
   - Online resources from cybersecurity organizations (e.g., NIST, OWASP).
Course Outline
Week 1: Introduction to AI Security Issues
Overview of AI and cybersecurity
Importance of securing AI systems
Course objectives and structure
Week 2: Cybersecurity Fundamentals
Key concepts: Confidentiality, Integrity, Availability (CIA triad)
Common threats: Malware, phishing, DDoS attacks
Security frameworks and best practices
Week 3: Effect of AI on Cybersecurity Issues
AI for cybersecurity: Threat detection, anomaly detection, and automation
AI as a threat: Autonomous attacks, deepfakes, and social engineering
Case study: AI-powered cybersecurity tools
Week 4: Trapdoors and Backdoors in AI Systems
Understanding trapdoors and backdoors in machine learning models
Real-world examples of malicious model manipulation
Hands-on lab: Detecting backdoors in neural networks
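As a rough preview of this lab, the sketch below illustrates one widely cited backdoor-detection idea, activation clustering: cluster the hidden-layer activations of samples from a single class and flag an unusually small cluster as potentially poisoned. Everything here (the synthetic activations, sizes, and cluster count) is an illustrative assumption, not course-provided code.

```python
# Activation-clustering sketch for backdoor detection on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend these are penultimate-layer activations for samples labeled "class 7":
# 95 clean samples plus 5 poisoned samples whose activations sit apart.
clean = rng.normal(loc=0.0, scale=1.0, size=(95, 32))
poisoned = rng.normal(loc=4.0, scale=0.5, size=(5, 32))
activations = np.vstack([clean, poisoned])

# Split the class into two clusters and inspect their relative sizes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(activations)
sizes = np.bincount(labels)
suspicious_fraction = sizes.min() / sizes.sum()

print("cluster sizes:", sizes)
print(f"smaller-cluster fraction: {suspicious_fraction:.2f} "
      "(a very small fraction can indicate poisoned samples)")
```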
Week 5: Adversarial Attacks on AI Systems
Types of adversarial attacks: Evasion, poisoning, and model extraction
Techniques for generating adversarial examples
Hands-on lab: Crafting adversarial attacks on image classifiers
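For orientation before this lab, here is a minimal Fast Gradient Sign Method (FGSM) sketch in PyTorch; the toy model, input shape, and epsilon value are assumptions chosen only to keep the example self-contained.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "image" classifier: flattens a 1x8x8 input into 10 class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 8, 8)          # stand-in for a normalized image
y = torch.tensor([3])               # stand-in for the true label
epsilon = 0.05                      # perturbation budget (assumed)

x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

# FGSM step: nudge each pixel by epsilon in the sign of the gradient,
# then clamp back to the valid input range.
with torch.no_grad():
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```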
Week 6: Defending Against Adversarial Attacks
Robust machine learning: Adversarial training (sketched below), defensive distillation
Detection of adversarial inputs
Case study: Defending AI systems in critical infrastructure
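The adversarial-training idea from this week can be sketched as a small training loop that augments each batch with FGSM-perturbed copies; the model, data, and hyperparameters below are toy assumptions, not a prescribed defense recipe.

```python
# Compact adversarial-training loop: train on clean and FGSM-perturbed batches.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.05

for step in range(100):
    x = torch.rand(32, 1, 8, 8)          # toy batch of "images"
    y = torch.randint(0, 10, (32,))      # toy labels

    # Craft FGSM adversarial copies of the batch against the current model.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Take a gradient step on clean and adversarial examples together.
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```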
Week 7: Smart Attacks Using AI
AI-powered phishing and social engineering
Autonomous malware and botnets
Case study: AI-driven cyberattacks in the wild
Week 8: AI in Cyber Warfare
Role of AI in state-sponsored attacks
Ethical and legal implications of AI in cyber warfare
Case study: Stuxnet and AI-enhanced cyber weapons
Week 9: Privacy Issues in AI Systems
Data privacy and AI: Risks and challenges
Differential privacy and federated learning (see the sketch below)
Case study: Privacy breaches in AI-powered applications
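Differential privacy, one of this week's topics, is often introduced through the Laplace mechanism: add noise calibrated to a query's sensitivity and privacy budget. The sketch below uses a made-up counting query and an assumed epsilon purely for illustration.

```python
# Laplace-mechanism sketch: release a noisy count so one individual's
# presence has bounded influence on the output.
import numpy as np

rng = np.random.default_rng(0)

records = rng.integers(0, 2, size=1000)   # 0/1 attribute per individual
true_count = records.sum()                # sensitive query: how many have the attribute

epsilon = 0.5                             # privacy budget (assumed)
sensitivity = 1                           # adding/removing one person changes the count by at most 1

# Add Laplace noise with scale = sensitivity / epsilon.
noisy_count = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

print("true count: ", int(true_count))
print("noisy count:", round(noisy_count, 1))
```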
Week 10: Securing AI Development Lifecycle
Secure AI development practices
Testing and validation of AI models
Hands-on lab: Implementing secure AI pipelines
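One concrete secure-pipeline practice that may come up in this lab is verifying model artifacts by cryptographic hash before deployment; the file names and workflow below are hypothetical.

```python
# Record a SHA-256 digest of a model artifact at training time and verify it
# again before deployment, so tampering in between can be detected.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a saved model artifact and the digest recorded at training time.
artifact = Path("model.bin")                     # hypothetical artifact name
artifact.write_bytes(b"pretend these are serialized model weights")
recorded_digest = sha256_of(artifact)

# At deployment time, recompute the digest and refuse to load on mismatch.
if sha256_of(artifact) != recorded_digest:
    raise RuntimeError("model artifact hash mismatch: possible tampering")
print("model artifact verified:", recorded_digest[:16], "...")
```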
Week 11: AI for Cybersecurity Defense
AI-driven threat intelligence and response
Automated incident response systems
Case study: AI in enterprise cybersecurity
Week 12: Emerging Trends in AI Security
Explainable AI (XAI) for security
AI in quantum computing and post-quantum cryptography
Future challenges and opportunities
Week 13: Final Project Presentations
Students present their final projects
Peer and instructor feedback
Assessment and Grading
Assignments, Final Project and Labs: 30%
Midterm Exam: 20%
Final Exam: 50%
Course Policies
1. Attendance: Regular attendance is expected. More than 3 unexcused absences
may result in a grade penalty.
2. Late Submissions: Assignments submitted late will incur a 10% penalty per day.
3. Academic Integrity: Plagiarism or cheating will result in disciplinary action.
4. Communication: Use the course email for all communications. Expect a response
within 24-48 hours.
Final Project
Students will work individually or in teams of 2-3 to research and present a topic related
to AI security. The project will include:
A proposal (Week 3)
A mid-project review (Week 7)
A final presentation and report (Week 13)
This syllabus is subject to change. Updates will be communicated via the course website and email.