Course Outline

Introduction to AI Threat Modeling

  • What makes AI systems vulnerable?
  • AI attack surface vs traditional systems
  • Key attack vectors: data, model, output, and interface layers

Adversarial Attacks on AI Models

  • Understanding adversarial examples and perturbation techniques
  • White-box vs black-box attacks
  • FGSM, PGD, and DeepFool methods
  • Visualizing and crafting adversarial samples
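The gradient-sign idea behind FGSM can be sketched in a few lines on a toy linear model (the model, weights, and loss below are illustrative, not taken from the course materials):

```python
import numpy as np

# Toy linear "model": f(x) = w.x + b, with squared-error loss.
# FGSM moves each input feature one epsilon-sized step in the
# direction that increases the loss: x_adv = x + eps * sign(dL/dx).

def fgsm_perturb(x, w, b, y_true, epsilon=0.1):
    """Return an FGSM-perturbed copy of x for this toy model."""
    pred = np.dot(w, x) + b
    grad_x = (pred - y_true) * w  # dL/dx for L = 0.5 * (pred - y)^2
    return x + epsilon * np.sign(grad_x)

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
x_adv = fgsm_perturb(x, w, b=0.0, y_true=0.0, epsilon=0.1)
# Every feature of x_adv differs from x by exactly +/- epsilon.
```

PGD iterates this same step several times with a projection back into the epsilon-ball, which is why the two methods are usually taught together.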

Model Inversion and Privacy Leakage

  • Inferring training data from model outputs
  • Membership inference attacks
  • Privacy risks in classification and generative models
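The intuition behind membership inference can be sketched directly: examples seen during training tend to incur lower loss than held-out examples, so even a simple loss threshold acts as a weak membership classifier (the loss values and threshold below are illustrative):

```python
import numpy as np

# Members of the training set usually have lower loss than non-members,
# so thresholding per-example loss is the simplest membership attack.

def membership_guess(losses, threshold):
    """Guess 'member' (True) wherever the loss is below the threshold."""
    return losses < threshold

member_losses = np.array([0.05, 0.10, 0.08])     # examples seen in training
nonmember_losses = np.array([0.90, 1.20, 0.75])  # held-out examples

all_losses = np.concatenate([member_losses, nonmember_losses])
guesses = membership_guess(all_losses, threshold=0.5)
# guesses: True for the three members, False for the three non-members.
```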

Data Poisoning and Backdoor Injections

  • How poisoned data influences model behavior
  • Trigger-based backdoors and Trojan attacks
  • Detection and sanitization strategies
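A trigger-based backdoor can be sketched as follows: selected training images are stamped with a small patch and relabeled to the attacker's target class, so a model trained on the poisoned set associates the patch with that class (the patch size, image shapes, and labels below are illustrative):

```python
import numpy as np

# Stamp a small white patch into the corner of selected training images
# and relabel them to the attacker's target class; a model trained on
# this data learns to associate the patch with the target label.

def poison(images, labels, idx, target_label, patch=3):
    """Return copies of images/labels with a trigger patch and flipped labels."""
    images, labels = images.copy(), labels.copy()
    images[idx, :patch, :patch] = 1.0  # white square trigger in the corner
    labels[idx] = target_label
    return images, labels

imgs = np.zeros((4, 8, 8))   # four blank 8x8 grayscale images
lbls = np.array([0, 1, 2, 3])
p_imgs, p_lbls = poison(imgs, lbls, idx=[0, 2], target_label=7)
```

Sanitization strategies work in the other direction: scanning a dataset or a trained model for inputs whose small, fixed perturbation reliably flips predictions to one class.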

Robustness and Defense Techniques

  • Adversarial training and data augmentation
  • Gradient masking and input preprocessing
  • Model smoothing and regularization techniques
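Adversarial training, the first defense listed above, can be sketched on a toy linear model: each step crafts an FGSM-style perturbation of the batch and updates the weights on the perturbed inputs rather than the clean ones (all names, the model, and the hyperparameters are illustrative):

```python
import numpy as np

# Each step crafts an FGSM-style perturbation of the inputs and then
# takes the gradient step on the perturbed batch, so the model is fit
# to worst-case (within epsilon) versions of its training data.

def adv_train_step(w, x, y, lr=0.1, epsilon=0.05):
    pred = x @ w
    grad_x = np.outer(pred - y, w)         # dL/dx per example
    x_adv = x + epsilon * np.sign(grad_x)  # FGSM-perturbed batch
    pred_adv = x_adv @ w
    grad_w = x_adv.T @ (pred_adv - y) / len(y)
    return w - lr * grad_w

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 2))
y = x @ np.array([1.0, -2.0])  # noiseless linear targets
w = np.zeros(2)
for _ in range(200):
    w = adv_train_step(w, x, y)
clean_loss = np.mean((x @ w - y) ** 2)
```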

Privacy-Preserving AI Defenses

  • Introduction to differential privacy
  • Noise injection and privacy budgets
  • Federated learning and secure aggregation
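The noise-injection idea behind differential privacy can be sketched with the Laplace mechanism: a count query (sensitivity 1) is released with Laplace noise of scale 1/epsilon (the data and predicate below are illustrative):

```python
import numpy as np

# Laplace mechanism: release a count (sensitivity 1) plus Laplace noise
# of scale 1/epsilon. A smaller epsilon (tighter privacy budget) means
# larger noise; repeated queries spend the budget additively.

def laplace_count(data, predicate, epsilon, rng):
    """Epsilon-DP noisy count of items satisfying the predicate."""
    true_count = sum(1 for item in data if predicate(item))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(42)
data = [3, 7, 1, 9, 4]
noisy = laplace_count(data, lambda v: v > 4, epsilon=1.0, rng=rng)
# True count is 2; the released value is 2 plus Laplace(0, 1) noise.
```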

AI Security in Practice

  • Threat-aware model evaluation and deployment
  • Using ART (Adversarial Robustness Toolbox) in applied settings
  • Industry case studies: real-world breaches and mitigations

Summary and Next Steps

Requirements

  • An understanding of machine learning workflows and model training
  • Experience with Python and common ML frameworks such as PyTorch or TensorFlow
  • Familiarity with basic security or threat modeling concepts is helpful

Audience

  • Machine learning engineers
  • Cybersecurity analysts
  • AI researchers and model validation teams

14 Hours


Provisional Upcoming Courses (Require 5+ participants)

Course Categories