Description
Major technological advances over the past decade have driven the rapid, widespread deployment of artificial intelligence, which is now ubiquitous. This has introduced new cybersecurity challenges, and many of the resulting technical risks remain poorly understood and inadequately managed by organizations.
This training course, designed for a technical audience (developers, pentesters, data scientists, etc.), provides the essential skills to identify, exploit, and remediate the most common vulnerabilities in AI-based systems.
Who is this training for?
- Developers, technical leads, and architects working on artificial intelligence projects.
- Cybersecurity consultants looking to broaden or strengthen their expertise in securing AI systems.
- Prerequisite: general knowledge of information systems.
Training objectives
- Identify the most common vulnerabilities in AI-based systems.
- Exploit these vulnerabilities in a controlled environment to understand their impact.
- Remediate them by applying security best practices throughout the AI development lifecycle.
Training program
Introduction to AI System Security
- Differences between traditional software security and AI security.
- Attack surface of an AI-based system.
- Real-world examples of model attacks (e.g., poisoning, model extraction, adversarial inputs); a label-flipping sketch follows this module.
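As a concrete illustration of the poisoning attacks mentioned above, here is a minimal sketch (not part of the official course material) using scikit-learn: flipping a fraction of the training labels is one of the simplest poisoning attacks, and it measurably degrades the model. The dataset, model, and 30% flip rate are illustrative assumptions.

```python
# Minimal label-flipping poisoning demo (illustrative only).
# Requires scikit-learn and numpy; dataset and flip rate are arbitrary choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# Poison 30% of the training set by flipping the binary labels.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```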
Key Threats and Vulnerabilities
- Types of attacks specific to AI.
- Vulnerabilities in supervised and unsupervised learning models.
- Training data security and confidentiality; a membership-inference sketch follows this module.
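To ground the training-data confidentiality item, the following hedged sketch shows the intuition behind membership inference: an overfit model tends to be noticeably more confident on the records it was trained on, and that gap alone can leak whether a record was in the training set. The model and dataset are arbitrary illustrations.

```python
# Naive membership-inference signal (illustrative only): compare the model's
# confidence on training members versus unseen records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

# Deliberately overfit: deep, unpruned trees memorize the training data.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=1)
model.fit(X_tr, y_tr)

conf_members = model.predict_proba(X_tr).max(axis=1).mean()
conf_unseen = model.predict_proba(X_te).max(axis=1).mean()
print(f"mean confidence on training members: {conf_members:.3f}")
print(f"mean confidence on non-members:      {conf_unseen:.3f}")
```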
Regulatory and Ethical Frameworks
- GDPR compliance and data privacy.
- EU Artificial Intelligence Act (AI Act).
- Concepts of fairness, explainability, and transparency.
Securing the AI Development Lifecycle (Secure MLOps)
- Integrating security into CI/CD pipelines for AI models.
- Continuous monitoring and model updates.
- Logging and traceability of predictions; a logging sketch follows this module.
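The program does not mandate a particular logging format. As one possible shape for prediction traceability, the sketch below emits one structured JSON record per inference, tying each prediction to a model version and a hash of the input. All names here (`log_prediction`, `MODEL_VERSION`) are hypothetical.

```python
# Illustrative prediction-traceability logger: one JSON line per inference.
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("inference-audit")

MODEL_VERSION = "fraud-model-1.4.2"  # hypothetical version identifier

def log_prediction(features: list[float], prediction: float) -> None:
    record = {
        "ts": time.time(),
        "model_version": MODEL_VERSION,
        # Hash the input rather than storing it raw, to limit exposure of
        # potentially sensitive feature values in the logs.
        "input_sha256": hashlib.sha256(json.dumps(features).encode()).hexdigest(),
        "prediction": prediction,
    }
    log.info(json.dumps(record))

log_prediction([0.1, 0.7, 1.3], 0.92)
```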
Secure Coding Best Practices in AI Projects
- Secure management of inference APIs.
- Securing dependencies and ML libraries.
- Strengthening authentication and access controls for models; an authenticated-endpoint sketch follows this module.
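As an illustration of the API-hardening items above, here is a minimal sketch of an authenticated, input-validated inference endpoint built with FastAPI and Pydantic v2. The header name, environment variable, size limits, and model stub are assumptions rather than a prescribed design; a real deployment would use a secret manager and per-client credentials.

```python
# Sketch of an authenticated, input-validated inference endpoint (illustrative).
import hmac
import os

from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()
API_KEY = os.environ.get("INFERENCE_API_KEY", "change-me")  # hypothetical setup

class PredictRequest(BaseModel):
    # Reject malformed or oversized payloads before they reach the model.
    features: list[float] = Field(min_length=1, max_length=64)

@app.post("/predict")
def predict(body: PredictRequest, x_api_key: str = Header(default="")):
    # Constant-time comparison avoids timing side channels on the key check.
    if not hmac.compare_digest(x_api_key, API_KEY):
        raise HTTPException(status_code=401, detail="invalid API key")
    # Stand-in for a real model call.
    score = sum(body.features) / len(body.features)
    return {"model_version": "demo-0.1", "score": score}
```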
Tools and Testing to Secure AI Systems
- Introduction to AI auditing tools such as IBM's Adversarial Robustness Toolbox (ART).
- Vulnerability scanners specific to ML environments.
- Best practices for adversarial testing; an ART-based sketch follows this module.
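Since the program names IBM's Adversarial Robustness Toolbox (ART), the sketch below shows a minimal adversarial test using ART's FastGradientMethod against a toy PyTorch classifier. The architecture, epsilon budget, and random inputs are illustrative assumptions; a real audit would run the attack against the production model and its evaluation set.

```python
# Minimal adversarial-testing sketch with ART and PyTorch (illustrative only).
import numpy as np
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Toy model: 20 input features, 2 classes.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(20,),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# Random stand-in test data; replace with the real evaluation set.
x_test = np.random.rand(8, 20).astype(np.float32)

# Fast Gradient Method: perturb inputs within an eps budget to flip predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_preds = classifier.predict(x_test).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print("predictions changed on", int((clean_preds != adv_preds).sum()), "of", len(x_test))
```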