References

A consolidated list of the standards, frameworks, research papers, case studies, and tools referenced throughout the three chapters of the AI Security course, grouped by type for easy reference.


Standards & Frameworks

  • OWASP Top 10 for LLM Applications (2025) – Industry-standard vulnerability taxonomy for LLM-specific security risks. Primary framework used in Chapter 2 for mapping the AI attack surface.
  • OWASP Top 10 for Agentic AI Applications (2026) – Companion framework mapping security risks specific to AI agents and multi-agent systems. Used in Chapter 2, Section 5 for agentic attack vectors.
  • MITRE ATLAS – Adversarial Threat Landscape for AI Systems, mapping adversarial tactics and techniques against machine learning systems with real-world case studies.
  • NIST AI Risk Management Framework (AI RMF) – Federal framework for managing AI system risks across the lifecycle, organized around Govern, Map, Measure, and Manage functions.
  • EU AI Act – European regulatory framework establishing risk-based requirements for AI systems, with obligations varying from minimal-risk transparency to prohibited practices.

Research & Case Studies

  • PoisonedRAG (Zou et al., 2024) – Research demonstrating knowledge-poisoning attacks against retrieval-augmented generation (RAG) systems, achieving over 90% attack success with as few as five poisoned documents injected into the knowledge base. Referenced in Chapter 2, Section 3.
  • ChatGPT Memory Exploitation (Johann Rehberger, 2024) – Demonstrated persistent data exfiltration through ChatGPT’s long-term memory feature via indirect prompt injection. Referenced in Chapter 2, Section 2.
  • GitHub Copilot CVE-2025-53773 – Prompt injection vulnerability in AI coding assistants triggered by crafted repository content, demonstrating supply chain risks in AI-assisted development. Referenced in Chapter 2, Section 2.
  • n8n CVE-2025-68613 – Server-side request forgery (SSRF) vulnerability in the n8n workflow automation platform, illustrating infrastructure-level risks in AI orchestration tools. Referenced in Chapter 2, Section 4.

Tools & Platforms

  • n8n – Open-source workflow automation platform used for hands-on lab exercises throughout the course, providing a visual interface for building AI-powered workflows.
  • Trend Vision One – Unified cybersecurity platform integrating the Security for AI Blueprint defense layers into a single operational console.
  • OpenAI API – API platform for accessing GPT models, used in course lab exercises for prompt engineering and inference techniques.