<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI Fundamentals: From Understanding to Implementation :: Introduction to AI Security</title>
    <link>https://example.org/index.html</link>
    <description>A Comprehensive Course for Technical Professionals. AI is transforming how we build and deploy software – but with that transformation comes a new attack surface. This hands-on course equips technical professionals with the knowledge to understand AI systems, recognize their vulnerabilities, and defend them in production. From foundational concepts through real-world attacks to layered defense architectures, you’ll build the fluency needed to work with AI securely and effectively.</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://example.org/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Chapter 1: Introduction to AI and LLMs</title>
      <link>https://example.org/chapter1/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/index.html</guid>
      <description>Is this Chapter for You? One of the core needs for technical professionals is to keep up with emerging technologies that are transforming how we build and deploy software. AI and LLMs represent a fundamental shift in what’s possible with code, but they come with their own concepts, terminology, and best practices – it can be overwhelming to know where to start!</description>
    </item>
    <item>
      <title>Chapter 2: Vulnerabilities and Attacks on LLMs</title>
      <link>https://example.org/chapter2/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter2/index.html</guid>
      <description>Is this Chapter for You? Every capability you explored in Chapter 1 – from prompt engineering to RAG pipelines to agentic AI workflows – has a corresponding attack surface. Understanding those attack surfaces isn’t optional for technical professionals building with or deploying AI systems. It’s how you protect your users, your data, and your organization.&#xA;This chapter is designed for technical professionals who need to understand AI attack vectors well enough to explain them, demonstrate them, and ultimately defend against them.</description>
    </item>
    <item>
      <title>Chapter 3: Protecting LLMs from Attacks</title>
      <link>https://example.org/chapter3/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/index.html</guid>
      <description>Is this Chapter for You? Every attack you studied in Chapter 2 – from prompt injection to data poisoning to agentic exploitation – has a corresponding defense. Knowing the attacks is essential, but it’s only half the picture. Organizations need professionals who can translate attack knowledge into security architectures, policies, and operational controls that actually stop threats in production.</description>
    </item>
    <item>
      <title>Course Resources</title>
      <link>https://example.org/resources/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/resources/index.html</guid>
      <description>Quick reference materials for the AI Security course. Use these resources to look up key terms, find cited sources, and explore the standards and tools referenced throughout all three chapters.&#xA;Glossary – Key terms and definitions from all three chapters, organized A-Z with section back-links&#xA;References – Standards, frameworks, whitepapers, research papers, and tools cited in the course</description>
    </item>
  </channel>
</rss>