<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Chapter 3: Protecting LLMs from Attacks :: Introduction to AI Security</title>
    <link>https://example.org/chapter3/index.html</link>
    <description>Is this Chapter for You? Every attack you studied in Chapter 2 – from prompt injection to data poisoning to agentic exploitation – has a corresponding defense. Knowing the attacks is essential, but it’s only half the picture. Organizations need professionals who can translate attack knowledge into security architectures, policies, and operational controls that actually stop threats in production.</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://example.org/chapter3/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>1. Integrating Security into AI Architectures</title>
      <link>https://example.org/chapter3/s1/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s1/index.html</guid>
      <description>Introduction Every attack you studied in Chapter 2 shares a common thread: they exploit gaps where security wasn’t designed in from the start. Prompt injection succeeds when input validation is an afterthought. Data poisoning persists when training pipelines lack integrity checks. Model theft goes undetected when monitoring is bolted on rather than built in. The attacks are varied, but the root cause is the same – security was treated as a feature to add later, not a principle to build around.</description>
    </item>
    <item>
      <title>2. Security for AI Blueprint Overview</title>
      <link>https://example.org/chapter3/s2/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s2/index.html</guid>
      <description>Introduction In the previous section, you learned that AI security requires an integrated approach – DevSecOps adapted for AI, threat modeling for LLM deployments, and a shared responsibility model between providers and enterprises. But frameworks and principles only take you so far. What you need is a structured architecture that maps every defense control to a specific protection domain, so that when someone asks “are we protected against data poisoning?” you can point to a layer, name the controls, and show how they connect to the rest of the stack.</description>
    </item>
    <item>
      <title>3. Layer 1: Secure Your Data</title>
      <link>https://example.org/chapter3/s3/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s3/index.html</guid>
      <description>Introduction Data is the foundation of every AI system. The training data that shapes a model’s knowledge, the RAG corpora that ground its responses in facts, the vector stores that enable semantic retrieval, the conversation logs that carry user interactions – every component of an AI deployment depends on data, and compromised data means compromised everything.</description>
    </item>
    <item>
      <title>4. Layer 2: Secure Your AI Models</title>
      <link>https://example.org/chapter3/s4/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s4/index.html</guid>
      <description>Introduction AI models are both an organization’s most valuable intellectual property and one of its most vulnerable attack surfaces. A fine-tuned model encodes months of training, proprietary data, and domain expertise into a single artifact – an artifact that can be poisoned, stolen, or weaponized if left unprotected. In Chapter 2, you saw how supply chain attacks inject malicious code through model files, how serialization exploits turn model loading into remote code execution, and how agentic supply chain vulnerabilities compromise the tools that deliver models to production.</description>
    </item>
    <item>
      <title>5. Layer 3: Secure Your AI Infrastructure</title>
      <link>https://example.org/chapter3/s5/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s5/index.html</guid>
      <description>Introduction AI infrastructure is not just “servers that run models.” It is a uniquely complex ecosystem of GPU clusters, model serving endpoints, vector databases, orchestration layers, API gateways, and monitoring systems – each with its own attack surface, and all interconnected in ways that traditional infrastructure security was never designed to handle.&#xA;In Chapter 2, you saw how infrastructure attacks exploit container escapes, GPU memory vulnerabilities, and API security gaps. You saw how tool misuse turns legitimate AI orchestration capabilities into attack vectors. And you saw how identity and privilege abuse chains together legitimate permissions into full system compromise. Layer 3 of the Security for AI Blueprint addresses these threats through a new discipline: AI Security Posture Management.</description>
    </item>
    <item>
      <title>6. Layer 4: Secure Your Users</title>
      <link>https://example.org/chapter3/s6/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s6/index.html</guid>
      <description>Introduction The human element in AI security presents a dual challenge. On one side, users are targets – AI-powered deepfakes, AI-generated phishing, and synthetic voice cloning make social engineering attacks more convincing than ever. On the other side, users are a source of risk – employees adopting unauthorized AI tools, pasting proprietary data into public chatbots, and trusting AI outputs without verification. Layer 4 of the Security for AI Blueprint addresses both dimensions: protecting users FROM AI-powered attacks and protecting the organization FROM users’ uncontrolled AI adoption.</description>
    </item>
    <item>
      <title>7. Layer 5: Secure Access to AI Services</title>
      <link>https://example.org/chapter3/s7/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s7/index.html</guid>
      <description>Introduction This is where security meets the AI interface directly. Every prompt typed by a user, every response generated by a model, every API call to an AI service, every tool invocation by an agent – all of it passes through the access layer. Layer 5 of the Security for AI Blueprint is the gatekeeper that inspects, filters, authenticates, and controls every interaction between users (or agents) and AI services.</description>
    </item>
    <item>
      <title>8. Layer 6: Defend Against Zero-Day Exploits</title>
      <link>https://example.org/chapter3/s8/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s8/index.html</guid>
      <description>Introduction Zero-day attacks exploit vulnerabilities that no one knows about yet. There is no patch because the vulnerability hasn’t been disclosed. There is no signature because no security tool has seen the attack before. There is no rule because no one has written one. For AI systems, zero-day threats include novel prompt injection techniques that bypass existing filters, undiscovered vulnerabilities in model serving frameworks, unprecedented attack patterns against agentic tools, and exploitation chains that combine AI-specific weaknesses in ways no one has tested.</description>
    </item>
    <item>
      <title>9. The AI Application Security Continuous Loop</title>
      <link>https://example.org/chapter3/s9/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s9/index.html</guid>
      <description>Introduction The Blueprint layers define what to protect. Layer by layer, you&rsquo;ve built a defense architecture that covers data, models, infrastructure, users, and access services, and that defends against zero-day threats. But security is not a one-time deployment &ndash; it&rsquo;s a continuous cycle. Vulnerabilities evolve, attack techniques advance, and the AI systems you deploy today will face threats that didn&rsquo;t exist when you first assessed them.</description>
    </item>
    <item>
      <title>10. The LEARN Architecture</title>
      <link>https://example.org/chapter3/s10/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s10/index.html</guid>
      <description>Introduction The Blueprint tells security teams what infrastructure to deploy. Layer by layer, it maps the controls that protect data, models, infrastructure, users, and access services, and that defend against zero-day threats. But the Blueprint is infrastructure-centric &ndash; it answers &ldquo;what should the platform do?&rdquo; It doesn&rsquo;t directly answer &ldquo;how should I write my AI application to be secure?&rdquo;&#xA;Developers building AI applications need their own framework. They need to know how to validate inputs, how to constrain what agents can do, and how to prevent data leakage from the code they write. The LEARN mnemonic organizes five key application-level defense practices that complement the infrastructure-focused Blueprint. Where the Blueprint protects the stack from the outside, LEARN hardens the application from the inside.</description>
    </item>
    <item>
      <title>11. Building an AI Security Culture</title>
      <link>https://example.org/chapter3/s11/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter3/s11/index.html</guid>
      <description>Introduction Technology alone cannot secure AI systems. The most sophisticated Blueprint layer is useless if the organization’s culture doesn’t support security practices. The most carefully hardened system prompt fails if developers bypass it under deadline pressure. The most comprehensive AI Gateway is irrelevant if the security team doesn’t know how to respond when it detects an attack.</description>
    </item>
  </channel>
</rss>