2. Security for AI Blueprint Overview

Introduction

In the previous section, you learned that AI security requires an integrated approach – DevSecOps adapted for AI, threat modeling for LLM deployments, and a shared responsibility model between providers and enterprises. But frameworks and principles only take you so far. What you need is a structured architecture that maps every defense control to a specific protection domain, so that when someone asks “are we protected against data poisoning?” you can point to a layer, name the controls, and show how they connect to the rest of the stack.

That architecture is the Security for AI Blueprint – a 6-layer defense framework that organizes AI security controls from data at the foundation through zero-day defense at the top. Think of it as the GPS for your defense journey: it shows you where you are, what’s protecting each domain, and how the layers work together to provide defense in depth.

What will I get out of this?

By the end of this section, you will be able to:

  1. Describe all six layers of the Security for AI Blueprint and explain what each layer protects.
  2. Apply defense-in-depth principles to AI systems, explaining why multiple overlapping layers are essential.
  3. Explain how Blueprint layers interrelate, showing that data security informs access controls and infrastructure supports all other layers.
  4. Describe how Trend Vision One provides unified visibility across all six layers through a single platform.

The Security for AI Blueprint

The Security for AI Blueprint is a defense-in-depth framework that organizes AI security into six layers. Each layer addresses a distinct protection domain, and together they provide full-stack coverage from the data that feeds AI systems through the runtime environment that defends against novel threats.

The layers are designed to be complementary – no single layer is sufficient on its own, and each layer strengthens the others. An attacker who bypasses one layer still faces multiple additional defenses before reaching their objective.

The 6-Layer Architecture

graph TB
    subgraph Blueprint["Security for AI Blueprint"]
        L6["<b>Layer 6</b><br/>Defend Against Zero-Day Exploits<br/><small>Network IDS/IPS, Virtual Patching,<br/>Behavioral Anomaly Detection</small>"]
        L5["<b>Layer 5</b><br/>Secure Access to AI Services<br/><small>AI Gateway, ZTSA, Prompt/Response<br/>Filtering, Rate Limiting</small>"]
        L4["<b>Layer 4</b><br/>Secure Your Users<br/><small>Deepfake Detection, Endpoint Security,<br/>Shadow AI Governance</small>"]
        L3["<b>Layer 3</b><br/>Secure Your AI Infrastructure<br/><small>AI-SPM, Posture Management,<br/>Misconfiguration Detection</small>"]
        L2["<b>Layer 2</b><br/>Secure Your AI Models<br/><small>Container Security, Vulnerability Scanning,<br/>Model Integrity Verification</small>"]
        L1["<b>Layer 1</b><br/>Secure Your Data<br/><small>DSPM, Data Classification,<br/>Vector Store Security</small>"]
    end

    L6 --- L5 --- L4 --- L3 --- L2 --- L1

    TV1["<b>Trend Vision One</b><br/>(Unified Platform)<br/><small>Single-pane visibility across<br/>all six layers</small>"]

    TV1 -.->|"Monitors"| L6
    TV1 -.->|"Enforces"| L5
    TV1 -.->|"Protects"| L4
    TV1 -.->|"Manages"| L3
    TV1 -.->|"Scans"| L2
    TV1 -.->|"Classifies"| L1

    style L6 fill:#2d5016,color:#fff
    style L5 fill:#2d5016,color:#fff
    style L4 fill:#2d5016,color:#fff
    style L3 fill:#2d5016,color:#fff
    style L2 fill:#2d5016,color:#fff
    style L1 fill:#2d5016,color:#fff
    style TV1 fill:#1C90F3,color:#fff

The layers stack logically from bottom to top: data is the foundation everything depends on, models sit on top of data, infrastructure hosts the models, users interact with the infrastructure, access services control how users and systems reach AI capabilities, and zero-day defense provides the last line of protection against novel threats.


Layer-by-Layer Preview

Each of the next six sections (Sections 3-8) covers one Blueprint layer in depth. The cards below summarize what each layer protects, why it matters, and the key controls it provides.

  • Layer 1: Secure Your Data

    Protects: Training data, fine-tuning datasets, RAG corpora, vector stores, conversation logs, model embeddings, and data in transit.

    Data is the foundation of every AI system. Poisoned training data teaches the model the wrong things. Manipulated RAG corpora feed it false information. Leaked conversation logs compromise user privacy.

    Key controls: DSPM, data classification by sensitivity tier, vector store access controls, data lineage tracking, encryption at rest and in transit.

  • Layer 2: Secure Your AI Models

    Protects: Model weights, model containers, fine-tuning pipelines, LoRA adapters, model artifacts, and model distribution channels.

    Models are the intellectual property of AI deployments. A compromised model can contain backdoors, leak training data, or execute malicious code. Supply chain attacks on model distribution affect every deployment downstream.

    Key controls: Container security, vulnerability scanning, model integrity verification, artifact signing, serialization safety checks, secure model registries.

  • Layer 3: Secure Your AI Infrastructure

    Protects: Cloud AI resources, GPU clusters, model serving infrastructure, orchestration platforms, API backends, and configuration management.

    Infrastructure misconfigurations are one of the most common entry points for attackers. An over-permissioned GPU cluster, an unmonitored API endpoint, or a misconfigured model serving container can expose the entire AI stack.

    Key controls: AI-SPM, misconfiguration detection, risk insights, identity management, infrastructure monitoring, posture dashboards.

  • Layer 4: Secure Your Users

    Protects: End users interacting with AI systems, employees using AI tools, stakeholders consuming AI-generated content, and the human-AI trust relationship.

    Users are both consumers and potential victims of AI systems. Deepfake content can manipulate decision-making. Shadow AI adoption can expose sensitive data to unvetted services. Over-trusting AI output leads to acting on misinformation.

    Key controls: Deepfake detection, endpoint security, shadow AI discovery and governance, user behavior analytics, AI content labeling, human-in-the-loop enforcement.

  • Layer 5: Secure Access to AI Services

    Protects: AI service endpoints, API gateways, prompt and response channels, authentication and authorization for AI features, and the user-to-AI communication path.

    This is where most runtime attacks are intercepted. Prompt injection, jailbreaking, system prompt leaking, and output exploitation all pass through the access layer. Effective access controls stop attacks before they reach the model.

    Key controls: AI Gateway, ZTSA, prompt filtering, response filtering, rate limiting, input validation, API key management, cost monitoring.

  • Layer 6: Defend Against Zero-Day Exploits

    Protects: The entire AI stack against novel, previously unknown attack techniques that bypass existing controls.

    AI security is an adversarial domain – attackers continuously develop new techniques. Zero-day defense ensures that even when specific controls are not yet updated, behavioral anomaly detection and threat intelligence catch novel threats.

    Key controls: Network IDS/IPS, virtual patching, behavioral anomaly detection, zero-day threat intelligence, runtime behavioral analysis, automated incident response triggers.


Defense in Depth: Why Multiple Layers Matter

Defense in depth is the principle that no single security control should be the only thing standing between an attacker and their objective. Each layer provides independent protection, so that even if one layer is bypassed, the attacker faces additional barriers.

How an Attack Must Penetrate Multiple Layers

graph LR
    ATK["Attacker<br/><small>Prompt Injection</small>"]
    L6C["Layer 6<br/><small>Anomaly Detection<br/>flags unusual pattern</small>"]
    L5C["Layer 5<br/><small>Prompt Filter<br/>blocks injection payload</small>"]
    L3C["Layer 3<br/><small>AI-SPM<br/>monitors posture</small>"]
    L1C["Layer 1<br/><small>Data Classification<br/>prevents PII exposure</small>"]
    TARGET["Target:<br/>Sensitive Data"]

    ATK -->|"Attack"| L6C
    L6C -->|"If bypassed"| L5C
    L5C -->|"If bypassed"| L3C
    L3C -->|"If bypassed"| L1C
    L1C -->|"If bypassed"| TARGET

    BLOCK1["BLOCKED"]
    BLOCK2["BLOCKED"]
    BLOCK3["BLOCKED"]
    BLOCK4["BLOCKED"]

    L6C -.->|"Detected"| BLOCK1
    L5C -.->|"Filtered"| BLOCK2
    L3C -.->|"Alert"| BLOCK3
    L1C -.->|"Access Denied"| BLOCK4

    style ATK fill:#8b0000,color:#fff
    style TARGET fill:#cc7000,color:#fff
    style L6C fill:#2d5016,color:#fff
    style L5C fill:#2d5016,color:#fff
    style L3C fill:#2d5016,color:#fff
    style L1C fill:#2d5016,color:#fff
    style BLOCK1 fill:#2d5016,color:#fff
    style BLOCK2 fill:#2d5016,color:#fff
    style BLOCK3 fill:#2d5016,color:#fff
    style BLOCK4 fill:#2d5016,color:#fff

In a single-layer defense, bypassing the prompt filter means the attack succeeds. In a defense-in-depth architecture, the attacker must defeat anomaly detection, prompt filtering, posture monitoring, AND data access controls. Even a sophisticated attacker who bypasses one or two layers is still likely to be caught by the remaining defenses.

Defense Connection

Defense in depth is why the prompt injection techniques and jailbreaking methods covered in Chapter 2 are not automatic game-overs in a well-architected system. Layer 5’s prompt filtering is the first line of defense, but Layer 6’s anomaly detection, Layer 3’s posture monitoring, and Layer 1’s data classification provide backup layers that can catch what prompt filtering misses.


How the Layers Interrelate

The six Blueprint layers are not isolated silos. They share data, inform each other’s policies, and create feedback loops that strengthen the overall defense posture.

Relationship How It Works
Layer 1 (Data) informs Layer 5 (Access) Data classification determines what sensitivity levels require stricter prompt/response filtering. If RAG corpora contain PII, Layer 5 applies more aggressive output redaction.
Layer 3 (Infrastructure) supports all layers Infrastructure posture management (AI-SPM) monitors the health of every component – from vector stores (Layer 1) to API gateways (Layer 5) to IDS sensors (Layer 6).
Layer 5 (Access) feeds Layer 6 (Zero-Day) Blocked prompt injection attempts and filtered responses generate threat intelligence that Layer 6 uses for behavioral anomaly baselines.
Layer 2 (Models) depends on Layer 1 (Data) Model integrity starts with data integrity. A model trained on poisoned data (Layer 1 failure) cannot be “fixed” by model scanning alone (Layer 2).
Layer 4 (Users) connects to Layer 5 (Access) User identity and behavior analytics from Layer 4 inform access policies in Layer 5. Anomalous user behavior triggers stricter filtering.
Layer 6 (Zero-Day) protects all layers Behavioral anomaly detection and virtual patching can catch novel attacks targeting any layer – even before specific controls are updated.

The key insight: the Blueprint is a system, not a checklist. The layers work together, and the connections between them are as important as the controls within each layer.

Defense Connection

Layer 5’s prompt filtering directly addresses the prompt injection techniques covered in Chapter 2, while Layer 1’s data access controls defend against the data poisoning and RAG poisoning attacks from Chapter 2 Section 3. The interrelationship means that a prompt injection that attempts to extract poisoned RAG data must defeat both layers.


Trend Vision One: The Unified Platform

After understanding the 6-layer architecture, a natural question emerges: how do you manage six different layers without creating six different management consoles and six different alert streams?

Trend Vision One provides a unified platform that integrates all six Blueprint layers into a single-pane-of-glass view. Rather than managing data security, model scanning, infrastructure posture, endpoint protection, access controls, and threat detection as separate products, Vision One correlates signals across all layers to provide holistic visibility. Vision One’s centralized dashboard correlates alerts across all six layers, enabling security teams to trace an attack from initial access (Layer 5 prompt filter alert) through infrastructure impact (Layer 3 posture change) to data exposure risk (Layer 1 classification alert) – all in one investigation timeline. This unified approach reduces mean time to detection and eliminates the visibility gaps that arise when layers are managed independently.

Defense Connection

The Microsoft LLMjacking case from Chapter 2 illustrates why unified visibility matters. The attackers exploited stolen API credentials (a Layer 5 access control failure) to consume compute resources (a Layer 3 infrastructure concern) at the victim’s expense. With Vision One’s cross-layer correlation, the access anomaly and the infrastructure cost spike would appear in the same investigation timeline – enabling rapid detection rather than waiting for the monthly bill.

Key Takeaways
  • The Security for AI Blueprint organizes AI defense into six layers: Data, Models, Infrastructure, Users, Access, and Zero-Day – each addressing a distinct protection domain
  • Defense in depth ensures that no single layer failure leads to compromise, as attackers must defeat multiple independent defenses
  • The six layers interrelate through shared data, feedback loops, and cross-layer dependencies that strengthen the overall defense posture
  • Trend Vision One provides unified visibility across all six Blueprint layers through a single platform, enabling cross-layer threat correlation

Test Your Knowledge

Ready to test your understanding of the Security for AI Blueprint? Head to the quiz to check your knowledge.


Up next

Now that you understand the 6-layer framework and how the layers work together, it’s time to dive into each layer in detail. We start at the foundation: Layer 1 – Secure Your Data. You’ll learn about DSPM, data classification, vector store security, and how protecting data protects everything built on top of it.