<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Section 1 Quiz :: Introduction to AI Security</title>
    <link>https://example.org/chapter3/s1/activity/index.html</link>
    <description>Test Your Knowledge: Integrating Security into AI Architectures. Let’s see how much you’ve learned! This quiz tests your understanding of DevSecOps for AI, AI-adapted threat modeling with STRIDE, the shared responsibility model, and how Chapter 2’s attack framework maps to defense strategies.&#xA;--- shuffle_answers: true shuffle_questions: false --- ## An organization has budget to implement AI security in only ONE DevSecOps stage this quarter. Their AI chatbot is already in production with no security controls. Which stage should they prioritize FIRST to get the broadest risk reduction? &gt; Hint: Consider which stage addresses the most attack surface for the lowest effort when an AI system is already deployed, and think about dependency relationships between stages. - [ ] Plan -- threat modeling is the foundation, so it should always come first &gt; Threat modeling is valuable but produces documentation, not runtime protection. For an already-deployed system with zero controls, a planning exercise doesn&#39;t reduce active risk. The system is live and exposed now -- you need controls that intercept attacks in real time. - [ ] Train -- securing the training pipeline prevents future data poisoning &gt; Training pipeline security prevents future poisoning, but the model is already trained and deployed. Securing the training stage protects the next training cycle, not the currently running system. The immediate risk is at the runtime boundary. - [ ] Validate -- red-team testing will identify all the vulnerabilities &gt; Validation testing reveals vulnerabilities but doesn&#39;t stop attacks against the live system. Testing informs what to fix, but without runtime controls, every vulnerability found remains exploitable. Testing is most valuable after you have controls to validate. 
- [x] Monitor -- runtime protection (AI Guard filters, anomaly detection) provides immediate defense for the live system and generates threat intelligence that informs all other stages &gt; Correct! Monitor is prioritized because it provides the broadest immediate risk reduction for an already-deployed system. Runtime prompt/response filtering and anomaly detection actively intercept attacks happening right now -- unlike Plan (produces documents), Train (protects future cycles), or Validate (identifies but doesn&#39;t block). The judgment framework: (1) coverage breadth -- Monitor addresses injection, data leakage, cost abuse, and anomalous behavior simultaneously; (2) cost-effectiveness -- a single AI Guard deployment covers the entire live attack surface; (3) dependency value -- Monitor generates threat intelligence (blocked attacks, usage patterns) that makes future Plan, Train, and Validate stages more targeted and effective. ## A security architect is performing a threat model for an LLM deployment using STRIDE adapted for AI. Which of the following correctly maps a STRIDE category to an AI-specific threat? &gt; Hint: Review how traditional STRIDE categories translate to AI-specific attack surfaces. - [ ] Spoofing: An attacker poisons the training data to introduce biased outputs &gt; Training data poisoning is Tampering (modifying data), not Spoofing. Spoofing in AI contexts involves impersonation -- agent identity spoofing, deepfake credentials, or impersonating legitimate MCP servers. - [x] Elevation of Privilege: Prompt injection that escalates from text generation to tool execution in an agentic system &gt; Correct! In AI-adapted STRIDE, Elevation of Privilege includes prompt injection that escalates to tool use, agent privilege abuse, and excessive agency exploitation. When an attacker uses injection to make a chatbot execute tools it shouldn&#39;t, that&#39;s a privilege escalation from the text domain to the action domain. 
- [ ] Information Disclosure: An agent is manipulated into performing unauthorized actions &gt; Unauthorized actions map to Elevation of Privilege or Tampering, not Information Disclosure. AI-specific Information Disclosure covers system prompt leaking, training data extraction, and PII in model responses. - [ ] Denial of Service: An attacker extracts the system prompt from the model &gt; System prompt extraction is Information Disclosure, not Denial of Service. AI-specific Denial of Service covers unbounded consumption, context window stuffing, and GPU exhaustion. ## In the shared responsibility model for AI security, which responsibility falls primarily on the enterprise security team rather than the AI provider? &gt; Hint: Consider what AI providers typically handle versus what enterprises must secure themselves. - [ ] Base model safety training and alignment &gt; Base model safety training is the AI provider&#39;s responsibility. Providers like OpenAI and Anthropic handle alignment, safety training, and content policies for their base models. - [ ] API infrastructure availability and DDoS protection &gt; API infrastructure security is primarily the provider&#39;s responsibility. They maintain uptime, handle DDoS protection, and ensure API availability. - [x] Tool and agent security -- including tool allowlisting, permission scoping, execution monitoring, and agent guardrails &gt; Correct! Tool and agent security falls entirely on the enterprise. AI providers typically don&#39;t provide tool security because they don&#39;t control how organizations configure agentic capabilities. Tool allowlists, permission scoping, execution monitoring, and agent guardrails are all enterprise responsibilities -- and this is one of the most critical gaps in the shared responsibility model. - [ ] Encryption of data in transit for hosted API calls &gt; Encryption in transit for hosted APIs is the provider&#39;s responsibility. Providers implement TLS on their API endpoints. 
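The enterprise-side controls named above (tool allowlisting, permission scoping, execution monitoring) can be sketched in code. A minimal illustration, assuming hypothetical names (ALLOWLIST, check_tool_call) rather than any real agent framework API:

```python
# Hypothetical enterprise-side guardrail for agent tool calls: an explicit
# allowlist plus per-tool scope requirements, with every decision recorded
# for execution monitoring. Names and scopes here are illustrative only.

ALLOWLIST = {
    "search_tickets": {"scopes": {"tickets:read"}},
    "update_ticket": {"scopes": {"tickets:read", "tickets:write"}},
}

def check_tool_call(tool_name, agent_scopes, audit_log):
    """Allow a tool call only if the tool is allowlisted and the agent
    holds every scope the tool requires; log the decision either way."""
    policy = ALLOWLIST.get(tool_name)
    if policy is None:
        audit_log.append(("deny", tool_name, "not allowlisted"))
        return False
    missing = policy["scopes"] - set(agent_scopes)
    if missing:
        audit_log.append(("deny", tool_name, "missing scopes: " + ", ".join(sorted(missing))))
        return False
    audit_log.append(("allow", tool_name, "ok"))
    return True
```

In a real deployment the audit log would feed a SIEM so that denied calls become threat intelligence, but the core pattern -- default-deny, scoped permissions, logged decisions -- is the enterprise responsibility described here.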
## A security team wants to map Chapter 2&#39;s attack domains to the Blueprint defense layers. Which mapping correctly connects an attack domain to its primary defense approach? &gt; Hint: Think about which Blueprint layers address which types of threats. - [ ] Prompt-level attacks (Chapter 2 Section 2) are primarily addressed by Layer 1 (Data) through data classification &gt; Layer 1 protects data assets. Prompt-level attacks (injection, jailbreaking) are primarily addressed by Layer 5 (Access) through input/output filtering and Layer 6 (Zero-Day) through behavioral detection. - [x] Agentic attack vectors (Chapter 2 Section 5) are primarily addressed by Layer 3 (Infrastructure) and Layer 5 (Access) through tool controls, execution monitoring, and ZTSA &gt; Correct! Agentic attacks -- tool misuse, privilege escalation, agent hijacking -- require infrastructure-level controls (AI-SPM posture management, orchestration security) from Layer 3 and access-level controls (ZTSA, rate limiting) from Layer 5. These two layers together constrain what agents can do and how they interact with services. - [ ] Model and infrastructure attacks (Chapter 2 Section 4) are primarily addressed by Layer 4 (Users) through user training &gt; Model and infrastructure attacks target technical components, not users. They&#39;re addressed by Layer 2 (Models -- container security) and Layer 3 (Infrastructure -- posture management). - [ ] Output and trust exploitation (Chapter 2 Section 6) is primarily addressed by Layer 6 (Zero-Day) through virtual patching &gt; Output exploitation is primarily addressed by Layer 4 (Users -- protecting humans from misleading outputs) and Layer 5 (Access -- response filtering). Layer 6 handles unknown/novel threats, not established output exploitation patterns. ## When an organization moves from a cloud-hosted AI model to a self-hosted deployment, how does the shared responsibility model change? 
&gt; Hint: Consider what the cloud provider was handling that now falls on the organization. - [ ] The responsibility stays the same -- self-hosting doesn&#39;t change who secures what &gt; Self-hosting dramatically shifts the responsibility boundary. Functions previously handled by the AI provider now fall on the organization. - [ ] The AI provider becomes responsible for more, since they must support the self-hosted deployment &gt; Self-hosting reduces the provider&#39;s involvement rather than increasing it. The organization takes on infrastructure security that the provider previously handled. - [x] The enterprise takes on nearly all responsibilities -- including model infrastructure security, weight protection, and even base model safety validation &gt; Correct! Self-hosting shifts the responsibility boundary dramatically. The enterprise must now secure the model serving infrastructure, protect model weights from extraction, validate base model safety for their use case, manage API infrastructure, and handle all the responsibilities that a cloud provider previously covered. This is directly relevant to the infrastructure attacks from Chapter 2 Section 4. - [ ] The security responsibilities are eliminated because the data stays on-premises &gt; Data staying on-premises doesn&#39;t eliminate security responsibilities. It actually increases them because the organization must now secure infrastructure that was previously managed by the cloud provider. ## A development team is deploying a model that classifies customer support tickets. During threat modeling, they identify that an attacker could submit crafted tickets that cause the model to misclassify urgent issues as low-priority. Which STRIDE category best describes this threat? &gt; Hint: Think about what the attacker is changing and what effect it has on the system&#39;s outputs. - [ ] Spoofing -- the attacker is pretending to be a legitimate customer &gt; The attacker may be a legitimate customer. 
The threat isn&#39;t about identity -- it&#39;s about manipulating the data the system processes to corrupt its output. - [x] Tampering -- the attacker is modifying input data to corrupt the classification output &gt; Correct! Tampering in AI-adapted STRIDE covers modifying data that the AI system processes. By crafting tickets with specific characteristics that exploit classification boundaries, the attacker tampers with the model&#39;s input to produce incorrect outputs. This maps to both training-time tampering (data poisoning) and inference-time tampering (adversarial inputs). - [ ] Denial of Service -- misclassification prevents the system from functioning &gt; Misclassification degrades output quality but doesn&#39;t prevent the system from functioning. Denial of Service in AI contexts involves resource exhaustion, context window stuffing, or GPU monopolization. - [ ] Repudiation -- the attacker denies submitting the malicious tickets &gt; While the attacker might deny their actions, the core threat is the manipulation of classification outputs, not the inability to attribute actions. Repudiation in AI covers audit gaps and missing tool call logs. ## The attack-to-defense mapping table shows that SLM threats from Chapter 2 Section 7 map to Layer 4 (Users) and Layer 6 (Zero-Day). Why do small language model threats specifically require user-layer defenses? &gt; Hint: Consider where SLMs typically run and who interacts with them directly. - [ ] SLMs are less secure than large models and need extra protection layers &gt; The size of the model doesn&#39;t inherently determine the number of protection layers needed. The question is about why the user layer specifically matters for SLMs. - [ ] SLMs can only be attacked through social engineering &gt; SLMs face the same technical attacks as large models (injection, jailbreaking). The user-layer relevance comes from deployment context, not from any limitation in attack methods. 
- [x] SLMs often run on endpoints and edge devices, making endpoint security and user behavior monitoring critical for governance &gt; Correct! SLMs are deployed on laptops, mobile devices, and edge hardware -- directly on endpoints where users interact with them. This makes Layer 4 controls (endpoint security, shadow AI governance, user behavior analytics) essential. An SLM running locally on an employee&#39;s laptop bypasses cloud-based security controls entirely, requiring endpoint-level protection and governance. - [ ] Large models don&#39;t need user-layer protection because they run in data centers &gt; All AI systems benefit from user-layer protection. The distinction is that SLMs have unique endpoint deployment characteristics that make Layer 4 especially critical.</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://example.org/chapter3/s1/activity/index.xml" rel="self" type="application/rss+xml" />
  </channel>
</rss>