<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Section 5 Quiz :: Introduction to AI Security</title>
    <link>https://example.org/chapter3/s5/activity/index.html</link>
    <description>Test Your Knowledge: Layer 3 - Secure Your AI Infrastructure

Let’s see how much you’ve learned! This quiz tests your understanding of AI-SPM, posture management for AI resources, risk prioritization, orchestration layer security, and IAM for AI service accounts.

---
shuffle_answers: true
shuffle_questions: false
---

## A traditional CSPM tool reports that an organization&#39;s GPU cluster is compliant with all cloud security benchmarks. The security team considers the AI infrastructure secured. What critical gap does this assessment miss?

&gt; Hint: Think about what traditional CSPM tools understand about AI workloads versus what AI-SPM adds.

- [ ] CSPM cannot scan GPU hardware for vulnerabilities
  &gt; While hardware-level scanning is a concern, the primary gap is what CSPM understands about the AI workloads running on the GPU cluster.
- [x] CSPM sees the GPU instance as a virtual machine but doesn&#39;t understand the AI context -- it can&#39;t assess whether model serving endpoints are authenticated, vector databases have access controls, or shadow AI deployments exist
  &gt; Correct! This is the fundamental difference between CSPM and AI-SPM described in the section. Traditional CSPM checks cloud configuration against generic benchmarks (security groups, encryption settings, network rules). AI-SPM understands AI-specific context: Is the model endpoint authenticated? Does the vector database have RBAC? Are there unapproved MCP servers connected? Is the model serving endpoint rate-limited? CSPM gives a green light while critical AI-specific risks remain invisible.
- [ ] CSPM cannot operate on GPU-optimized instances
  &gt; CSPM tools work on any cloud instance regardless of hardware. The limitation is the missing AI-specific risk context, not hardware compatibility.
- [ ] CSPM is only designed for storage and network resources, not compute
  &gt; CSPM covers all cloud resources, including compute.
  The gap is that CSPM evaluates compute against generic benchmarks, not AI-specific security baselines.

## An organization&#39;s AI-SPM continuously discovers new AI assets. A developer spins up a model endpoint for testing without notifying the security team. Within hours, AI-SPM detects the endpoint and flags it as an unassessed component. Which AI-SPM workflow phase identified this?

&gt; Hint: Review the four phases of the AI-SPM workflow.

- [x] Discover -- the continuous asset discovery phase identified the new model endpoint as a previously unknown AI resource
  &gt; Correct! The Discover phase continuously inventories all AI assets across the organization, including model endpoints, GPU clusters, vector databases, orchestration tools, and shadow AI services. When the developer&#39;s test endpoint appeared, the discovery phase detected it as a new, unassessed component and flagged it for the Assess phase to evaluate against security baselines.
- [ ] Assess -- the configuration assessment phase found that the endpoint was missing baselines
  &gt; The Assess phase evaluates known assets against security baselines. Before assessment can occur, the Discover phase must first identify the asset. Discovery comes before assessment in the workflow.
- [ ] Score -- the risk scoring phase calculated a high risk because the endpoint was new
  &gt; Risk scoring happens after discovery and assessment. The endpoint must first be discovered and its configuration assessed before a risk score can be calculated.
- [ ] Remediate -- the remediation phase flagged the endpoint for security review
  &gt; Remediation provides guidance on fixing issues found during assessment and scoring. The initial detection of the new endpoint is a discovery function.

## The risk prioritization framework classifies AI systems into CRITICAL, HIGH, MEDIUM, and LOW risk levels. A customer-facing AI chatbot with database access and no rate limiting would be classified as CRITICAL. What three criteria make it CRITICAL?

&gt; Hint: Review the prioritization framework&#39;s CRITICAL criteria.

- [ ] It handles customer data, uses GPT-4, and processes more than 1,000 requests per day
  &gt; The model type and request volume aren&#39;t criteria in the prioritization framework. Risk is based on exposure, sensitivity, and capability factors.
- [x] It is internet-facing, handles PII (customer data), and has tool access (database) -- all three CRITICAL-level factors are present simultaneously
  &gt; Correct! The CRITICAL risk level requires: internet-facing exposure + sensitive data handling + tool access. A customer-facing chatbot is internet-facing (accessible from outside the organization). It handles PII through customer interactions. And it has database access (tool access that enables real-world actions). The absence of rate limiting compounds the risk, but the classification is driven by these three structural factors requiring immediate response within hours.
- [ ] It has no rate limiting, runs on expensive infrastructure, and serves external users
  &gt; Missing rate limiting is a vulnerability, and external users increase exposure, but the prioritization framework specifically evaluates internet-facing status, data sensitivity, and tool access as the criteria for CRITICAL classification.
- [ ] It uses a cloud provider&#39;s managed AI service without a VPN
  &gt; Cloud deployment without VPN increases exposure but doesn&#39;t automatically make a system CRITICAL. The framework evaluates the combination of exposure, data sensitivity, and tool access.

## A developer installs an MCP server from npm to give their AI agent file system access. The section identifies this as an orchestration layer security concern. What is the correct sequence of controls that should be applied?

&gt; Hint: Think about the orchestration security controls described in the section and the order in which they should be applied.
- [ ] Install the MCP server, monitor its behavior, then decide whether to keep it
  &gt; Monitoring after installation means the potentially malicious server has already had access to the environment. Verification must happen before connection.
- [ ] Grant the MCP server full access initially, then restrict permissions based on observed behavior
  &gt; Starting with full access violates least privilege. If the server is malicious, it can exploit broad access immediately. Permissions should be restricted from the start.
- [x] Verify the MCP server&#39;s source and inspect its code, add it to the tool allowlist if approved, configure it with minimal permissions in a sandboxed environment, and enable execution monitoring for all tool calls
  &gt; Correct! The section specifies four orchestration security controls in a logical sequence: (1) MCP server verification before connection, (2) tool allowlisting for explicit approval, (3) sandboxing with restricted permissions, and (4) execution monitoring for all tool calls. This sequence ensures the Cursor MCP exploitation scenario -- where a malicious MCP server executed code with the developer&#39;s full permissions -- cannot occur.
- [ ] Block all MCP servers because they represent an unacceptable security risk
  &gt; Blanket blocking is impractical. MCP servers provide valuable functionality. The goal is secure integration through verification, allowlisting, sandboxing, and monitoring -- not prohibition.

## An AI agent has been given a single service account with broad permissions to &#34;make development easier.&#34; The service account can access the file system, query databases, call external APIs, and manage cloud resources. Which IAM control from the section would most directly reduce the blast radius if this agent is compromised?

&gt; Hint: Think about the IAM controls table and which one addresses the core problem of a single overprivileged account.
- [ ] Credential rotation -- rotating the service account key reduces the window of opportunity
  &gt; Rotation limits how long a stolen credential is valid, but if the single broad credential is used during its valid window, the attacker still has full access to everything. Rotation doesn&#39;t reduce the blast radius.
- [x] Service account per function -- separate identities for model inference, database access, tool execution, and cloud management, each with scoped permissions
  &gt; Correct! The section specifies &#34;one service account for model inference, another for database access, another for tool execution.&#34; If the agent is compromised through one function (e.g., tool execution), the attacker only gains access to that function&#39;s scoped permissions. They can&#39;t pivot to database admin access or cloud resource management because those require different credentials that the compromised function never had.
- [ ] Audit trail -- logging all credential usage would detect the compromise
  &gt; Audit trails help with detection and forensics but don&#39;t reduce the blast radius of a compromise. By the time the audit trail flags the anomaly, the attacker may have already exploited the broad permissions.
- [ ] Just-in-time access -- granting permissions only when needed
  &gt; JIT access reduces the attack surface window but doesn&#39;t solve the fundamental problem of a single account with broad permissions. If the agent needs multiple capabilities simultaneously, JIT still grants them through the same overprivileged account.

## The Cursor MCP exploitation breach narrative describes how a malicious MCP server injected hidden instructions through tool responses. Which Layer 3 control would have detected the malicious MCP server before it was connected to the agent?

&gt; Hint: Think about which AI-SPM capability specifically addresses new, unvetted components.
- [x] AI-SPM&#39;s continuous discovery would have identified the unverified MCP server as a new, unassessed component in the AI infrastructure, flagging it as an untrusted tool provider requiring security review
  &gt; Correct! AI-SPM&#39;s discovery phase continuously identifies all AI-related resources, including newly connected tool providers. When the MCP server from npm appeared in the environment, AI-SPM would flag it as an unassessed component with no security baseline. This triggers a security review before the server can be used in production -- preventing the malicious instructions from ever reaching the agent.
- [ ] GPU cluster network segmentation would have isolated the MCP server
  &gt; Network segmentation for GPU clusters addresses compute infrastructure isolation, not MCP server vetting. The MCP server communicates with the agent framework, not directly with GPU clusters.
- [ ] Model serving endpoint hardening would have blocked the tool responses
  &gt; Model serving endpoint hardening addresses the model&#39;s API security, not the agent&#39;s tool integrations. The MCP server communicates through the orchestration layer, not through the model serving endpoint.
- [ ] Cost monitoring would have detected the unusual resource consumption
  &gt; Cost monitoring detects financial anomalies, but the Cursor MCP attack didn&#39;t necessarily involve unusual resource consumption -- it involved hidden instructions in tool responses.

## The AI-SPM vs Traditional CSPM comparison table highlights five dimensions of difference. Which dimension represents the biggest AI-specific gap in traditional CSPM coverage?

&gt; Hint: Think about what types of assets are completely invisible to traditional CSPM tools.

- [ ] Configuration baselines -- CSPM uses CIS benchmarks while AI-SPM uses AI-specific baselines
  &gt; While different baselines are important, CSPM can at least see the resources and apply some configuration checks. The bigger gap is what CSPM can&#39;t see at all.
- [x] Discovery scope -- CSPM only sees cloud resources in managed accounts, while AI-SPM also discovers shadow AI services, third-party AI APIs, and MCP servers that are completely invisible to traditional CSPM
  &gt; Correct! The discovery scope dimension represents the largest gap. Traditional CSPM monitors resources within managed cloud accounts. But AI deployments include shadow AI services (employees using unauthorized AI tools), third-party AI API integrations, MCP servers, and agent frameworks that exist outside the CSPM&#39;s visibility scope. An organization could have perfect CSPM scores while having dozens of unmonitored AI assets creating risk.
- [ ] Risk context -- CSPM considers data sensitivity while AI-SPM adds model capabilities
  &gt; Both CSPM and AI-SPM consider data sensitivity. AI-SPM adds agent autonomy and tool access context, but the biggest gap is in discovery, not risk context refinement.
- [ ] Drift detection -- CSPM detects cloud configuration changes while AI-SPM detects model deployments
  &gt; Drift detection is important but builds on discovery. If CSPM can&#39;t discover AI assets in the first place, it can&#39;t detect drift in their configurations.</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://example.org/chapter3/s5/activity/index.xml" rel="self" type="application/rss+xml" />
  </channel>
</rss>