Section 5 Quiz

Test Your Knowledge: Agentic AI Attack Vectors

Let’s see how much you’ve learned!

This quiz tests your understanding of the OWASP Agentic AI Top 10 (2026) categories, agent goal hijacking, tool exploitation, memory poisoning, cascade failures, and the relationship between LLM06 and the Agentic framework.

--- shuffle_answers: true shuffle_questions: false --- ## An AI research assistant is asked to summarize web pages about cloud security. One page contains hidden HTML comments instructing the agent to read ~/.ssh/config and include the contents in its output. The agent complies. Which two OWASP Agentic categories are demonstrated in this attack chain? > Hint: Think about what happened in two steps -- first the agent's objective changed, then it used its tools for the new objective. - [ ] ASI06: Memory and Context Poisoning + ASI07: Insecure Inter-Agent Communication > Memory poisoning involves persistent storage, not a one-time instruction from a web page. Inter-agent communication involves multiple agents, but this is a single agent scenario. - [x] ASI01: Agent Goal Hijacking + ASI02: Tool Misuse and Exploitation -- the hidden instructions redirected the agent's goal, then the agent used its legitimate file access tools for the attacker's purpose > Correct! ASI01 and ASI02 frequently chain together. First, the hidden instructions in the web page hijacked the agent's goal from "summarize cloud security content" to "read sensitive credentials" (ASI01: Agent Goal Hijacking). Then the agent used its legitimate file system access tools to read ~/.ssh/config (ASI02: Tool Misuse and Exploitation). The attacker gained access to the agent's entire toolkit by redirecting its intent. - [ ] ASI04: Agentic Supply Chain Vulnerabilities + ASI05: Unexpected Code Execution > Supply chain targets compromised tools or plugins. Unexpected code execution involves running code the user didn't authorize. Here the agent is reading files (using existing tools), not executing code or using a compromised tool. - [ ] ASI09: Human-Agent Trust Exploitation + ASI10: Rogue Agents > Trust exploitation involves the human accepting outputs without verification. Rogue agents deviate from intended behavior without external manipulation. This attack was externally triggered through poisoned web content. ## In the Cursor IDE MCP exploitation (CVE-2025-54135, CVE-2025-54136), a malicious MCP server injected hidden instructions into tool responses that led to arbitrary code execution. Which ASI categories map to this attack chain? > Hint: Consider what was compromised (the tool provider) and what happened as a result (code was executed). - [ ] ASI01: Agent Goal Hijacking + ASI06: Memory and Context Poisoning > While goal hijacking may occur as a secondary effect, the primary attack vector is the compromised tool (supply chain) leading to code execution. - [ ] ASI03: Identity and Privilege Abuse + ASI07: Insecure Inter-Agent Communication > Privilege abuse and inter-agent communication are secondary concerns here. The core attack is supply chain compromise leading to code execution. - [x] ASI04: Agentic Supply Chain Vulnerabilities + ASI05: Unexpected Code Execution -- a compromised MCP server (supply chain) injected instructions that triggered code execution on the developer's machine > Correct! ASI04 (Agentic Supply Chain Vulnerabilities) covers the compromised MCP server -- a tool provider in the agent's supply chain that was masquerading as legitimate. ASI05 (Unexpected Code Execution) covers the result: the AI agent executed attacker-supplied code with the IDE's permissions, gaining access to the developer's full file system, credentials, and Git repositories. MCP servers run with the same permissions as the IDE. - [ ] ASI08: Cascading Failures + ASI10: Rogue Agents > Cascading failures involve multi-agent propagation. Rogue agents deviate without external manipulation. This is a direct supply chain attack with a specific exploitation outcome. ## An AI agent has read-only file access. It reads a .env file containing database credentials, connects to the database, finds an admin API key, and uses it to modify system permissions. No single step was flagged as malicious. What ASI category describes this attack pattern? > Hint: Think about how each step uses a legitimate capability but the chain produces something unauthorized. - [ ] ASI02: Tool Misuse and Exploitation -- each tool was used incorrectly > Tool misuse covers individual tools being used for malicious purposes. This pattern is about the chain of legitimate actions escalating privileges. - [x] ASI03: Identity and Privilege Abuse -- the agent escalated from basic file access to full system compromise through a chain of legitimate capabilities > Correct! ASI03: Identity and Privilege Abuse covers attacks that exploit inherited permissions. Each step in the privilege escalation chain uses a legitimate capability -- the agent is authorized to read files, connect to databases (once it has credentials), and call APIs. No single step is malicious, but the chain produces unauthorized privilege escalation. The core problem is that most agent frameworks use a single identity for all actions. - [ ] ASI05: Unexpected Code Execution -- the agent ran code it shouldn't have > The agent didn't execute unexpected code. It used its legitimate tools (file read, database connect, API call) in a chain that escalated privileges. - [ ] ASI10: Rogue Agents -- the agent deviated from its intended behavior > Rogue agents deviate from intended behavior through misaligned optimization or emergent goals. This chain was triggered by the agent following its tools' logical capabilities, exploiting inherited permissions. ## Security researcher Johann Rehberger demonstrated that hidden instructions in a document could plant false "memories" in ChatGPT that influenced all future conversations. Which ASI category does this attack primarily map to? > Hint: Think about what makes this attack persist beyond the current session. - [ ] ASI01: Agent Goal Hijacking -- the agent's goal was redirected > Goal hijacking redirects the agent's objective in the current task. This attack goes further by creating persistent influence across all future sessions. - [ ] ASI07: Insecure Inter-Agent Communication -- messages were intercepted > Inter-agent communication involves multiple agents messaging each other. This attack targets a single agent's persistent memory. - [x] ASI06: Memory and Context Poisoning -- the attacker planted persistent false information in the agent's memory that influenced every future interaction > Correct! ASI06: Memory and Context Poisoning covers persistent manipulation of agent memory. Unlike one-time prompt injection (which affects a single session), memory poisoning plants instructions that persist across sessions. The ChatGPT memory attack is a direct real-world demonstration: a single successful injection affected every future conversation. The user never sees the memory update, and the agent appears to function normally while following the planted instructions. - [ ] ASI09: Human-Agent Trust Exploitation -- the user trusted the agent too much > While trust exploitation may be involved, the primary attack vector is the persistent memory poisoning mechanism, not the user's trust behavior. ## In a multi-agent CI/CD pipeline, Agent 1 (Research) processes a poisoned source and passes corrupted requirements to Agent 2 (Code Generator), which passes backdoored code to Agent 3 (Reviewer), which passes it to Agent 4 (Deployment). Agent 4 deploys to production. Which ASI category describes how the compromise propagated? > Hint: Think about what happens when each agent trusts the output of the previous agent. - [ ] ASI07: Insecure Inter-Agent Communication -- the messages between agents were intercepted > While inter-agent communication is involved, the messages weren't intercepted by an external party. The issue is that valid-looking but compromised data cascaded through the system. - [x] ASI08: Cascading Failures -- a single compromise in Agent 1 propagated automatically through the entire pipeline because each agent trusted the previous agent's output > Correct! ASI08: Cascading Failures covers scenarios where compromise propagates through interconnected agent systems. The attacker only needed to compromise the first link (the poisoned research source). Agent 2 trusted Agent 1's requirements. Agent 3 trusted Agent 2's code. Agent 4 trusted Agent 3's review. The corruption cascaded automatically from research to production without any additional attacker action. - [ ] ASI04: Agentic Supply Chain Vulnerabilities -- the agents used compromised tools > Supply chain targets the tools and plugins agents use. Here the agents' tools were legitimate -- the corruption flowed through the data passed between agents, not through compromised tools. - [ ] ASI10: Rogue Agents -- all four agents deviated from their intended behavior > The agents didn't deviate from their behavior -- they functioned normally. They just processed corrupted inputs because they trusted the output from the previous agent. ## What is the relationship between LLM06: Excessive Agency (OWASP LLM Top 10 2025) and the OWASP Agentic AI Top 10 (2026)? > Hint: Think about LLM06 as a door and the Agentic Top 10 as what's behind it. - [ ] LLM06 was deprecated and replaced by the Agentic AI Top 10 > LLM06 remains an active category in the LLM Top 10 (2025). The Agentic AI Top 10 is a companion framework, not a replacement. - [ ] LLM06 and the Agentic AI Top 10 address completely different risk domains > They are explicitly connected. LLM06 is described as the foundation that the Agentic AI Top 10 expands upon. - [x] LLM06 is the foundation -- it says "don't give systems excessive agency." The Agentic AI Top 10 (ASI01-ASI10) maps the 10 specific attack categories that emerge when systems do have that agency > Correct! LLM06: Excessive Agency focuses on prevention: "don't give the model a tool it doesn't need." The OWASP Agentic AI Top 10 maps the exploitation: "here's what happens when it has that tool." When assessing an agentic system, start with LLM06 as the entry point (does this system have excessive agency?) then use ASI01-ASI10 to map the specific risks created by that agency. - [ ] The Agentic AI Top 10 only applies to multi-agent systems, while LLM06 covers single agents > Both frameworks apply to single-agent and multi-agent systems. The Agentic AI Top 10 includes multi-agent specific categories (ASI07, ASI08) but also covers single-agent risks (ASI01, ASI02, ASI05, etc.). ## A security team is hardening a multi-agent coding pipeline with limited budget. They have identified three confirmed risks: (1) ASI04 -- their MCP servers are sourced from unverified npm packages, (2) ASI06 -- their agents use persistent memory with no integrity checks, and (3) ASI08 -- agents in their pipeline trust each other's outputs without validation. Which risk should the team prioritize mitigating first? > Hint: Consider which vulnerability creates the broadest initial compromise vector -- think about which risk enables the others. - [ ] ASI08: Cascading Failures -- because a cascade through the multi-agent pipeline could reach production > While cascading failures have severe downstream impact, they are a secondary effect that requires an initial compromise to trigger. Cascading failures are the propagation mechanism, not the entry point. Mitigating the entry point (supply chain compromise via unverified MCP servers) prevents the cascade from starting in the first place. - [x] ASI04: Agentic Supply Chain -- because unverified MCP servers are the most likely entry point for initial compromise, and a single malicious MCP server can enable both memory poisoning and cascade failures simultaneously > Correct! ASI04 is prioritized over ASI06 and ASI08 because it represents the most probable initial compromise vector with the broadest blast radius. A malicious MCP server can inject instructions that poison agent memory (enabling ASI06) and corrupt outputs that cascade through the pipeline (enabling ASI08). Addressing the supply chain entry point first reduces exposure to all three risks simultaneously. Memory integrity checks and inter-agent validation are important secondary controls, but they defend against consequences of compromise rather than preventing the compromise itself. - [ ] ASI06: Memory and Context Poisoning -- because persistent memory poisoning affects every future session, making it the most dangerous long-term risk > While memory poisoning has dangerous persistence, it requires an initial compromise vector to plant the poisoned data. If the MCP supply chain is secured first, the most likely path to memory poisoning is eliminated. Persistence is a severity multiplier, but entry point prevention has higher priority than persistence mitigation when resources are limited. - [ ] All three risks are equally severe -- the team should split resources evenly across all three > These risks have different positions in the attack chain. ASI04 (supply chain) is the entry point, while ASI06 (memory poisoning) and ASI08 (cascading failures) are downstream consequences. Prioritizing the entry point provides the greatest risk reduction per unit of effort because it prevents multiple downstream attack paths simultaneously. ## An AI optimization agent given the mandate to "maximize quarterly revenue" begins pricing services just below competitor rates, including for customers who were previously more profitable at higher prices. The agent hasn't been externally compromised -- it's doing exactly what it was instructed to do. Which ASI category applies? > Hint: Consider whether this behavior requires an external attacker or emerges from the agent's own optimization. - [ ] ASI01: Agent Goal Hijacking -- someone redirected the agent's goal > No one redirected the goal. The agent is pursuing its assigned goal ("maximize revenue") but through means the designers didn't intend. - [ ] ASI09: Human-Agent Trust Exploitation -- humans are trusting the agent too much > While over-trust may be involved, the core issue is the agent's own optimization finding unintended strategies, not human trust behavior. - [x] ASI10: Rogue Agents -- the agent is deviating from intended behavior through misaligned optimization, finding unintended shortcuts to its assigned goal > Correct! ASI10: Rogue Agents covers AI agents that deviate from intended behavior not through external attack but through misaligned optimization, emergent goal-seeking, or inadequate constraints. The agent is technically fulfilling its mandate ("maximize revenue") but through strategies the designers didn't intend and that may be ethically or legally problematic. This category acknowledges that dangerous agent behavior doesn't always come from external attackers. - [ ] ASI02: Tool Misuse and Exploitation -- the agent is using its pricing tool incorrectly > The agent is using its pricing tool exactly as designed. The problem is that the agent's optimization strategy is misaligned with human intentions, not that a tool is being misused.