<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Section 11 Quiz :: Introduction to AI Security</title>
    <link>https://example.org/chapter3/s11/activity/index.html</link>
    <description>Test Your Knowledge: Building an AI Security Culture

Let’s see how much you’ve learned! This quiz tests your understanding of AI red-teaming methodology, AI-specific incident response, the NIST AI Risk Management Framework, EU AI Act risk classifications, and organizational practices for AI security.

---
shuffle_answers: true
shuffle_questions: false
---

## An AI red team discovers that a deployed chatbot can be jailbroken using a multi-turn escalation technique. The red team followed the Plan-Test-Document-Remediate-Retest methodology. During which phase should the specific jailbreak prompts and model responses be captured?

&gt; Hint: Think about what the development team needs in order to reproduce and fix the vulnerability.

- [ ] Plan -- the jailbreak technique should be described in the scope document
  &gt; The Plan phase defines scope, rules of engagement, and target systems. Specific jailbreak prompts and responses are captured during execution, not planning.
- [ ] Test -- the red team should focus on attacking, not documenting
  &gt; While testing is when the attack is executed, separating testing from documentation risks losing important details about reproduction steps and evidence.
- [x] Document -- all findings, including reproduction steps, specific prompts used, model responses received, severity assessment, and evidence, must be captured in detail
  &gt; Correct! The Document phase captures everything the development team needs to reproduce and fix the vulnerability: the specific prompts that achieved jailbreaking, the model&#39;s responses at each step of the escalation, a severity assessment, and evidence. Without detailed documentation, the remediation team cannot reproduce the issue, and the retest team cannot verify the fix. Documentation transforms a red-team discovery into an actionable finding.
- [ ] Retest -- documentation happens after the fix is verified
  &gt; Retest verifies that remediations work. Documentation of findings must happen before remediation begins -- the development team needs the documented findings to know what to fix.

## An AI system&#39;s behavioral anomaly detection alerts on a sudden change in model output quality. The incident response team suspects model compromise. What is the FIRST containment action they should take?

&gt; Hint: Think about what stops the potential damage while preserving evidence.

- [ ] Shut down the model endpoint entirely and begin forensic analysis
  &gt; Complete shutdown prevents damage but also disrupts service for all users. A more targeted containment preserves availability while stopping the potential compromise.
- [x] Immediately route traffic to a known-good model version while preserving the compromised model&#39;s artifacts, logs, and configuration as forensic evidence
  &gt; Correct! Model-compromise containment follows a parallel strategy: route production traffic to a known-good model version (maintaining service availability) while preserving everything about the compromised version (artifacts, logs, configuration) for forensic analysis. This stops potential damage from the compromised model while keeping the service running and maintaining evidence integrity.
- [ ] Re-scan the model with AI Scanner to confirm the compromise
  &gt; Re-scanning is important but should happen after initial containment. If the model is truly compromised, every response it generates during the scanning period could harm users.
- [ ] Notify all users that the AI system may have been compromised
  &gt; User notification may be necessary, but it&#39;s not the first containment action. First stop the potential damage (switch to a clean model), then investigate, then notify if user impact is confirmed.

## The NIST AI Risk Management Framework has four core functions: Govern, Map, Measure, and Manage. Which function most directly corresponds to AI Scanner&#39;s periodic assessment of model vulnerabilities?

&gt; Hint: Think about what each NIST function does and what Scanner assessments produce.

- [ ] Govern -- Scanner assessments are part of the AI governance structure
  &gt; Govern establishes organizational AI governance structures, policies, and accountability. Scanner assessments are operational activities, not governance structures.
- [ ] Map -- Scanner identifies AI risks in context
  &gt; Map identifies and understands AI risks at a strategic level. Scanner performs technical vulnerability assessment, which is more about quantifying specific risks than mapping them strategically.
- [x] Measure -- Scanner quantitatively assesses AI risks through vulnerability testing, producing specific findings with severity ratings that measure the model&#39;s risk posture
  &gt; Correct! The Measure function is about assessing and monitoring AI risks quantitatively. AI Scanner produces specific, measurable vulnerability assessments: which attack categories the model is susceptible to, how severe each vulnerability is, and what the overall risk posture looks like. This maps directly to NIST&#39;s Measure function -- quantifying risk through systematic assessment.
- [ ] Manage -- Scanner manages AI risks by blocking attacks
  &gt; Scanner assesses vulnerabilities; it doesn&#39;t block attacks (that&#39;s Guard&#39;s role). The Manage function corresponds more closely to deploying controls (Blueprint layers, Guard rules) that act on risks.

## A company develops a medical diagnosis AI system that will be deployed in EU markets. Under the EU AI Act, this system would be classified as which risk level?

&gt; Hint: Consider which domains the EU AI Act classifies as high-risk.

- [ ] Minimal risk -- medical AI is a standard AI application with no additional requirements
  &gt; Medical AI is far from minimal risk. Systems that affect health and safety are subject to the strictest applicable requirements.
- [ ] Limited risk -- the system only needs to disclose that it uses AI
  &gt; Limited risk applies to chatbots and emotion recognition systems. Medical diagnosis AI has direct health impact, placing it in a higher risk category.
- [x] High-risk -- medical diagnostics is a critical domain that affects health and safety, requiring conformity assessment, risk management, data governance, transparency, and human oversight
  &gt; Correct! The EU AI Act classifies AI systems used in medical diagnostics as high-risk because they directly affect human health and safety. High-risk systems must undergo conformity assessment, implement documented risk management (which the NIST AI RMF structure supports), ensure data governance (training data quality and representativeness), provide transparency (explainability), and enable meaningful human oversight.
- [ ] Unacceptable -- medical AI is prohibited under the EU AI Act
  &gt; The EU AI Act prohibits specific practices like social scoring and real-time biometric surveillance, not medical AI systems. Medical AI is permitted but classified as high-risk with strict compliance requirements.

## The section describes an AI incident in which an agent has been hijacked and is executing unauthorized tool calls. The incident response team needs to contain the agent. Which containment actions from the section should be taken, in order?

&gt; Hint: Review the agent-hijacking containment steps.

- [ ] Investigate the root cause, then revoke credentials, then audit agent actions
  &gt; Root-cause investigation happens after containment. The first priority is stopping the damage.
- [ ] Notify affected users, suspend tool access, and roll back changes
  &gt; User notification should happen after containment and investigation confirm user impact. Suspension and rollback are containment actions but should happen in the correct order.
- [x] Immediately suspend the agent&#39;s tool access, revoke all associated credentials, audit all actions taken during the compromise window, and review/roll back any changes the hijacked agent made
  &gt; Correct! The section specifies four containment actions in order: (1) suspend tool access immediately to stop ongoing unauthorized actions, (2) revoke all credentials associated with the agent to prevent re-exploitation, (3) audit all actions during the compromise window to understand the damage scope, and (4) review and potentially roll back changes (database modifications, file changes, communications) made by the hijacked agent. This sequence prioritizes stopping the damage (suspend/revoke) before assessing it (audit/rollback).
- [ ] Roll back all agent actions first, then investigate whether the agent was actually compromised
  &gt; Rolling back actions before investigation could destroy forensic evidence. Suspension and credential revocation come first, then the audit, then informed rollback decisions.

## The section argues that organizational practices are essential because &#34;the most sophisticated Blueprint layer is useless if the organization&#39;s culture doesn&#39;t support security practices.&#34; The Samsung data leak is cited as an example. What was the organizational failure in the Samsung case?

&gt; Hint: Think about why the employees used ChatGPT despite the security risks.

- [ ] Samsung didn&#39;t have technical DLP controls to block data exfiltration
  &gt; While DLP would have helped technically, the section identifies the root cause as cultural, not technical. The technology existed; the organizational practices didn&#39;t ensure it was used.
- [ ] Samsung&#39;s security team was unaware of ChatGPT&#39;s existence
  &gt; ChatGPT was widely known. The issue wasn&#39;t the security team&#39;s awareness of the tool but the organizational response to employee AI adoption.
- [x] The root cause was cultural -- engineers didn&#39;t understand the risk of sharing proprietary data with external AI services, and there was no governance framework (approved AI catalog, acceptable use policies, training) to channel their productivity needs into secure alternatives
  &gt; Correct! The section identifies that &#34;the root cause was cultural: engineers didn&#39;t understand the risk.&#34; Samsung&#39;s employees weren&#39;t acting maliciously -- they were trying to be more productive. The organizational failure was the absence of (1) training that explained why sharing proprietary data with external AI is risky, (2) acceptable use policies defining what data can&#39;t be shared, and (3) an approved AI tool catalog with secure alternatives that met the same productivity need.
- [ ] Samsung&#39;s API security was insufficient to protect their data
  &gt; The data wasn&#39;t leaked through API vulnerabilities. Employees voluntarily pasted proprietary data into ChatGPT&#39;s web interface. The failure was in organizational awareness and governance, not API security.

## The &#34;Bringing It All Together&#34; section describes a three-chapter arc: understand (Chapter 1), recognize threats (Chapter 2), build defenses (Chapter 3). Why does the section describe AI security as &#34;an ongoing discipline&#34; rather than a completed achievement?

&gt; Hint: Think about the relationship between AI capabilities and AI threats over time.

- [ ] Because new regulations will continuously change compliance requirements
  &gt; Regulatory changes are one factor, but the section&#39;s emphasis is on the dynamic nature of both AI capabilities and attack techniques.
- [ ] Because organizations always have budget constraints that prevent complete implementation
  &gt; Budget constraints are a practical reality but not the reason AI security is described as an ongoing discipline.
- [x] Because both AI capabilities and attack techniques evolve rapidly -- new model capabilities create new attack surfaces, new attack techniques require updated defenses, and the frameworks from this course provide the foundation for continuous adaptation
  &gt; Correct! AI security is ongoing because the landscape is dynamic in both directions: AI capabilities evolve (more powerful models, more autonomous agents, new deployment patterns), and attack techniques evolve (novel injection methods, new exploitation chains, AI-powered attacks). The Blueprint, LEARN, and the Scanner/Guard continuous loop are designed for adaptation, not one-time deployment. Continuous learning, testing, and updating is the discipline the section describes.
- [ ] Because human error can never be completely eliminated
  &gt; While human error is a persistent factor, the section&#39;s point about &#34;ongoing discipline&#34; encompasses the full dynamic of evolving capabilities, threats, and defenses.</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://example.org/chapter3/s11/activity/index.xml" rel="self" type="application/rss+xml" />
  </channel>
</rss>