Chapter 2 Labs
Chapter 2 Labs: Attack Demonstrations with n8n
These labs demonstrate AI attack techniques using n8n workflow templates with mock targets. You’ll see how prompt injection, RAG poisoning, and agent goal hijacking work in practice – building the attack fluency needed to have informed conversations about AI security.
Each lab provides a JSON template that you import directly into n8n. The templates include mock target systems (simulated chatbots, document corpora, and agent workflows) so you can safely observe attack techniques without affecting any real services. Your job is to complete the attack phases and mini-challenges.
Getting Started with n8n
What is n8n?
n8n is an open-source workflow automation platform that lets you connect AI models, APIs, and services through a visual interface. In Chapter 1, you used it to explore LLM capabilities. In Chapter 2, you’ll use it to demonstrate attack techniques against mock targets. The same platform that enables powerful AI workflows is the same one that attackers target – making it the ideal learning tool for both sides.
Educational Purpose Only
These labs demonstrate attack techniques for educational purposes. All targets are mock systems – no real services are affected. Defense strategies for each of these attacks are covered in Chapter 3. Understanding how attacks work is the foundation for understanding how to defend against them.
Lab 1: Prompt Injection Techniques
Learning Objectives
- Craft direct prompt injection payloads that override system instructions
- Observe how different injection techniques (instruction override, role-play) affect chatbot behavior
- Extract a system prompt using social engineering techniques aimed at the model
- Understand why prompt injection is the most prevalent LLM vulnerability (LLM01)
Corresponds to: Section 2 (Prompt-Level Attacks)
What’s pre-built:
- Manual Trigger node with Phase 1 and Phase 2 instructions
- A mock chatbot target (AcmeCorp customer service) with a hidden system prompt containing restricted information
- HTTP Request nodes pre-configured for two injection techniques
- Simulated response nodes showing what successful attacks look like
What you complete:
- Craft a direct instruction override injection payload
- Craft a role-play based injection payload
- Mini-Challenge (Phase 2): Extract the full system prompt using any technique you choose
Estimated time: 20-30 minutes
Download: ch2-lab1-prompt-injection.json
Lab 2: RAG Poisoning
Learning Objectives
- Observe how a poisoned document in a RAG corpus affects retrieval results
- Compare normal vs. poisoned query responses to understand the attack’s impact
- Identify which document in a corpus has been poisoned and explain the attack mechanism
- Understand why RAG systems are particularly vulnerable to data injection (LLM08)
Corresponds to: Section 3 (Data and Training Attacks)
What’s pre-built:
- Manual Trigger node with Phase 1 and Phase 2 instructions
- A document corpus (5 company policy documents as JSON array) with one containing hidden malicious instructions
- A mock retrieval function simulating keyword-based document search
- HTTP Request nodes for normal and triggered queries
- Simulated response nodes showing clean vs. poisoned outputs
What you complete:
- Run the normal query path and observe clean results
- Run the triggered query path and observe how the poisoned document changes the response
- Mini-Challenge (Phase 2): Identify which document is poisoned, explain how the poisoning works, and describe what query patterns would trigger it
Estimated time: 20-30 minutes
Download: ch2-lab2-rag-poisoning.json
Lab 3: Agent Goal Hijacking
Learning Objectives
- Observe normal vs. hijacked agent behavior in a tool-using workflow
- Trace how poisoned tool output redirects an agent’s decision-making
- Craft a stealthy exfiltration payload that mimics normal agent behavior
- Connect agentic attack vectors to the OWASP Agentic AI Top 10 (ASI01, ASI02)
Corresponds to: Section 5 (Agentic AI Attack Vectors)
What’s pre-built:
- Manual Trigger node with Phase 1 and Phase 2 instructions
- An agent task definition (research assignment)
- Two parallel paths: clean tool output vs. poisoned tool output containing hidden goal-hijacking instructions
- HTTP Request nodes for normal and hijacked agent processing
- Simulated output nodes showing expected behavior vs. data exfiltration attempt
What you complete:
- Observe the normal path (clean tool output leads to expected summary)
- Observe the hijacked path (poisoned tool output redirects the agent)
- Mini-Challenge (Phase 2): Craft a stealthy exfiltration instruction that would be harder to detect than the obvious one
Estimated time: 25-35 minutes
Download: ch2-lab3-agent-hijacking.json
What’s Next
Defense techniques for each of these attacks – prompt injection defenses, RAG security hardening, and agent guardrails – are covered in Chapter 3: Protecting LLMs from Attacks. Understanding how attacks work (this chapter) is the foundation for understanding how to defend against them (next chapter).
Tips for All Labs
General Guidance
- Read the STUDENT TASK notes in each node’s description before starting
- Examine the mock targets carefully – understanding the target is the first step in any attack
- Compare paths – run both normal and attack paths to see the difference
- Think like a defender – for every attack you observe, consider what would prevent it
- Document your observations – what worked, what didn’t, and why
Prerequisites
- An n8n instance (cloud or local – see setup instructions above)
- An OpenAI API key (or compatible API key for the HTTP Request nodes)
- Completion of Chapter 2 Sections 2, 3, and 5 for the corresponding labs