Chapter 2 Labs

Chapter 2 Labs: Attack Demonstrations with n8n

These labs demonstrate AI attack techniques using n8n workflow templates with mock targets. You’ll see how prompt injection, RAG poisoning, and agent goal hijacking work in practice – building the attack fluency needed to have informed conversations about AI security.

Each lab provides a JSON template that you import directly into n8n. The templates include mock target systems (simulated chatbots, document corpora, and agent workflows) so you can safely observe attack techniques without affecting any real services. Your job is to complete the attack phases and mini-challenges.


Getting Started with n8n

What is n8n?

n8n is an open-source workflow automation platform that lets you connect AI models, APIs, and services through a visual interface. In Chapter 1, you used it to explore LLM capabilities. In Chapter 2, you’ll use it to demonstrate attack techniques against mock targets. The same platform that enables powerful AI workflows is the same one that attackers target – making it the ideal learning tool for both sides.

Setup Instructions (click to expand)

Option 1: n8n Cloud (Quickest)

  1. Sign up for a free trial at n8n.io
  2. Open your n8n dashboard
  3. You’re ready to import templates

Install n8n locally using npm or Docker:

Using npm:

npm install -g n8n
n8n start

Using Docker:

docker run -it --rm --name n8n -p 5678:5678 n8nio/n8n

After starting, open http://localhost:5678 in your browser.

Version Note: These labs require n8n v1.60+ for full workflow compatibility. If you are running a self-hosted instance, ensure you are on a version that includes the patch for CVE-2025-68613 (SSRF vulnerability discussed in Section 4) – check the n8n changelog for details.

Importing a Lab Template

  1. Download the JSON template file (links below each lab)
  2. In n8n, click Add workflow (or the “+” button)
  3. Click the three-dot menu (top right) and select Import from File
  4. Select the downloaded JSON file
  5. The workflow will appear with all nodes pre-configured
  6. Look for nodes with STUDENT TASK in their descriptions – these are the parts you complete

Educational Purpose Only

These labs demonstrate attack techniques for educational purposes. All targets are mock systems – no real services are affected. Defense strategies for each of these attacks are covered in Chapter 3. Understanding how attacks work is the foundation for understanding how to defend against them.


Lab 1: Prompt Injection Techniques

Learning Objectives
  • Craft direct prompt injection payloads that override system instructions
  • Observe how different injection techniques (instruction override, role-play) affect chatbot behavior
  • Extract a system prompt using social engineering techniques aimed at the model
  • Understand why prompt injection is the most prevalent LLM vulnerability (LLM01)

Corresponds to: Section 2 (Prompt-Level Attacks)

What’s pre-built:

  • Manual Trigger node with Phase 1 and Phase 2 instructions
  • A mock chatbot target (AcmeCorp customer service) with a hidden system prompt containing restricted information
  • HTTP Request nodes pre-configured for two injection techniques
  • Simulated response nodes showing what successful attacks look like

What you complete:

  • Craft a direct instruction override injection payload
  • Craft a role-play based injection payload
  • Mini-Challenge (Phase 2): Extract the full system prompt using any technique you choose

Estimated time: 20-30 minutes

Download: ch2-lab1-prompt-injection.json

Hints and Tips
  • Start with the classic “Ignore all previous instructions” approach in the first injection node
  • For the role-play injection, try creating an alternate persona (like “AdminBot”) that has permission to reveal restricted information
  • For the mini-challenge, experiment with different extraction techniques: direct requests, translation tricks, or asking the model to format its instructions as JSON
  • Compare the simulated responses to understand what a successful vs. unsuccessful injection looks like
  • Think about what real-world defenses could prevent each technique – you’ll learn those in Chapter 3

Lab 2: RAG Poisoning

Learning Objectives
  • Observe how a poisoned document in a RAG corpus affects retrieval results
  • Compare normal vs. poisoned query responses to understand the attack’s impact
  • Identify which document in a corpus has been poisoned and explain the attack mechanism
  • Understand why RAG systems are particularly vulnerable to data injection (LLM08)

Corresponds to: Section 3 (Data and Training Attacks)

What’s pre-built:

  • Manual Trigger node with Phase 1 and Phase 2 instructions
  • A document corpus (5 company policy documents as JSON array) with one containing hidden malicious instructions
  • A mock retrieval function simulating keyword-based document search
  • HTTP Request nodes for normal and triggered queries
  • Simulated response nodes showing clean vs. poisoned outputs

What you complete:

  • Run the normal query path and observe clean results
  • Run the triggered query path and observe how the poisoned document changes the response
  • Mini-Challenge (Phase 2): Identify which document is poisoned, explain how the poisoning works, and describe what query patterns would trigger it

Estimated time: 20-30 minutes

Download: ch2-lab2-rag-poisoning.json

Hints and Tips
  • Start by examining the Document Corpus node carefully – read each document and look for anything unusual
  • The poisoned document may contain instructions that look like normal text but have a special purpose
  • Compare the two query paths side by side: what’s different about the outputs? What caused the difference?
  • Think about how the PoisonedRAG research showed that as few as 5 documents can backdoor a corpus of millions
  • Consider: if you were a security auditor, what would you look for in a RAG corpus to detect poisoning?

Lab 3: Agent Goal Hijacking

Learning Objectives
  • Observe normal vs. hijacked agent behavior in a tool-using workflow
  • Trace how poisoned tool output redirects an agent’s decision-making
  • Craft a stealthy exfiltration payload that mimics normal agent behavior
  • Connect agentic attack vectors to the OWASP Agentic AI Top 10 (ASI01, ASI02)

Corresponds to: Section 5 (Agentic AI Attack Vectors)

What’s pre-built:

  • Manual Trigger node with Phase 1 and Phase 2 instructions
  • An agent task definition (research assignment)
  • Two parallel paths: clean tool output vs. poisoned tool output containing hidden goal-hijacking instructions
  • HTTP Request nodes for normal and hijacked agent processing
  • Simulated output nodes showing expected behavior vs. data exfiltration attempt

What you complete:

  • Observe the normal path (clean tool output leads to expected summary)
  • Observe the hijacked path (poisoned tool output redirects the agent)
  • Mini-Challenge (Phase 2): Craft a stealthy exfiltration instruction that would be harder to detect than the obvious one

Estimated time: 25-35 minutes

Download: ch2-lab3-agent-hijacking.json

Hints and Tips
  • Start by running the normal path to establish a baseline for expected agent behavior
  • Then run the hijacked path and compare: what changed in the agent’s output?
  • Look at the poisoned tool output node – how are the hijacking instructions embedded?
  • For the mini-challenge, think about what makes an exfiltration attempt “stealthy”: blending with normal output, using legitimate-looking channels, or hiding data in expected formats
  • This lab connects directly to Chapter 1 Lab 4 (Agentic Workflow) – the same trust boundaries you identified there are the ones being exploited here
  • Defense techniques for agent hijacking are covered in Chapter 3

What’s Next

Defense techniques for each of these attacks – prompt injection defenses, RAG security hardening, and agent guardrails – are covered in Chapter 3: Protecting LLMs from Attacks. Understanding how attacks work (this chapter) is the foundation for understanding how to defend against them (next chapter).

Tips for All Labs

General Guidance
  1. Read the STUDENT TASK notes in each node’s description before starting
  2. Examine the mock targets carefully – understanding the target is the first step in any attack
  3. Compare paths – run both normal and attack paths to see the difference
  4. Think like a defender – for every attack you observe, consider what would prevent it
  5. Document your observations – what worked, what didn’t, and why

Prerequisites

  • An n8n instance (cloud or local – see setup instructions above)
  • An OpenAI API key (or compatible API key for the HTTP Request nodes)
  • Completion of Chapter 2 Sections 2, 3, and 5 for the corresponding labs

Resources