Chapter 1 Labs: Hands-on AI Workflows with n8n

These labs reinforce your understanding of AI and LLMs through hands-on experience using n8n, an open-source workflow automation platform. Instead of writing code from scratch, you’ll import partially built workflow templates and complete the key learning-focused components yourself.

Each lab provides a JSON template that you import directly into n8n. The templates include the workflow structure and infrastructure – your job is to fill in the parts that demonstrate understanding of the concepts from Chapter 1.


Getting Started with n8n

What is n8n?

n8n is an open-source workflow automation platform that lets you connect AI models, APIs, and services through a visual interface. Think of it as building AI pipelines by connecting nodes rather than writing code. It’s used by businesses for production AI workflows – the same tool you’ll learn to attack in Chapter 2.

Setup Instructions

Option 1: n8n Cloud (Quickest)

  1. Sign up for a free trial at n8n.io
  2. Open your n8n dashboard
  3. You’re ready to import templates

Option 2: Local Installation

Install n8n locally using npm or Docker:

Using npm:

npm install -g n8n
n8n start

Using Docker:

docker run -it --rm --name n8n -p 5678:5678 n8nio/n8n

After starting, open http://localhost:5678 in your browser.

Importing a Lab Template

  1. Download the JSON template file (links below each lab)
  2. In n8n, click Add workflow (or the “+” button)
  3. Click the three-dot menu (top right) and select Import from File
  4. Select the downloaded JSON file
  5. The workflow will appear with all nodes pre-configured
  6. Look for nodes with STUDENT TASK in their descriptions – these are the parts you complete

Lab 1: LLM API Basics

Learning Objectives
  • Understand the structure of an LLM API request (endpoint, headers, body)
  • Experiment with model selection, system prompts, and user prompts
  • Observe how different parameters affect the response

Corresponds to: Sections 1-2 (Introduction to AI, Key Players and Models)

What’s pre-built:

  • Manual Trigger node to start the workflow
  • HTTP Request node configured for the OpenAI Chat Completions endpoint
  • Output formatting node to display the response cleanly

What you complete:

  • The system prompt (defining the AI’s role and behavior)
  • The user prompt (your actual question or instruction)
  • The temperature parameter (experiment with different values)

Estimated time: 20-30 minutes

Download: lab1-llm-basics.json

Hints and Tips
  • Start with a simple system prompt like “You are a helpful assistant”
  • Try different temperature values (0.1 vs 0.9) and compare the outputs
  • Experiment with the same user prompt but different system prompts to see how role assignment changes behavior
  • You’ll need an OpenAI API key – set it in the HTTP Header Auth credentials in n8n
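To see what you’re filling in, it helps to look at the request body the HTTP Request node sends. Below is a minimal sketch of a Chat Completions request body, assuming the standard OpenAI format; the model name and prompt strings are placeholders, not values from the lab template:

```python
import json

def build_chat_request(system_prompt, user_prompt, temperature=0.7):
    """Assemble a Chat Completions request body (the three STUDENT TASK fields)."""
    return {
        "model": "gpt-4o-mini",  # example model name; any chat-capable model works
        "messages": [
            {"role": "system", "content": system_prompt},  # defines the AI's role and behavior
            {"role": "user", "content": user_prompt},      # your actual question or instruction
        ],
        "temperature": temperature,  # ~0.1 = focused and repeatable, ~0.9 = more varied
    }

body = build_chat_request(
    "You are a helpful assistant",
    "Explain what an LLM is in one sentence.",
    temperature=0.1,
)
print(json.dumps(body, indent=2))
```

The three fields you complete in the lab map directly onto `system_prompt`, `user_prompt`, and `temperature` here; the endpoint URL and auth header live in the node’s configuration, not the body.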

Lab 2: Prompt Engineering Techniques

Learning Objectives
  • Compare zero-shot, few-shot, and chain-of-thought prompting techniques
  • Observe how different prompt strategies affect response quality
  • Build intuition for when to use each technique

Corresponds to: Section 5 (Prompt Engineering)

What’s pre-built:

  • Three parallel workflow paths (zero-shot, few-shot, chain-of-thought)
  • HTTP Request nodes for each path, pre-configured for the API
  • A task description node at the start
  • A comparison output node that displays all three results side by side

What you complete:

  • The zero-shot prompt text (direct instruction, no examples)
  • The few-shot prompt text (include 2-3 examples of the desired pattern)
  • The chain-of-thought prompt text (include step-by-step reasoning instructions)

Estimated time: 30-45 minutes

Download: lab2-prompt-engineering.json

Hints and Tips
  • The task is pre-defined in the first Set node – read it carefully before writing prompts
  • For few-shot, provide 2-3 examples that demonstrate the exact input-output format you want
  • For chain-of-thought, add “Let’s think through this step by step” and structure your prompt with numbered reasoning steps
  • Compare all three outputs – which technique produced the best result for this task? Would a different task favor a different technique?
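As a reference point, here is a sketch of what the three prompt texts might look like for a hypothetical sentiment-classification task (the lab’s actual task is defined in the first Set node; this task and these examples are purely illustrative):

```python
TASK = "Classify the sentiment of a movie review as positive or negative."

# Zero-shot: a direct instruction with no examples.
zero_shot = f"{TASK}\n\nReview: {{review}}\nSentiment:"

# Few-shot: 2-3 examples demonstrating the exact input-output format you want.
few_shot = (
    f"{TASK}\n\n"
    "Review: I loved every minute of it.\nSentiment: positive\n\n"
    "Review: A dull, lifeless mess.\nSentiment: negative\n\n"
    "Review: {review}\nSentiment:"
)

# Chain-of-thought: instruct the model to reason step by step before answering.
chain_of_thought = (
    f"{TASK}\n\n"
    "Review: {review}\n\n"
    "Let's think through this step by step:\n"
    "1. List the emotionally charged words in the review.\n"
    "2. Decide whether they are mostly positive or negative.\n"
    "3. State the final sentiment on its own line."
)

# Fill the {review} slot the same way for all three, so the comparison is fair.
print(zero_shot.format(review="Surprisingly good, if a little long."))
```

Keeping the task text identical across all three paths (as the pre-built comparison node assumes) is what makes the side-by-side output meaningful.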

Lab 3: RAG Pipeline

Learning Objectives
  • Understand the Retrieval-Augmented Generation (RAG) data flow
  • Build a query that retrieves relevant information from a document set
  • Construct a prompt template that combines retrieved context with a user question

Corresponds to: Section 6 (Inference Techniques – RAG Pipeline)

What’s pre-built:

  • Sample documents loaded via a Set node (simulating a document store)
  • A mock retrieval function that performs basic keyword matching (simulating vector similarity search)
  • An HTTP Request node for the LLM generation call
  • Output formatting

What you complete:

  • The search query construction (how to extract key terms from the user question)
  • The RAG prompt template (how to combine retrieved context with the user question to guide the LLM)

Estimated time: 30-45 minutes

Download: lab3-rag-pipeline.json

Simplified RAG

This lab uses a simplified RAG flow with mock retrieval (keyword matching instead of vector similarity search). The goal is to understand the pattern – how documents are searched and how retrieved context is injected into prompts – not to build a production vector database. Real RAG systems use embedding models and vector databases (Pinecone, Chroma, pgvector) for semantic search.
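The keyword-matching idea behind the mock retrieval can be sketched in a few lines. This is an illustration of the pattern, not the lab’s actual retrieval code, and the document texts below are invented for the example:

```python
# Toy document store, standing in for the lab's Set node.
DOCUMENTS = [
    "n8n is an open-source workflow automation platform.",
    "RAG combines retrieval with generation to ground LLM answers.",
    "Temperature controls randomness in LLM sampling.",
]

def retrieve(question, docs, top_k=2):
    """Score each document by word overlap with the question (mock similarity search)."""
    query_terms = set(question.lower().split())
    scored = [(len(query_terms & set(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only documents that matched at least one term.
    return [d for score, d in scored[:top_k] if score > 0]
```

Real systems replace the word-overlap score with cosine similarity between embedding vectors, but the data flow (question in, ranked context documents out) is the same.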

Hints and Tips
  • Look at the sample documents in the Set node to understand what information is available
  • Your query construction should extract the key concepts from the user’s question
  • Your prompt template should clearly separate the retrieved context from the user question
  • Include instructions in the prompt telling the LLM to only answer based on the provided context
  • Try asking questions that ARE in the documents and questions that are NOT – observe the difference
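The hints above about separating context from question and constraining the LLM can be combined into a single template. A minimal sketch (the wording is one reasonable choice, not the lab’s required answer):

```python
RAG_TEMPLATE = """Answer the question using ONLY the context below.
If the context does not contain the answer, say "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def build_rag_prompt(question, retrieved_docs):
    """Inject retrieved documents into the template as a clearly delimited block."""
    context = "\n".join(f"- {d}" for d in retrieved_docs)
    return RAG_TEMPLATE.format(context=context, question=question)

print(build_rag_prompt(
    "What is n8n?",
    ["n8n is a workflow automation platform."],
))
```

The "ONLY the context" instruction is what produces the difference you should observe between questions that are and are not covered by the documents.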

Lab 4: Agentic Workflow

Learning Objectives
  • Understand the agent loop: plan, select tool, execute, observe, decide
  • Build decision logic that determines which tool to use and when to stop
  • Recognize trust boundaries in an agentic workflow
  • Connect agentic capabilities to attack surface awareness (bridging to Chapter 2)

Corresponds to: Section 7 (Agentic AI)

What’s pre-built:

  • An agent loop structure with a planning LLM call
  • Mock tool definitions (web search simulator, calculator simulator)
  • An observation/evaluation step
  • A final answer generation node
  • Detailed notes explaining trust boundaries at each tool execution point

What you complete:

  • The planning prompt (how the agent decides what to do next)
  • The decision logic in the Switch node (when to use web search, when to use calculator, when to return final answer)

Estimated time: 45-60 minutes

Download: lab4-agentic-workflow.json

Security Awareness

As you build the decision logic, notice how each tool execution crosses a trust boundary. The agent’s planning prompt determines what actions are taken – this is exactly the attack surface that Chapter 2 will explore. A malicious input could manipulate the planning prompt to misuse tools, exfiltrate data, or take unauthorized actions. Keep this in mind as you design the decision flow.

Hints and Tips
  • The planning prompt should instruct the LLM to output a structured decision (which tool to use and why)
  • The Switch node routes based on the LLM’s decision – set conditions that match the output format from your planning prompt
  • Include a “done” condition so the agent knows when to stop looping and return the final answer
  • After completing the lab, review the trust boundary notes on each tool node and think about how an attacker could exploit this workflow
  • This lab directly bridges to Chapter 2 – understanding how agents work is the first step to understanding how they can be attacked

Tips for All Labs

General Guidance
  1. Read the STUDENT TASK notes in each node’s description before starting
  2. Experiment freely – you can always re-import the template to start fresh
  3. Compare approaches – try different prompts, parameters, and strategies
  4. Think about security – as you build each workflow, consider how it could be misused
  5. Document your observations – what worked, what didn’t, and why

Prerequisites

  • An n8n instance (cloud or local – see setup instructions above)
  • An OpenAI API key (or compatible API key for the HTTP Request nodes)
  • Basic familiarity with the Chapter 1 content for each corresponding lab

Resources