Chapter 1 Labs: Hands-on AI Workflows with n8n
These labs reinforce your understanding of AI and LLMs through hands-on experience using n8n, an open-source workflow automation platform. Instead of writing code from scratch, you’ll import partially built workflow templates and complete the key learning-focused components yourself.
Each lab provides a JSON template that you import directly into n8n. The templates include the workflow structure and infrastructure – your job is to fill in the parts that demonstrate understanding of the concepts from Chapter 1.
Getting Started with n8n
What is n8n?
n8n is an open-source workflow automation platform that lets you connect AI models, APIs, and services through a visual interface. Think of it as building AI pipelines by connecting nodes rather than writing code. It’s used by businesses for production AI workflows – the same tool you’ll learn to attack in Chapter 2.
Lab 1: LLM API Basics
Learning Objectives
- Understand the structure of an LLM API request (endpoint, headers, body)
- Experiment with model selection, system prompts, and user prompts
- Observe how different parameters affect the response
Corresponds to: Sections 1-2 (Introduction to AI, Key Players and Models)
What’s pre-built:
- Manual Trigger node to start the workflow
- HTTP Request node configured for the OpenAI Chat Completions endpoint
- Output formatting node to display the response cleanly
What you complete:
- The system prompt (defining the AI’s role and behavior)
- The user prompt (your actual question or instruction)
- The temperature parameter (experiment with different values)
Estimated time: 20-30 minutes
Download: lab1-llm-basics.json
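The request that the HTTP Request node sends can be sketched in plain Python. This is a sketch, not the node’s exact configuration: the endpoint and header names follow OpenAI’s Chat Completions API, but the model name, both prompts, and the temperature value are placeholders for exactly the parts you complete in the lab.

```python
import json

# Sketch of the Chat Completions request the HTTP Request node sends.
# Model, prompts, and temperature are placeholders to fill in yourself.
url = "https://api.openai.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # replace with your key
    "Content-Type": "application/json",
}
body = {
    "model": "gpt-4o-mini",  # model selection (placeholder)
    "messages": [
        # System prompt: defines the AI's role and behavior
        {"role": "system", "content": "You are a concise technical assistant."},
        # User prompt: your actual question or instruction
        {"role": "user", "content": "Explain temperature in one sentence."},
    ],
    "temperature": 0.7,  # 0.0 = near-deterministic; higher = more varied output
}

print(json.dumps(body, indent=2))
```

Try changing only the temperature between runs and compare how much the responses vary for the same prompts.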
Lab 2: Prompt Engineering Techniques
Learning Objectives
- Compare zero-shot, few-shot, and chain-of-thought prompting techniques
- Observe how different prompt strategies affect response quality
- Build intuition for when to use each technique
Corresponds to: Section 5 (Prompt Engineering)
What’s pre-built:
- Three parallel workflow paths (zero-shot, few-shot, chain-of-thought)
- HTTP Request nodes for each path, pre-configured for the API
- A task description node at the start
- A comparison output node that displays all three results side by side
What you complete:
- The zero-shot prompt text (direct instruction, no examples)
- The few-shot prompt text (include 2-3 examples of the desired pattern)
- The chain-of-thought prompt text (include step-by-step reasoning instructions)
Estimated time: 30-45 minutes
Download: lab2-prompt-engineering.json
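The three prompting styles differ only in the prompt text, so they can be illustrated side by side. The sentiment-classification task and the example reviews below are illustrative placeholders, not the task shipped in the template:

```python
task = "Classify the sentiment of: 'The battery died after an hour.'"

# Zero-shot: direct instruction, no examples.
zero_shot = f"Classify the sentiment as positive or negative.\n{task}"

# Few-shot: 2-3 examples establishing the desired pattern before the real task.
few_shot = (
    "Classify the sentiment as positive or negative.\n"
    "Review: 'Love this phone!' -> positive\n"
    "Review: 'Arrived broken.' -> negative\n"
    f"{task} ->"
)

# Chain-of-thought: ask for step-by-step reasoning before the answer.
chain_of_thought = (
    f"{task}\n"
    "Think step by step: identify the key phrases, decide what feeling "
    "each conveys, then state the overall sentiment."
)

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```

In the lab, each of these strings goes into the corresponding parallel path, and the comparison node shows how the same model answers each one.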
Lab 3: RAG Pipeline
Learning Objectives
- Understand the Retrieval-Augmented Generation (RAG) data flow
- Build a query that retrieves relevant information from a document set
- Construct a prompt template that combines retrieved context with a user question
Corresponds to: Section 6 (Inference Techniques – RAG Pipeline)
What’s pre-built:
- Sample documents loaded via a Set node (simulating a document store)
- A mock retrieval function that performs basic keyword matching (simulating vector similarity search)
- An HTTP Request node for the LLM generation call
- Output formatting
What you complete:
- The search query construction (how to extract key terms from the user question)
- The RAG prompt template (how to combine retrieved context with the user question to guide the LLM)
Estimated time: 30-45 minutes
Download: lab3-rag-pipeline.json
Simplified RAG
This lab uses a simplified RAG flow with mock retrieval (keyword matching instead of vector similarity search). The goal is to understand the pattern – how documents are searched and how retrieved context is injected into prompts – not to build a production vector database. Real RAG systems use embedding models and vector databases (Pinecone, Chroma, pgvector) for semantic search.
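The mock-retrieval pattern can be sketched in a few lines. The documents and question below are illustrative stand-ins for the Set node’s contents, and the keyword scoring is a deliberately naive substitute for vector similarity search:

```python
import re

# Toy document store, mirroring the Set node in the lab (placeholder content).
documents = [
    "n8n is an open-source workflow automation platform.",
    "RAG combines retrieval with generation to ground LLM answers.",
    "Temperature controls randomness in LLM sampling.",
]

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs, top_k=1):
    """Mock retrieval: rank documents by keyword overlap with the question."""
    q_terms = tokenize(question)
    scored = sorted(docs, key=lambda d: len(q_terms & tokenize(d)), reverse=True)
    return scored[:top_k]

question = "How does RAG ground LLM answers?"
context = "\n".join(retrieve(question, documents))

# RAG prompt template: retrieved context injected ahead of the user question.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
print(prompt)
```

The shape is the same in a production system; only `retrieve` changes, from keyword overlap to embedding-based vector search.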
Lab 4: Agentic Workflow
Learning Objectives
- Understand the agent loop: plan, select tool, execute, observe, decide
- Build decision logic that determines which tool to use and when to stop
- Recognize trust boundaries in an agentic workflow
- Connect agentic capabilities to attack surface awareness (bridging to Chapter 2)
Corresponds to: Section 7 (Agentic AI)
What’s pre-built:
- An agent loop structure with a planning LLM call
- Mock tool definitions (web search simulator, calculator simulator)
- An observation/evaluation step
- A final answer generation node
- Detailed notes explaining trust boundaries at each tool execution point
What you complete:
- The planning prompt (how the agent decides what to do next)
- The decision logic in the Switch node (when to use web search, when to use calculator, when to return final answer)
Estimated time: 45-60 minutes
Download: lab4-agentic-workflow.json
Security Awareness
As you build the decision logic, notice how each tool execution crosses a trust boundary. The agent’s planning prompt determines what actions are taken – this is exactly the attack surface that Chapter 2 will explore. A malicious input could manipulate the planning prompt to misuse tools, exfiltrate data, or take unauthorized actions. Keep this in mind as you design the decision flow.
Tips for All Labs
General Guidance
- Read the STUDENT TASK notes in each node’s description before starting
- Experiment freely – you can always re-import the template to start fresh
- Compare approaches – try different prompts, parameters, and strategies
- Think about security – as you build each workflow, consider how it could be misused
- Document your observations – what worked, what didn’t, and why
Prerequisites
- An n8n instance (cloud or local – see setup instructions above)
- An OpenAI API key (or compatible API key for the HTTP Request nodes)
- Basic familiarity with the Chapter 1 content for each corresponding lab