<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Chapter 1: Introduction to AI and LLMs :: Introduction to AI Security</title>
    <link>https://example.org/chapter1/index.html</link>
    <description>Is this Chapter for You? One of the core needs for technical professionals is to keep up with emerging technologies that are transforming how we build and deploy software. AI and LLMs represent a fundamental shift in what’s possible with code, but they come with their own concepts, terminology, and best practices – it can be overwhelming to know where to start!</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://example.org/chapter1/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>1. Introduction to AI and LLMs</title>
      <link>https://example.org/chapter1/s1/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s1/index.html</guid>
      <description>TL;DR Too long to read? Prefer to listen to this section? We’ve got you covered! This is an audio-podcast version of this section, produced using Google’s NotebookLM.&#xA;Using AI to teach you AI, how meta!&#xA;Your browser does not support the audio element. Alternatively, if you feel like you know this already, try your hand at the optional quiz below and see how you do. Or you can just skip to the next section. We won’t judge you!</description>
    </item>
    <item>
      <title>2. Key Players and Models</title>
      <link>https://example.org/chapter1/s2/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s2/index.html</guid>
      <description>Introduction Now that we’ve explored the foundational architecture of large language models (LLMs) and the 2025/2026 AI landscape, let’s map the ecosystem of key players and models shaping this transformative technology. The field has evolved rapidly – new providers have emerged, reasoning models have become a distinct category, and the open-source ecosystem has fundamentally shifted the balance of power. Whether you’re considering commercial solutions, open-source options, or local deployments, understanding the ecosystem is essential for selecting the right tool for your needs.</description>
    </item>
    <item>
      <title>3. Deployment Considerations</title>
      <link>https://example.org/chapter1/s3/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s3/index.html</guid>
      <description>Introduction The AI landscape can be confusing when it comes to deployment choices, particularly because similar names often mask very different security and operational implications. For instance, when someone mentions “using GPT,” they might be referring to ChatGPT’s web interface, OpenAI’s API service, or Azure’s enterprise deployment – each with vastly different security profiles and use cases.</description>
    </item>
    <item>
      <title>4. Technical Foundations</title>
      <link>https://example.org/chapter1/s4/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s4/index.html</guid>
      <description>Introduction Now that we’ve explored how AI evolved into its current form, let’s look under the hood and examine the engine that powers large language models (LLMs). These systems are marvels of engineering, built on a foundation of interconnected components that work together to process and generate human-like text.&#xA;What will I get out of this? By the end of this section, you will be able to:</description>
    </item>
    <item>
      <title>5. Prompt Engineering</title>
      <link>https://example.org/chapter1/s5/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s5/index.html</guid>
      <description>Introduction At their core, LLMs work by responding to “prompts” – text inputs that tell the model what we want it to do. Think of a prompt as a conversation starter or instruction that guides the AI’s response. However, there’s more complexity to prompts than meets the eye, especially when working with different API types, managing conversations, and optimizing for the new generation of reasoning models.</description>
    </item>
    <item>
      <title>6. Inference Techniques</title>
      <link>https://example.org/chapter1/s6/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s6/index.html</guid>
      <description>Introduction Now that we’ve explored the fundamentals of LLMs, key players, deployment considerations, technical foundations, and the art of prompt engineering, it’s time to dive into how these models actually operate in real-world applications. This section examines the technical aspects of inference – the process where LLMs generate responses to our inputs – focusing on API integration patterns, response handling, knowledge integration with RAG, and cost optimization strategies.</description>
    </item>
    <item>
      <title>7. Agentic AI</title>
      <link>https://example.org/chapter1/s7/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/s7/index.html</guid>
      <description>Introduction Throughout this chapter, we’ve explored the foundations of AI systems, from understanding their core architectures to examining deployment strategies, technical underpinnings, crafting effective prompts, and implementing inference techniques. Now we turn to the most significant development in the 2025/2026 AI landscape: agentic AI is here, and it’s in production.&#xA;This isn’t speculation about a future technology. Software engineers use Claude Code and Cursor daily to write production code. Businesses run autonomous workflows through n8n and CrewAI. OpenClaw demonstrates open-source agentic capabilities. The shift from passive tools to proactive agents has already happened – and it fundamentally changes both what AI can do and what can go wrong.</description>
    </item>
    <item>
      <title>Chapter 1 Labs</title>
      <link>https://example.org/chapter1/labs/index.html</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://example.org/chapter1/labs/index.html</guid>
      <description>Chapter 1 Labs: Hands-on AI Workflows with n8n These labs reinforce your understanding of AI and LLMs through hands-on experience using n8n, an open-source workflow automation platform. Instead of writing code from scratch, you’ll import partially built workflow templates and complete the key learning-focused components yourself.&#xA;Each lab provides a JSON template that you import directly into n8n. The templates include the workflow structure and infrastructure – your job is to fill in the parts that demonstrate understanding of the concepts from Chapter 1.</description>
    </item>
  </channel>
</rss>