2 Core components: Large Language Models, prompting, and agents
This chapter covers
- Understanding Large Language Models
- Controlling LLMs with Prompt Engineering
- Building an Agent with the OpenAI Agents SDK
- Enhancing Agents through Tool Integration
Now we roll up our sleeves and start building. In the last chapter, we provided a high-level overview of an agent, the autonomous thinking engine that can act on your behalf. In this chapter, we dive into the foundational components that make agents possible: the large language model that serves as the brain, the prompting techniques that shape its reasoning, and the OpenAI Agents SDK that orchestrates everything. Think of this chapter as assembling your agent's core architecture: you'll learn how each piece works individually before we wire them into a cohesive system. By the end, you'll have hands-on experience with the building blocks that transform a language model from a helpful assistant into a capable agent ready to tackle real work.
2.1 Understanding Large Language Models
Large Language Models (LLMs) have become ubiquitous in AI. Agents powered by LLMs increasingly demonstrate how capable this combination can be, and it is the one we will use throughout this book.
LLMs are often perceived as black boxes, and we may not entirely understand how they work internally, but we know enough to put them to work. In the next section, we will examine how LLMs power our agents and agentic systems.
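Before we go deeper, it helps to see how little code a basic LLM call requires. The sketch below is a minimal example, assuming the official `openai` Python package (v1 or later), an `OPENAI_API_KEY` set in your environment, and `gpt-4o` as an illustrative model choice; it sends one chat message and prints the reply. Everything we build in this chapter elaborates on this same request/response pattern.

```python
# A minimal LLM call: one user message in, one completion out.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; substitute any chat-capable model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is an AI agent?"},
    ],
)

print(response.choices[0].message.content)
```

Notice that the model itself is stateless here: each call carries the full conversation in `messages`. Keeping track of that context, and deciding what goes into it, is a large part of what prompting and the agent framework handle for us later in the chapter.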