3 Enabling actions: Tool use
This chapter covers
- LLM limitations and why they need tools
- Tool calling and execution
- Building and integrating custom tools
- Tool abstraction with classes and decorators
- MCP for tool standardization
At this point, you understand how LLMs serve as the brain of AI agents, processing unstructured data, understanding user intent, and handling general tasks. On their own, however, LLMs can’t access external data or interact with external systems. They need tools, along with the ability to select and use those tools—what we call tool calling—which is central to implementing agents. A basic agent performs a task step by step by repeatedly selecting the next action to take, and selecting that next action is precisely the role of tool calling.
Let’s turn our attention to this fundamental part of agentic architecture, as illustrated in figure 3.1. We’ll explore how to build tools and inform the LLM about how to use those tools. We’ll also learn how the LLM chooses the appropriate tool for a given task and how the result of that tool execution is relayed back to the LLM. Then we’ll examine the Model Context Protocol (MCP) by Anthropic, an initiative aimed at standardizing the development and use of tools.
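Before diving in, here is a minimal, hypothetical sketch of the loop just described: the agent presents tools, the LLM selects one, the agent executes it, and the result is relayed back. The tool names (`get_time`, `add`) and the `fake_llm_select_tool` stub are illustrative inventions; a real agent would send the tool descriptions to a model API and parse its structured response instead of matching keywords.

```python
from datetime import datetime, timezone

# Hypothetical tool registry: each tool has a description (what the LLM sees)
# and a function (what the agent executes).
TOOLS = {
    "get_time": {
        "description": "Return the current UTC time as an ISO-8601 string.",
        "func": lambda: datetime.now(timezone.utc).isoformat(),
    },
    "add": {
        "description": "Add two numbers and return the sum.",
        "func": lambda a, b: a + b,
    },
}

def fake_llm_select_tool(user_request: str):
    """Stand-in for the LLM: choose a tool name and its arguments.
    A real implementation would prompt a model with the tool descriptions."""
    if "time" in user_request.lower():
        return "get_time", {}
    return "add", {"a": 2, "b": 3}

def run_agent(user_request: str) -> str:
    tool_name, args = fake_llm_select_tool(user_request)  # 1. LLM picks a tool
    result = TOOLS[tool_name]["func"](**args)             # 2. agent executes it
    return f"Tool {tool_name} returned: {result}"         # 3. result goes back to the LLM

print(run_agent("What time is it?"))
print(run_agent("Please add two and three."))
```

The stub keeps the example self-contained; the chapter replaces it with real LLM calls and a richer tool abstraction.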
Figure 3.1 Tools are the channels through which LLM agents access external information and exert influence, and they are the units of action.
