9 AI agents


This chapter covers

  • Building AI agents for structured research
  • Using LangChain to build AI agents
  • Reusing strong prompts for comparable research
  • Exporting the results of your study to Notion

Chapter 8 concluded that large language models (LLMs) are powerful research assistants that can significantly streamline asset analysis. We covered the fundamentals, including the distinction between discriminative and generative AI (GenAI), as well as how to prompt LLMs effectively.

The first part of chapter 9 applies LLMs without frameworks to build AI agents. We’ll start with scenarios for integrating LLMs into workflows, then create a prompt repository and design a first workflow that brings more structure to AI-augmented research. Finally, we’ll demonstrate how to export our findings to Notion, a modern note-taking application that helps organize and structure research efficiently.
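As a preview of that export step, here is a minimal sketch of pushing a research note into a Notion database. It assumes the official notion-client Python package, a NOTION_TOKEN integration token and NOTION_DATABASE_ID set in the environment, and a database whose title property is called "Name"; all of these names are illustrative assumptions, not the chapter's final setup.

```python
import os

from notion_client import Client  # official Notion SDK for Python

# Assumptions: NOTION_TOKEN and NOTION_DATABASE_ID are set in the environment,
# and the target database has a title property named "Name".
notion = Client(auth=os.environ["NOTION_TOKEN"])

notion.pages.create(
    parent={"database_id": os.environ["NOTION_DATABASE_ID"]},
    properties={
        # Title of the new page in the research database
        "Name": {"title": [{"text": {"content": "Bitcoin research note"}}]},
    },
    children=[
        {
            # One paragraph block holding the (placeholder) LLM output
            "object": "block",
            "type": "paragraph",
            "paragraph": {
                "rich_text": [
                    {"type": "text", "text": {"content": "LLM-generated summary goes here."}}
                ]
            },
        }
    ],
)
```

Each research run can append a new page like this, so comparable studies end up side by side in the same Notion database.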

In the second part of this chapter, we’ll use frameworks to build AI agents, showing how they work and how much effort they save. We’ll first explore what’s possible with a no-code approach using the n8n platform. Lastly, we’ll focus on LangChain, which lets us code agent logic in Python. Let’s get started.
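To give a flavor of that last step, here is a minimal LangChain sketch in Python. It assumes the langchain-openai and langchain-core packages, an OPENAI_API_KEY in the environment, and an illustrative model name; the full agent setup comes later in the chapter.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# A reusable research prompt: the {asset} placeholder keeps results
# comparable across different assets.
prompt = ChatPromptTemplate.from_template(
    "You are a research assistant. Summarize the main risks and "
    "opportunities of investing in {asset} in five bullet points."
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # model name is an assumption
chain = prompt | llm | StrOutputParser()  # prompt -> model -> plain text

print(chain.invoke({"asset": "Bitcoin"}))
```

Rerunning the same chain with a different asset is exactly the kind of reusable, comparable research step the rest of the chapter builds on.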

9.1 Requirements

9.1.1 Successful communication

9.1.2 Agentic design patterns

9.2 Agentic workflows without frameworks

9.2.1 Prompt repository

9.2.2 Export results

9.3 Framework for AI agents

9.3.1 From one-shot prompting to agents

9.3.2 Retrieval-augmented generation

Summary