9 Mastering agent prompts with prompt flow


This chapter covers

  • Understanding systematic prompt engineering and setting up your first prompt flow
  • Crafting an effective profile/persona prompt
  • Evaluating profiles: Rubrics and grounding
  • Grounding evaluation of a large language model profile
  • Comparing profiles: Getting the perfect profile

In this chapter, we delve into the Test Changes Systematically prompt engineering strategy. If you recall, we covered the grand strategies of the OpenAI prompt engineering framework in chapter 2. These strategies are instrumental in helping us build better prompts and, consequently, better agent profiles and personas. Understanding how testing changes systematically fits into that framework is key to our prompt engineering journey.

Test Changes Systematically is such a core facet of prompt engineering that Microsoft developed a tool around this strategy called prompt flow, described later in this chapter. Before getting to prompt flow, we need to understand why we need systematic prompt engineering.

9.1 Why we need systematic prompt engineering

Prompt engineering, by its nature, is an iterative process. When building a prompt, you'll rarely get it right on the first try; instead, you write a prompt, evaluate the output, revise, and repeat. To see this concept in action, consider the simple application of prompt engineering to a ChatGPT question.
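That iterate-and-evaluate loop can be sketched in plain Python. Everything in this sketch is a hypothetical stand-in: the prompt variants, the rubric keywords, and the keyword-matching `score_prompt` function all substitute for the LLM-based grounding evaluation that prompt flow performs later in this chapter.

```python
# A minimal sketch of the iterate-and-evaluate loop behind systematic
# prompt engineering. The variants and rubric below are illustrative only;
# in practice an LLM (or a human reviewer) scores each candidate.

# Candidate prompt variants to test against one another.
prompt_variants = [
    "Summarize the article.",
    "Summarize the article in three bullet points.",
    "You are an expert editor. Summarize the article in three bullet "
    "points for a general audience, citing one key fact per point.",
]

# A toy rubric: the qualities we want the prompt to specify.
RUBRIC_CRITERIA = ["three bullet points", "audience", "fact"]

def score_prompt(prompt: str) -> int:
    """Return how many rubric criteria the prompt addresses."""
    text = prompt.lower()
    return sum(1 for criterion in RUBRIC_CRITERIA if criterion in text)

def best_prompt(variants: list[str]) -> str:
    """Evaluate every variant and keep the highest-scoring one."""
    return max(variants, key=score_prompt)

if __name__ == "__main__":
    winner = best_prompt(prompt_variants)
    print(f"Best prompt ({score_prompt(winner)}/{len(RUBRIC_CRITERIA)}): "
          f"{winner}")
```

The point is not the toy scoring function but the shape of the loop: generate variations, evaluate each against the same rubric, and keep the winner. Prompt flow automates exactly this pattern at scale.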

9.2 Understanding agent profiles and personas

9.3 Setting up your first prompt flow

9.3.1 Getting started

9.3.2 Creating profiles with Jinja2 templates

9.3.3 Deploying a prompt flow API

9.4 Evaluating profiles: Rubrics and grounding

9.5 Understanding rubrics and grounding

9.6 Grounding evaluation with an LLM profile

9.7 Comparing profiles: Getting the perfect profile

9.7.1 Parsing the LLM evaluation output

9.7.2 Running batch processing in prompt flow

9.7.3 Creating an evaluation flow for grounding

9.7.4 Exercises

Summary