6 Prompt engineering


This chapter covers

  • Basics of prompt engineering
  • Integrating external knowledge into prompts
  • Helping language models reason and act
  • Organizing the process of prompt engineering
  • Automating prompt optimization

Prompts bring language models (LMs) to life. Prompt engineering is a powerful technique for steering a model's behavior without updating its internal weights through expensive fine-tuning. Whether you're a technical expert or work in a nontechnical role on an AI product team, mastering this skill is essential for using LMs. Because it requires no specialized technical background, prompt engineering lets you start working with language models immediately, quickly exploring and extending their capabilities. With well-designed prompts, you can make LMs perform the specific tasks your application requires, delivering functionality tailored to your users' needs.
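To make this concrete, here is a minimal sketch of what a prompt really is: a piece of text assembled around the user's input and sent to the model. The template wording and the `build_prompt` helper are illustrative assumptions, not code from this book or from any particular LM library.

```python
# A zero-shot prompt is just an instruction plus the input, with no
# examples. The template below is an illustrative sketch.
ZERO_SHOT_TEMPLATE = (
    "You are a helpful writing assistant.\n"
    "Task: {task}\n"
    "Input: {text}\n"
    "Answer:"
)

def build_prompt(task: str, text: str) -> str:
    """Fill the template; the result is the string sent to the LM."""
    return ZERO_SHOT_TEMPLATE.format(task=task, text=text)

prompt = build_prompt(
    task="Summarize the text in one sentence.",
    text="Prompt engineering steers a language model without fine-tuning.",
)
print(prompt)
```

Changing the application's behavior then amounts to editing this string, not retraining the model.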

In this chapter, we’ll follow Alex again as he navigates the world of prompt engineering to improve the content generated by his app. He begins with simple zero-shot prompts and works through more advanced techniques, such as chain-of-thought (CoT) and reflection prompts. Each method taps into different cognitive abilities of LMs, from learning by analogy to breaking down complex problems into manageable parts. Figure 6.1 shows this progression.
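The progression above can be sketched as successive additions to the same base prompt: few-shot prompting prepends worked examples, and chain-of-thought asks the model to reason before answering. The helper names below are illustrative assumptions, not the chapter's code.

```python
# Sketch of the progression from zero-shot to few-shot to
# chain-of-thought: each technique changes the prompt, not the model.

def zero_shot(question: str) -> str:
    # Just the question, no examples.
    return f"Q: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # Learning by analogy: prepend worked (question, answer) pairs.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    # Nudge the model to break the problem into steps before answering.
    return f"Q: {question}\nA: Let's think step by step."

cot = chain_of_thought("If a book has 12 chapters and I read 3, how many remain?")
print(cot)
```

Each variant taps a different ability of the model, which is why the chapter treats them as distinct techniques rather than one trick.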

Figure 6.1 Overview of the most popular prompting techniques

6.1 Basics of prompt engineering

6.1.1 Zero-shot prompting

6.1.2 Structuring your prompt engineering with prompt components and templates

6.2 Few-shot prompting: Learning by analogy

6.2.1 Basics of few-shot prompting

6.2.2 Automating few-shot prompting

6.3 Injecting reasoning into language models

6.3.1 Chain-of-thought

6.3.2 Self-consistency

6.3.3 Reflection and iterative improvement

6.4 Best practices for prompt engineering

6.4.1 General guidelines

6.4.2 Systematizing the prompt engineering process

Summary