6 Guide to prompt engineering

 

This chapter covers

  • Basics of prompt engineering and core concepts
  • Various prompt engineering techniques, including image prompting
  • New threat vectors such as prompt injection
  • Challenges and best practices for prompt engineering

Many of the generative AI models described in previous chapters are prompt based—the large language models (LLMs) from OpenAI, text-to-image models such as Stable Diffusion, and others. We interact with these models by sending a prompt, and the model responds with a completion. Prompts are the main modality for communicating with these models, which makes understanding and crafting prompts quite important.

Prompt engineering is an emerging discipline that optimizes the performance of generative AI by crafting tailored text, code, or image-based inputs for a specific task or set of tasks. Prompts are one of the key levers for steering a model toward the desired outcome. Effective prompt engineering boosts the capabilities of generative AI, producing results that are more relevant, accurate, and creative.
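To make "crafting tailored inputs" concrete, here is a minimal sketch that assembles a few-shot chat prompt in the list-of-messages format used by many LLM APIs. The function name, task, and example pairs are illustrative assumptions, not part of any specific library; the sketch only builds the prompt structure and does not call a model.

```python
# Sketch only: build a chat-style prompt (system instruction, few-shot
# examples, then the actual query). Names and data here are hypothetical.

def build_prompt(task, examples, query):
    """Assemble messages: one system instruction, each example as a
    user/assistant pair, and the query as the final user message."""
    messages = [{"role": "system", "content": task}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

prompt = build_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[
        ("The battery lasts all day.", "positive"),
        ("It broke after a week.", "negative"),
    ],
    query="Setup was quick and painless.",
)
# The prompt now holds 6 messages: 1 system + 2 example pairs + 1 query.
```

The same structure underlies several of the techniques covered later in this chapter, such as system messages and few-shot learning.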

This chapter introduces the basic concepts of prompt engineering and details different prompt techniques. It also provides practical examples and tips for immediate application in an enterprise setting. We will explore tools such as Prompt Flow from Azure AI that facilitate prompt engineering. Now let’s find out what prompt engineering is all about!

6.1 What is prompt engineering?

6.1.1 Why do we need prompt engineering?

6.2 The basics of prompt engineering

6.3 In-context learning and prompting

6.4 Prompt engineering techniques

6.4.1 System message

6.4.2 Zero-shot, few-shot, and many-shot learning

6.4.3 Use clear syntax

6.4.4 Making in-context learning work

6.4.5 Reasoning: Chain of Thought

6.4.6 Self-consistency sampling

6.5 Image prompting

6.6 Prompt injection

6.7 Prompt engineering challenges

6.8 Best practices

Summary