Chapter Six

6 Guide to prompt engineering

 

This chapter covers

  • Basics of prompt engineering and core concepts
  • Various prompt engineering techniques, including image prompting
  • Understanding a new threat vector called prompt hijacking
  • Challenges and best practices for prompt engineering

Many of the generative AI models we have seen are prompt-based: the large language models from OpenAI, text-to-image models such as Stable Diffusion, and others. We interact with these models through a prompt, and, at least in the case of LLMs, they respond with generated text. Prompts are the main modality for “talking” to these models, which makes understanding and crafting them quite important.
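
To make this concrete, the following is a minimal sketch of prompt-based interaction with an LLM. It assumes the openai Python package (version 1.x) and uses an example model name; the client library, model, and parameters will vary by provider.

from openai import OpenAI  # assumes the openai Python package (1.x) is installed

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The prompt is the only interface to the model: we send text in,
# and the model responds with generated text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute any chat model you have access to
    messages=[
        {
            "role": "user",
            "content": "Summarize the benefits of prompt engineering in two sentences.",
        }
    ],
)
print(response.choices[0].message.content)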

Prompt engineering is an emerging discipline: the process of optimizing the performance of generative AI by crafting tailored text, code, or image-based inputs for a given task or set of tasks. Prompts are one of the key levers for steering a model toward the desired outcome. Effective prompt engineering boosts the capabilities of generative AI and produces results that are more relevant, accurate, and creative.
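
As a small illustration of what “tailored” means in practice, the sketch below contrasts a bare prompt with an engineered one that spells out the role, task, constraints, and output format; the wording is our own example rather than a prescribed template.

# A bare prompt: the model must guess the audience, tone, depth, and format.
bare_prompt = "Explain transformers."

# An engineered prompt: the role, task, constraints, and output format are explicit,
# which steers the model toward a more relevant and predictable answer.
engineered_prompt = (
    "You are a technical writer addressing software engineers new to machine learning.\n"
    "Task: Explain what a transformer model is and why it matters for generative AI.\n"
    "Constraints: Use at most 150 words and avoid mathematical notation.\n"
    "Output format: Three short paragraphs covering intuition, architecture, and impact."
)

print(engineered_prompt)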

In this chapter, we will introduce the basic concepts of prompt engineering, describe different prompting techniques, and provide practical examples and tips for applying them in an enterprise. We will also touch on tools such as Prompt Flow from Azure AI that help with prompt engineering. Let us start by understanding what prompt engineering is.

6.1 What is prompt engineering?

6.1.1 Why do we need prompt engineering?

6.2 Basics of prompt engineering

6.3 In-context learning and in-context prompting

6.4 Prompt engineering techniques

6.4.1 System message

6.4.2 Zero-shot, few-shot, and many-shot learning

6.4.3 Use clear syntax

6.4.4 Making in-context learning work

6.4.5 Reasoning: chain-of-thought (CoT)

6.4.6 Self-consistency sampling

6.5 Image prompting

6.6 Prompt hijacking

6.7 Prompt engineering challenges

6.8 Best practices

6.9 Summary

6.10 References