2 Generating Trustworthy Responses with Prompt Engineering
This chapter covers
- Tailoring the settings of LLMs for maximum reliability
- The foundations of prompt engineering for reliable LLMs
- Prompt engineering techniques to reduce hallucinations
Prompting has become an essential technique for effectively using the capabilities of large language models (LLMs). Carefully designed prompts provide the context, instructions, and examples needed to guide LLM text generation for a variety of applications. Prompt engineering is the iterative process of constructing, analyzing, and refining prompts to produce high-quality outputs from models like GPT, Claude, and Gemini.
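To make those ingredients concrete, here is a minimal sketch of a prompt for a customer-facing assistant. The store name, policy text, and sample exchange are hypothetical; the point is only to show how context, instructions, and an example fit together in a single prompt.

```python
# A minimal prompt sketch combining context, instructions, and one example.
# The store name, policies, and sample exchange are hypothetical.
prompt = """You are a support assistant for ShopSmart, an online retailer.

Context:
- Return policy: items can be returned within 30 days with a receipt.
- Shipping: standard shipping takes 3-5 business days.

Instructions:
- Answer only from the context above.
- If the answer is not in the context, say you don't know.

Example:
Customer: How long does shipping take?
Assistant: Standard shipping takes 3 to 5 business days.

Customer: Can I return an opened item after two weeks?
Assistant:"""
```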
Prompt engineering is also how you control LLM behavior: the right prompts guide a model toward accurate outputs, while the wrong ones invite hallucinations. This chapter covers the model settings and prompting techniques that separate unreliable demos from production systems, and it compiles the research, techniques, and tools you need to build robust applications. Throughout, we'll build an e-commerce chatbot as our running example.
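As a preview of where we're headed, the sketch below sends a prompt like the one above to a model with conservative settings. It assumes the OpenAI Python client (`pip install openai`) and an `OPENAI_API_KEY` in the environment; the model name and wording are illustrative, not a recommendation.

```python
# A minimal sketch of calling an LLM with conservative settings.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment variable;
# the model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,        # low temperature favors consistent, grounded answers
    messages=[
        {
            "role": "system",
            "content": "You are a support assistant for an online store. "
                       "Answer only from the provided policy text.",
        },
        {
            "role": "user",
            "content": "Policy: items can be returned within 30 days with a receipt.\n"
                       "Question: Can I return an item after two weeks?",
        },
    ],
)

print(response.choices[0].message.content)
```

Setting the temperature to 0 makes sampling as deterministic as the API allows, a common starting point when consistency matters more than variety; we examine this and the other generation settings in the next section.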