2 Executing prompts programmatically
This chapter covers
- Prompts and prompt engineering
- Different kinds of prompts and how they're structured
- Enhancing prompt responses using one-shot and few-shot learning
- Examples of using prompts with ChatGPT and the OpenAI API
AI applications interact with LLMs mainly through prompts—structured inputs that guide the model’s behavior. It’s a bit like giving directions to a talented but inexperienced colleague: the clearer and more specific you are, the better the results. To get accurate and relevant outputs, prompts need to be carefully crafted and tailored to the task. In practice, prompt design is one of the biggest factors in how well your application performs.
Prompt engineering—the practice of designing and refining prompts to guide an LLM’s output—is a core skill in building LLM applications. You’ll spend much of your time creating, testing, and iterating on prompts to make sure your system delivers reliable, high-quality results.
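To make this concrete, here is a minimal sketch of a chat-style prompt in the message format the OpenAI API expects. The helper function, task text, and model name mentioned in the comments are illustrative; an actual request would go through the official `openai` client library, which is covered later in the chapter.

```python
# Sketch: building a chat-style prompt in the OpenAI message format.
# A system message sets the model's role; the user message carries the task.
# Sending it would look like (model name illustrative):
#   client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# Here we only assemble and inspect the prompt itself.

def build_prompt(task: str, instructions: str) -> list[dict]:
    """Assemble a system + user message pair for a chat completion call."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": task},
    ]

messages = build_prompt(
    task="Summarize this review in one sentence: 'Great battery, poor camera.'",
    instructions="You are a concise assistant. Answer in plain English.",
)

for message in messages:
    print(f"{message['role']}: {message['content']}")
```

Notice that the instructions live in their own system message rather than being mixed into the user's text; keeping the two separate makes prompts easier to test and iterate on.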
In this chapter, you'll begin with the basics of prompt design and gradually move to more sophisticated techniques, such as chain-of-thought prompting. LangChain's suite of prompt engineering tools, including PromptTemplate and FewShotPromptTemplate, will be your key resources as you learn to harness the full power of LLMs in your applications.
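As a preview of the pattern that FewShotPromptTemplate automates, here is a plain-Python sketch of a few-shot prompt: each worked example is rendered with a shared template, and the new input is appended at the end. The antonym task, example pairs, and function name are illustrative, not part of LangChain's API.

```python
# Sketch of the few-shot prompting pattern: format worked examples with a
# shared template, then append the new input for the model to complete.
# The task and examples here are illustrative.

example_template = "Word: {word}\nAntonym: {antonym}"

examples = [
    {"word": "hot", "antonym": "cold"},
    {"word": "tall", "antonym": "short"},
]

def few_shot_prompt(new_word: str) -> str:
    """Render the examples, then leave the final answer blank for the LLM."""
    shots = "\n\n".join(example_template.format(**ex) for ex in examples)
    return (
        "Give the antonym of each word.\n\n"
        f"{shots}\n\n"
        f"Word: {new_word}\nAntonym:"
    )

print(few_shot_prompt("fast"))
```

The prompt deliberately ends mid-pattern, at `Antonym:`, so the model's most natural continuation is the answer itself; LangChain's FewShotPromptTemplate packages this same assembly logic behind a reusable template object.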