7 Prompt Engineering: Becoming an LLM whisperer
This chapter covers
- What a prompt is and how to craft one
- Why prompt engineering is more than just writing a prompt
- Tooling available to support prompt engineering
- Advanced prompting techniques for answering the hardest questions
In the last chapter we discussed at length how to deploy LLMs, and before that, how to train them. In this chapter we are going to talk about how to use them. We mentioned before that one of the biggest draws of LLMs is that you don't need to train them for every individual task. Large Language Models, especially the largest ones, have a deep enough understanding of language to act as general-purpose tools.
Want to create a tutoring app that helps kids learn difficult concepts? What about a language translation app that helps bridge the gap between you and your in-laws? Need a cooking assistant to help you think up fun new recipes? With LLMs you no longer have to start from scratch for every single use case; you can use the same model for each of these problems. It just becomes a matter of how you prompt your model. This is where Prompt Engineering, also called in-context learning, comes in. In this chapter we are going to dive deep into the best ways to do that.
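To make this concrete, here is a minimal sketch of that idea: one model, many tasks, where the only thing that changes is the prompt wrapped around the user's input. The task names, template wording, and the `build_prompt` helper are all illustrative assumptions, not a specific library's API.

```python
# Sketch: the same LLM serves every use case; only the prompt changes.
# The task names and template text below are made up for illustration.

def build_prompt(task: str, user_input: str) -> str:
    """Wrap the same user input in a task-specific instruction."""
    templates = {
        "tutor": "Explain the following concept so a child could understand it:\n{q}",
        "translate": "Translate the following into Spanish:\n{q}",
        "recipe": "Suggest a fun new recipe using these ingredients:\n{q}",
    }
    return templates[task].format(q=user_input)

# Each prompt would be sent to the *same* model -- no retraining needed.
for task in ("tutor", "translate", "recipe"):
    print(build_prompt(task, "photosynthesis"))
    print("---")
```

In a real application, each string returned by `build_prompt` would be passed to a single deployed LLM; swapping tasks is just a matter of swapping templates.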