This chapter covers
- The fundamentals of how large language models work
- The risks of using large language models
- A definition of prompt engineering
- Experimenting with prompt engineering to return various outputs
- How to solve problems using prompt engineering
In the previous chapter, we learned that it’s important to take time to familiarize ourselves with new tools, and we’ll adopt that same mindset in this chapter. Throughout this book, we’ll explore how to use generative AI tools such as OpenAI’s ChatGPT and GitHub Copilot, which are built on large language models, or LLMs. There are many ways in which AI can be employed in testing, but what makes LLMs so interesting is their adaptability to different situations—hence their rise in popularity. So, before we look at how we can incorporate LLM tools into our everyday testing, let’s first learn a bit about what LLMs are, how they work, and how to get the most out of them through the concept of prompt engineering.
What has made LLM-based tools such as ChatGPT dominate tech headlines throughout 2023? Consider this sample interaction I had with ChatGPT: