2 Large language models and prompt engineering


This chapter covers

  • The fundamentals of how large language models work
  • The risks of using large language models
  • A definition of prompt engineering
  • Experimenting with prompt engineering to return various outputs
  • How to solve problems using prompt engineering

In the previous chapter, we learned how important it is to take time to familiarize ourselves with new tools, and we'll adopt that same mindset in this chapter. Throughout this book, we'll explore how to use generative AI tools such as OpenAI's ChatGPT and GitHub Copilot, which are built on large language models, or LLMs. There are many ways AI can be employed in testing, but what makes LLMs so interesting is their adaptability to different situations, which explains their rise in popularity. So, before we look at how to incorporate LLM tools into our everyday testing, let's first learn a bit about what LLMs are, how they work, and how to get the most out of them through the concept of prompt engineering.
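As a small taste of what prompt engineering looks like in practice, here is a minimal sketch of one tactic covered later in this chapter: using delimiters so the model treats supplied text as data to operate on rather than as further instructions. The delimiter choice (`###`), the helper name, and the wording are illustrative assumptions, not taken from this book.

```python
# Sketch of the "use delimiters" tactic: wrap the input text in a clear
# delimiter so the model can distinguish the instruction from the data.
# The delimiter (###) and phrasing here are illustrative assumptions.

def build_summarize_prompt(text: str) -> str:
    """Return a prompt asking an LLM to summarize the delimited text."""
    return (
        "Summarize the text delimited by ### in one sentence.\n"
        f"###{text}###"
    )

prompt = build_summarize_prompt(
    "LLMs generate text by predicting the next token in a sequence."
)
print(prompt)
```

The resulting string would then be sent to an LLM such as ChatGPT; later sections of this chapter walk through this and other tactics in detail.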

What has made LLMs such as ChatGPT dominate tech headlines throughout 2023? To answer that, we'll begin by examining a sample interaction I had with ChatGPT.

2.1 LLMs explained

2.2 Avoiding the risks of using LLMs

2.2.1 Hallucinations

2.2.2 Data provenance

2.2.3 Data privacy

2.3 Improving results with prompt engineering

2.4 Examining the principles of prompt engineering

2.4.1 Principle 1: Write clear and specific instructions

2.4.2 Tactic 1: Use delimiters

2.4.3 Tactic 2: Ask for structured output

2.4.4 Tactic 3: Check for assumptions

2.4.5 Tactic 4: Few-shot prompting

2.4.6 Principle 2: Give the model time to “think”

2.4.7 Tactic 1: Specify the steps to complete the task

2.4.8 Tactic 2: Instruct the model to work out its own solution first

2.5 Working with various LLMs

2.5.1 Comparing LLMs

2.5.2 Examining popular LLMs

2.6 Creating a library of prompts

2.7 Solving problems by using prompts

Summary