Part 3. Context: Customizing LLMs for testing contexts


Throughout the previous chapters, we’ve seen how generalized prompts that lack hints about how our products work, or about the rules and expectations in place, return less valuable responses. Although slicing our tasks down to a sensible size is key, providing that vital information to set clear boundaries on an LLM’s output can make or break a response. That’s why we’ll conclude the book with a final part exploring how to embed context into our work.

In the following chapters, we’ll depart a little from the techniques we’ve learned so far and explore different ways in which context can be retrieved and added to both LLMs and prompts. This means dipping our toes into more advanced topics such as retrieval-augmented generation and fine-tuning, not to become experts in these fields, but to appreciate how they work and how they can be used to get the most out of LLMs. So let’s dive in and see what exciting options await us as we take LLMs to the next level as testing assistants.