8 Assisting exploratory testing with AI

 

This chapter covers

  • Enhancing exploratory testing charter creation using LLMs
  • Identifying opportunities to use LLMs in exploratory testing sessions
  • Using LLMs to support various activities during exploratory testing sessions
  • Summarizing exploratory testing session reports with LLMs

So far, throughout this book, we’ve explored how LLMs can help us with testing activities and artifacts that are largely algorithmic. Activities such as code and data generation follow distinct rules of syntax and formatting and offer a level of repeatability that suits LLMs well. But what about more heuristic testing activities, such as exploratory testing? How can LLMs support us while we are carrying out the testing ourselves? It’s important to reiterate that LLMs cannot replace testing or testers, but by observing carefully what we do during exploratory testing and applying our knowledge of prompt engineering, we can selectively enhance our exploration in a way that doesn’t undermine the core value of exploratory testing. To do this, we’ll look at three aspects of exploratory testing and how LLMs can help with each: organizing exploratory testing with charters, performing exploratory testing, and reporting what we’ve discovered.
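As a rough, hedged illustration of the kind of interaction the rest of this chapter builds on, the sketch below sends a charter-suggestion prompt to an LLM programmatically. It assumes the OpenAI Python SDK and a chat-completion model; the model name, prompt wording, and sample risks are placeholders for illustration, not examples taken from the chapter itself.

from openai import OpenAI

# Illustrative sketch only: the model name, prompt wording, and sample risks
# are assumptions, not the chapter's own material.
client = OpenAI()

risks = [
    "Date validation may accept an end date that is before the start date",
    "Search results may not refresh when filters are changed",
]

# Ask the LLM to turn each identified risk into an exploratory testing charter.
prompt = (
    "You are assisting an exploratory testing session. "
    "For each risk below, suggest one charter in the format "
    "'Explore <target> with <resources> to discover <information>':\n"
    + "\n".join(f"- {risk}" for risk in risks)
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever LLM you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

The same prompt could just as easily be pasted into a chat interface; the point is the structure of the ask, not the tooling around it.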

8.1 Organizing exploratory testing with LLMs

8.1.1 Augmenting identified risks with LLMs

8.1.2 Augmenting charter lists with LLMs

8.2 Using LLMs during exploratory testing

8.2.1 Establishing an understanding

8.2.2 Creating data requirements for a session

8.2.3 Exploring and investigating bugs

8.2.4 Using LLMs to assist exploratory testing

8.3 Summarizing testing notes with LLMs

8.4 Summary