8 Assisting exploratory testing with artificial intelligence

 

This chapter covers

  • Enhancing exploratory testing charter creation using LLMs
  • Identifying opportunities for using LLMs in exploratory testing sessions
  • Using LLMs to support various activities during exploratory testing sessions
  • Summarizing exploratory testing session reports with LLMs

So far, we’ve explored how large language models (LLMs) can help us with a range of testing activities and artifacts that are algorithmic in nature. Activities such as code and data generation follow distinct syntax and formatting rules and come with a degree of repeatability that works well with LLMs. But what about more heuristic-based testing activities, such as exploratory testing? How can LLMs support us when we’re the ones executing the testing? It’s worth repeating that LLMs cannot replace testing or testers, but by carefully observing what we do during exploratory testing and applying our knowledge of prompt engineering, we can selectively enhance our exploration in a way that doesn’t undermine the core value of exploratory testing. To do this, we’ll examine three aspects of exploratory testing and how LLMs can help with each: organizing exploratory testing with charters, performing exploratory testing, and reporting what we’ve discovered.

8.1 Organizing exploratory testing with LLMs

8.1.1 Augmenting identified risks with LLMs

8.1.2 Augmenting charter lists with LLMs

8.2 Using LLMs during exploratory testing

8.2.1 Establishing an understanding

8.2.2 Creating data requirements for a session

8.2.3 Exploring and investigating bugs

8.2.4 Using LLMs to assist exploratory testing

8.3 Summarizing testing notes with LLMs

Summary