8 Assisting exploratory testing with AI
This chapter covers
- Enhancing exploratory testing charter creation using LLMs
- Identifying opportunities to use LLMs in exploratory testing sessions
- Using LLMs to support various activities during exploratory testing sessions
- Summarizing exploratory testing session reports with LLMs
So far in this book, we've explored how LLMs can help us with a range of testing activities and artifacts that are algorithmic in nature. Activities such as code and data generation follow distinct rules of syntax and formatting and come with a level of repeatability that suits LLMs well. But what about the more heuristic-based testing activities, such as exploratory testing? How can LLMs support us while we are executing testing ourselves?

It's important to reiterate that LLMs cannot replace testing or testers. However, by carefully observing what we do during exploratory testing and applying what we know about prompt engineering, we can selectively enhance our exploration in a way that doesn't undermine the core value of exploratory testing. To do this, we'll examine three aspects of exploratory testing and how LLMs can help with each: organizing exploratory testing with charters, performing exploratory testing sessions, and reporting what we've discovered.
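To make the first of these aspects concrete before we dig in, here is a minimal sketch of asking an LLM to draft exploratory testing charters from a short feature description. It assumes the OpenAI Python client and an OPENAI_API_KEY environment variable; the model name, prompt wording, and feature notes are illustrative placeholders rather than a prescribed approach, and the charter template ("Explore <target> with <resources> to discover <information>") comes from Elisabeth Hendrickson's Explore It!.

```python
# A minimal sketch of drafting exploratory testing charters with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable. The model name, prompt wording,
# and feature notes are illustrative, not a prescribed approach.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical feature we want to explore
feature_notes = "A booking form that lets users reserve a hotel room."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting an exploratory tester. Suggest three "
                "exploratory testing charters in the format: "
                "'Explore <target> with <resources> to discover <information>'."
            ),
        },
        {"role": "user", "content": feature_notes},
    ],
)

# The suggestions are a starting point for the tester to curate,
# not a test plan to execute verbatim.
print(response.choices[0].message.content)
```

Note that the LLM's output here is raw material: the tester still decides which charters are worth pursuing, which is exactly the selective, human-led enhancement this chapter is about.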