This chapter covers
- Generating basic test data using LLMs
- Changing the format of test data
- Using complex data sets to prompt LLMs to create new data sets
- Integrating LLMs as a test data manager for automated checks
Managing test data is one of the most challenging aspects of testing and software development. Typically, data requirements grow with the complexity of a system. Synthesizing context-relevant data for automated checks and human-driven testing — data that may involve complex structures and need to be anonymized at scale and on demand — can impose a huge drain on testing time and resources, which could be better spent on other testing activities.
However, we need test data. It is simply not possible to carry out most testing activities if we lack the necessary data to trigger actions and observe behavior. That's why this chapter shows how we can use large language models (LLMs) to generate test data: we'll use different prompts to create both simple and complex data structures, and we'll integrate LLMs into our automation frameworks via third-party APIs.
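As a preview of the pattern the chapter builds on, the sketch below shows the two pieces that stay the same regardless of which LLM provider you use: constructing a prompt that asks for test data in a machine-readable format, and validating the model's reply before handing it to a check. The function names and schema here are hypothetical, and the canned reply stands in for a real model response.

```python
import json


def build_test_data_prompt(schema: dict, count: int) -> str:
    """Build a prompt asking an LLM for test records matching a schema."""
    return (
        f"Generate {count} JSON test records matching this schema. "
        "Respond with a JSON array only, no commentary.\n"
        f"Schema: {json.dumps(schema)}"
    )


def parse_test_data_reply(raw: str) -> list:
    """Parse the model's reply, failing fast if it isn't a JSON array."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("Expected a JSON array of records")
    return records


# Hypothetical schema for a simple user record.
schema = {"name": "string", "email": "string"}
prompt = build_test_data_prompt(schema, 2)

# A canned reply of the kind a model might return; in practice this
# string would come from an API call to your chosen LLM provider.
canned_reply = (
    '[{"name": "Ada", "email": "ada@example.com"},'
    ' {"name": "Max", "email": "max@example.com"}]'
)
records = parse_test_data_reply(canned_reply)
```

Keeping prompt construction and response validation separate from the provider call means the same helpers work whether the reply comes from OpenAI, a local model, or a recorded fixture in a test suite.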