How do we ensure that what we’re building is of good quality and is valuable to our end users? The challenge we face when delivering a high-quality product is the sheer number of complex actions and activities that occur in our work. If we want to make informed choices that lead to improved quality, we need to overcome this complexity and develop an understanding of both how our systems work and what our users want from our products. This is why we need to adopt a valuable testing strategy to help us better understand what we’re actually building. So before we begin our API testing journey, let’s first reflect on why software is so complicated and how testing can help.

In 2013, the UK government set out a digital strategy to move each department to a “Digital by Default Service Standard,” which included Her Majesty’s Revenue and Customs (also known as HMRC). HMRC’s goal was to bring all the UK tax services online to improve services and cut costs.
By 2017, the HMRC tax platform boasted more than 100 digital services, created by 60 delivery teams across five different delivery centers. Each of these digital services is supported by a platform of interconnected web APIs that were, and still are, constantly growing. The number of APIs created to support these services is dizzying. Even when I joined the project in 2015 and there were approximately half the services, teams, and delivery centers that there are now, the platform contained well over 100 web APIs. That number has undoubtedly increased since then, which begs the question: How does a project of this size and complexity deliver high-quality services to end users?
I mention the HMRC project because it helps highlight the following two “levels” of complexity that we face regularly when building web APIs:
- The complexity that exists within a web API
- The complexity of many web APIs working together in a platform
By understanding both of these categories, we can begin to appreciate why we need testing and how it can help.
It might seem a bit simple to start with this question: What is a web API? But if we take the time to dive into the makeup of a web API, we can discover not only what a web API is but also where its complexity lies. Take, for example, this visualization of a bookings web API that we’ll test later in this book, shown in figure 1.1.
Using this diagram, we can see that a web API works by receiving bookings in the form of HTTP requests from clients, which trigger different layers within the API to execute. Once the execution is complete and the booking has been stored, the web API responds via HTTP. But if we take a more granular step through the API, we start to get a sense of just how much is going on within a single web API.
First, the presentation layer receives a booking HTTP request and translates it into content that can be read by the other layers. Next, the service layer takes the booking information and applies business logic to it (e.g., is the booking valid, and does it conflict with other bookings?). Finally, if the processed booking needs to be stored, it is prepared for storage within the persistence layer and then stored within a database. If all of that is successful, each layer responds to the layer above it to create the response the web API sends to whoever sent the request.
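To make the layer walkthrough concrete, here is a minimal sketch of such a layered booking API in Python. The class names, booking fields, and conflict rule are assumptions for illustration, not the book’s actual sandbox API.

```python
import json

# Hypothetical persistence layer: stores bookings in memory by id.
class BookingRepository:
    def __init__(self):
        self._bookings = {}

    def save(self, booking):
        self._bookings[booking["id"]] = booking
        return booking

    def all(self):
        return list(self._bookings.values())

# Service layer: applies business logic before anything is stored.
class BookingService:
    def __init__(self, repository):
        self.repository = repository

    def create(self, booking):
        if not booking.get("guest"):
            raise ValueError("a booking must name a guest")
        for existing in self.repository.all():
            if existing["room"] == booking["room"] and existing["date"] == booking["date"]:
                raise ValueError("booking conflicts with an existing booking")
        return self.repository.save(booking)

# Presentation layer: translates an HTTP-style JSON body into content
# the other layers can read, then builds the status code and response body.
def handle_booking_request(body, service):
    try:
        booking = json.loads(body)
        created = service.create(booking)
        return 201, json.dumps(created)
    except ValueError as error:
        return 409, json.dumps({"error": str(error)})

service = BookingService(BookingRepository())
status, _ = handle_booking_request(
    '{"id": 1, "guest": "Mark", "room": 101, "date": "2024-05-01"}', service)
print(status)  # the valid booking is created
status, _ = handle_booking_request(
    '{"id": 2, "guest": "Jan", "room": 101, "date": "2024-05-01"}', service)
print(status)  # the conflicting booking is rejected
```

Even in this toy version, each layer carries its own rules and failure modes, which is where the complexity described above comes from.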
Each of these layers can be built in different ways, depending on our requirements and tastes. For example, we have the option to design web APIs using a range of approaches such as the REST architecture pattern, GraphQL, or SOAP, all of which have their own patterns and rules that require our understanding.
The service layer also contains our business logic, which, depending on our context, will have many specific custom rules to follow. A similar case applies to the persistence layer. Each of these layers relies on dependencies that have their own active development life cycles. We need to be aware of a vast amount of information to help us deliver high-quality work.
Understanding what is going on in our web APIs and how they help others is an exercise that requires time and expertise. Yes, we might be able to develop some level of understanding by testing parts individually (which I encourage teams to do; check out J. B. Rainsberger’s talk “Integrated Tests Are a Scam” to learn more: https://youtu.be/VDfX44fZoMc), but that knowledge gives us only a piece of the puzzle, not all of it.
Think about the HMRC platform with its more than 100 web APIs, mentioned earlier. How do we maintain an understanding of how each one works and how they relate to one another? Approaches such as microservice architecture help reduce the complexity within singular web APIs by making them smaller and more focused. But, on the other hand, they can lead to even more web APIs being added to a platform. How do we ensure that our knowledge of a platform of web APIs is up to date? And how do we keep up with how each API talks to others and confirm that their connections to each other are working within expected parameters?
To build a high-quality product, we have to make informed choices, which means our knowledge of how our web APIs work and how they relate to each other and our end users is vital. If we don’t make informed choices, we risk issues appearing in our products when we misinterpret how our systems work due to our lack of knowledge. It’s from this perspective that we can begin to appreciate how testing can help us establish and maintain that understanding.

If we’re going to be successful as a team with our testing, we require a shared understanding of the purpose and value of testing. Sadly, there are a lot of misconceptions about what testing is and what it offers, so to help us all get on the same page, let me introduce you to a model of testing that I use to better understand what testing is and how it helps, as shown in figure 1.2.
The model, based on one created by James Lyndsay in his paper “Why Exploration has a Place in any Strategy” (http://mng.bz/o2vd), comprises two circles. The left circle represents imagination, or what it is that we want in a product, and the right circle represents implementation, or what it is that we have in a product. The purpose of testing is to learn as much as possible about what’s going on in each of these circles by carrying out testing activities. The more we test in these two circles, the more we learn and the more we achieve the following:
- Discovering potential issues that might impact the quality
- Overlapping these two circles of information, ensuring that we understand what we are building and can be confident that it is the product or service we want to build
To examine this further, let’s look at an example in which a team is delivering a hypothetical search feature that we want to ensure is of a high degree of quality.
The imagination circle represents what we want from our product, which includes expectations that are both explicit and implicit. In this circle, our testing is focused on learning as much as possible about those explicit and implicit expectations. By doing this, we learn not just what has been explicitly stated in writing or verbally shared, but we also dig down into the details and remove ambiguity over terms and ideas. For example, let’s say a representative of the business or a user, such as a product owner, has shared this requirement with their team: “Search results are to be ordered by relevance.”
The explicit information shared here tells us that the product owner wants search results, and they want them ordered by relevance. However, we can uncover a lot of implied information by testing the ideas and concepts behind what is being asked. This might come in the form of a series of questions we could ask, such as the following:
- What is meant by relevant results?
- Relevant to whom?
- What information is shared?
- How do we order by relevancy?
- What data should we use?
By asking these questions, we get a fuller picture of what is wanted, remove any misunderstandings in our team’s thinking, and identify potential risks that could impact those expectations. If we know more about what we are being asked to build, we’re more likely to build the right thing the first time.
By testing the imagination, we get a stronger sense of what we are being asked to build. But just because we might know what to build doesn’t mean we will end up with a product that matches those expectations. This is why we also test the implementation to learn the following:
- Whether what we have built matches our expectations
- What unexpected behavior might exist in what we have built
Both goals are of equal importance. We want to ensure that we have built the right thing, but side effects (such as unintended behavior, vulnerabilities, missed expectations, and downright weirdness that might appear in our products) will always exist. With our search results example, we could not only test that the feature delivers results in the relevant order, but we could also ask questions of the product, such as the following:
- What if I enter different search terms?
- What if the relevant results don’t match the behavior of other search tools?
- What if part of the service is down when I search?
- What if I request results 1,000 times in less than 5 seconds?
- What happens if there are no results?
By exploring beyond our expectations, we become more aware of what is going on in our product, warts and all. This ensures that we don’t end up making incorrect assumptions about how our product behaves and releasing a poor-quality product. It also means that if we find unexpected behavior, we can choose either to attempt to remove it or to readjust our expectations.
The model of testing the imagination and implementation demonstrates that testing goes beyond a simple confirmation of expectations and challenges our assumptions. The more we learn through testing about what we want to build and what we have built, the more these two circles align with one another. And the more they align, the more accurate our perception of quality becomes.
A team that is well informed about their work has a better idea of the quality of their product. They are also better equipped to decide what steps to take to improve quality: focusing attention on specific risks, making changes so the product aligns more closely with users’ expectations, or determining which issues are worth the time to fix and which to leave. This is the value of good testing: it helps teams get into a position where they can make these informed decisions and feel confident in the steps they are taking to develop a high-quality product.
I find this model to be an excellent way to describe the purpose and value of testing; however, it can feel somewhat abstract. How does this model apply to API testing? What would an API testing strategy look like using this approach? One of the goals of this book is to teach you exactly that. To help us better understand this model, let’s look at an example API testing strategy that could have been used on another project I was part of, separate from the HMRC project.
The project was a service that allowed users to search and read regulatory documents as well as create reports on the back of those documents. The architecture of the system is briefly summarized in figure 1.3.
Just for clarity, this is a stripped-down version of the application I worked on. But it gives us a sense of the types of applications we might work with if we’re tasked with creating a strategy for API testing. We’ll discuss this model further in chapter 2, but here it shows us that this application was made up of a series of web APIs that provided services to the UI and to each other. For example, the Search API could be queried by the UI, but it could also be queried by another API, such as the Report API. So, we have our example application, but how do we apply the testing model we learned about to this context? Once again, this can best be explained visually with the model shown in figure 1.4.
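As a rough sketch of that relationship, the functions below stand in for the Search and Report APIs, with the Report API consuming the Search API in the same way the UI would. All names, endpoints, and data here are assumptions for illustration.

```python
# Hypothetical document store behind the Search API.
DOCUMENTS = [
    {"id": 1, "title": "Financial regulations 2023"},
    {"id": 2, "title": "Data protection guidance"},
]

# Search API: returns documents whose titles match a query term.
def search_api(query):
    return [d for d in DOCUMENTS if query.lower() in d["title"].lower()]

# Report API: builds a report on the back of the Search API's results,
# calling it just as the UI does.
def report_api(query):
    results = search_api(query)
    return {"query": query, "matches": len(results),
            "titles": [d["title"] for d in results]}

print(report_api("regulations"))
```

Because one API’s output becomes another’s input, a change to the Search API’s response shape can quietly break the Report API, which is exactly the kind of inter-API risk the testing strategy has to cover.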
Figure 1.4 An instance of the testing model, describing specific testing activities as part of an API test strategy

As we can see, both the imagination and implementation portions have been filled with a range of testing activities that can help us learn about how our web APIs work. On the imagination side, we have activities such as the following:
- Testing API designs—Allows us to question ideas and create a shared understanding around what problems we’re attempting to solve
- Contract testing—Supports teams in ensuring that their web APIs speak to each other and are updated correctly when changes occur
And on the implementation side, we have activities such as the following:
- Exploratory testing—Enables us to learn how our web APIs are behaving and discover potential issues
- Performance testing—Helps us to better understand how our web APIs behave when under load
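To give a flavor of what one of these activities checks, here is a minimal contract-testing sketch: a consumer records the response shape it relies on, and the provider’s response is verified against that contract. The contract fields and the provider stub are assumptions, and real teams would typically use a dedicated tool rather than hand-rolled checks.

```python
# The shape the consumer depends on: field names mapped to expected types.
CONSUMER_CONTRACT = {"id": int, "title": str, "relevance": float}

def provider_response():
    # Stand-in for a real call to the provider's search endpoint.
    return {"id": 42, "title": "Financial regulations 2023", "relevance": 0.97}

def verify_contract(response, contract):
    # Collect every way the response breaks the consumer's expectations.
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"{field} is not {expected_type.__name__}")
    return problems

print(verify_contract(provider_response(), CONSUMER_CONTRACT))  # [] means the contract holds
```

Run against every provider change, a check like this catches a renamed or retyped field before a dependent API discovers it in production.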
And finally, we have automated API checks that cover the areas where our knowledge of what we want to build (imagination) and what we have built (implementation) overlap. These checks can confirm whether our knowledge of how our APIs work is still correct and bring to our attention any potential regression in quality.
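As an illustration of such a check, the sketch below asserts known expectations against a stubbed bookings endpoint; a failing check flags a potential regression. The endpoint behavior and names are assumptions, and in a real suite the calls would go over HTTP (for example, with an HTTP client library) rather than to a local stub.

```python
def bookings_endpoint(booking):
    # Stub standing in for a POST /bookings call over HTTP.
    if not booking.get("guest"):
        return {"status": 400, "body": {"error": "guest is required"}}
    return {"status": 201, "body": {"id": 1, **booking}}

def check(name, actual, expected):
    # An automated check confirms knowledge we already have; a mismatch
    # signals that the API's behavior may have regressed.
    result = "PASS" if actual == expected else "FAIL"
    print(f"{result}: {name}")
    return result == "PASS"

ok = all([
    check("valid booking is created",
          bookings_endpoint({"guest": "Mark"})["status"], 201),
    check("booking without a guest is rejected",
          bookings_endpoint({})["status"], 400),
])
print("all checks pass" if ok else "potential regression detected")
```

Note that these checks only confirm what we already expect; discovering new behavior remains the job of the exploratory activities in the model.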
We will learn more about these activities throughout this book, along with other testing activities. But this model demonstrates how different testing activities focus on different areas of our work and reveal different information. It also shows us that a successful testing strategy for APIs is holistic in its approach, a combination of many different activities all working together to help keep ourselves and our teams informed. To create this strategy, we need to do the following:
- Understand our context and its risks—Who are our users? What do they want? How does our product work? How do we work? What does quality mean to them?
- Appreciate the types of testing activities available to us—Do we know how to use automation effectively? Are we aware that we can test ideas and API designs before coding begins? How can we get value from testing in production?
- Use our context knowledge to pick the right testing activities—What risks matter the most to us, and what testing activities should we use to mitigate them?
This book will explore these three points to give you the necessary skills and knowledge to identify and deliver a testing strategy that works for you, your team, and your organization. As we progress through the book, we’ll use the testing model to first help us understand which testing activities work best where and then establish a testing strategy that works for us. Before we dive too deeply into the many API testing opportunities that are available, let’s first get comfortable with a few approaches that can help us rapidly learn about our web API platforms.
- Web APIs contain a range of layers. Each carries out complex tasks of its own that are made all the more complex when combined.
- Complexity scales even further when multiple web APIs work together to create services for an end user on a platform.
- Understanding and overcoming this complexity is key to delivering a high-quality product.
- To establish understanding, we require a focused testing strategy.
- Testing can be thought of as focusing on two areas: imagination and implementation.
- We test imagination to learn more about what we want to build, and we test implementation to learn more about what we have built.
- The more we know about both the imagination and the implementation areas, the more the two overlap and the better informed we are about the quality of our work.
- The testing model can be used to show how different testing activities work in the imagination and implementation areas.
- A successful testing strategy will be made up of many testing activities that all work together to support a team.