Chapter 15. Infrastructure provisioning and deployment
This chapter covers
- Driving infrastructure provisioning from Gradle
- Automated deployment to different target environments
- Verifying the outcome of a deployment with smoke and acceptance tests
- Deployment as part of the build pipeline
Software deployments need to be repeatable and reliable. Any server outage inflicted by a faulty deployment, with production systems taking the biggest hit, results in lost money for your organization. Automation is the next logical and necessary step toward formalizing and streamlining the deployment process. In this chapter, we’ll talk about how to automate the deployment process with Gradle, using your To Do application as an example.
Before any deployment can be conducted, the target environment needs to be preconfigured with the required software infrastructure. Historically, this has been the task of a system administrator, who would manually provision the physical server machine and install the software components before use. This setup can be defined as real code with tools like Puppet and Chef, checked into version control, and tested like an ordinary piece of software. Using this infrastructure-as-code approach helps prevent human error and minimizes the cycle time for spinning up a new environment based on the same software stack. While Gradle doesn’t provide native tooling for this task, you can bootstrap other tools to do the job for you.
Nowadays, we see a paradigm shift toward cloud provisioning of infrastructure. Unlike the traditional approach, a cloud provider often allocates preconfigured hardware in the form of virtual servers. Server virtualization is the partitioning of a physical server into smaller virtual servers to maximize the use of server resources. Depending on the service offering, the operating system and software components are managed by the cloud provider.
In this section, we’ll talk about automating the creation of virtual servers and infrastructure software components with the help of third-party open source tools. These tools will help you set up and configure streamlined target environments for your To Do application. Later, you’ll learn how Gradle can integrate with these tools.
At the beginning of this chapter, you created a virtual machine on your developer machine. Though the virtual machine has a production-like setup, you use this environment solely for testing purposes. The test environment brings together code changes from multiple developers on the team. Therefore, it can be seen as the first integration point of running code. Against the application deployed in the test environment, you can run automated acceptance tests to verify functional and nonfunctional requirements. The user acceptance test (UAT) environment typically exists for exploratory, manual testing. Once the QA team considers the current state of the software to be satisfactory, it’s ready to be shipped to production. The production environment directly serves the end user and makes new features available to the world.
If you want to use the same code for deploying to all of these environments, you’ll need to be able to dynamically target each one of them at build time. Naturally, the test, UAT, and production environments run on different servers with potentially different ports and credentials. You could store the configuration as extra properties in your build script, but that would quickly convolute the logic of the file. Alternatively, you could store this information in a gradle.properties file. In both cases, you’d end up with a fairly unstructured list of properties. At build time, you’d have to pick a set of properties based on a naming convention. Doesn’t sound very flexible, does it? There’s a better way of storing and reading this configuration with the help of a standard Groovy feature.
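That standard Groovy feature is the class ConfigSlurper, which parses a Groovy script containing per-environment configuration blocks. The following sketch shows what such a configuration file might look like; the hostnames, ports, and context paths are hypothetical placeholders, not values from the book’s actual setup:

```groovy
// config/environments.groovy (sketch): one closure per target
// environment. ConfigSlurper picks the block matching the name
// passed to its constructor. All values below are illustrative.
environments {
    test {
        server {
            hostname = 'test.todo.example.com'
            port = 8080
            context = 'todo'
        }
    }
    uat {
        server {
            hostname = 'uat.todo.example.com'
            port = 8080
            context = 'todo'
        }
    }
    production {
        server {
            hostname = 'todo.example.com'
            port = 80
            context = 'todo'
        }
    }
}
```

Parsing the file with `new ConfigSlurper('test').parse(file.toURI().toURL())` returns only the values of the `test` block, so the rest of the build logic never needs to know which environment was selected.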
I think we can agree that the task of deploying software to production should be a nonevent. Deployment automation is an important and necessary step toward this goal. To reduce the risk of breakages, the code used to automate the deployment process shouldn’t be developed and exercised against the production environment right away. Instead, start testing it with a production-like environment on your local machine, or a test environment. You already set up such an environment with Vagrant. It uses infrastructure definitions that are fairly close to your production environment. Mimicking a production-like environment with a virtual machine for developing deployment code is cheap, easy to manage, and doesn’t disturb any other environment participating in your build pipeline. Once you’re happy with a working solution, use the code to deploy to the least-critical environment in your build pipeline. After gaining more confidence that the code works as expected, you can deploy to more mission-critical environments like UAT and production.
Writing deployment code is not a cookie-cutter job. It’s dependent on the type of software you write and the target environment you’re planning to deploy to. For example, a deployment of a web application to a Linux machine has different requirements than client-installed software running on Windows. At the time of writing, Gradle doesn’t offer a unified approach for deploying software. The approach we’ll discuss in this chapter is geared toward deploying your web application to a Tomcat container.
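As one possible approach (not necessarily the chapter’s exact implementation), Tomcat’s manager web application exposes an HTTP text interface for deploying a WAR file to a running container. The following Gradle task sketches this idea using curl; the hostname, manager credentials, and context path are assumed placeholders:

```groovy
// build.gradle (sketch): push the assembled WAR to a remote Tomcat
// instance via its manager text interface. Requires a Tomcat user
// with the manager-script role. Host and credentials are illustrative.
task deployWar(type: Exec, dependsOn: war) {
    def host = 'test.todo.example.com'   // assumed target host
    def credentials = 'manager:secret'   // assumed manager-script user
    commandLine 'curl', '--fail',
        '--upload-file', war.archivePath,
        '--user', credentials,
        "http://${host}:8080/manager/text/deploy?path=/todo&update=true"
}
```

The `update=true` parameter tells Tomcat to undeploy any existing application at the same context path first, which makes the task safe to rerun. In practice, you’d read the host and credentials from the environment-specific configuration instead of hardcoding them.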
If for whatever reason a deployment failed, you want to know about it—fast. In the worst-case scenario, a failed deployment to the production environment, the customer shouldn’t be the first to tell you that the application is down. You absolutely need to avoid this situation because it destroys credibility and equates to money lost for your organization.
In addition to these fail-fast tests, automated acceptance tests verify that important features or use cases of the deployed application are correctly implemented. For this purpose, you can use the functional tests you wrote in chapter 7. Instead of running them on your developer machine against an embedded Jetty container, you’ll configure them to target other environments. Let’s first look at how to implement the most basic types of deployment tests: smoke tests.
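A smoke test can be as simple as issuing an HTTP request against the deployed application’s URL and asserting a successful response. A minimal sketch as a Gradle task follows; the URL is an assumed placeholder that would normally come from the environment configuration:

```groovy
// build.gradle (sketch): fail fast if the deployed application
// doesn't answer with HTTP 200. The URL below is illustrative.
task smokeTest {
    doLast {
        def url = new URL('http://test.todo.example.com:8080/todo')
        HttpURLConnection connection = url.openConnection()
        connection.connectTimeout = 5000
        connection.requestMethod = 'HEAD'
        try {
            // A non-200 response code fails the build immediately.
            assert connection.responseCode == 200
            println 'Smoke test passed: application is reachable.'
        } finally {
            connection.disconnect()
        }
    }
}
```

Because the task only checks reachability, it runs in seconds and gives you a fast verdict on whether the deployment itself succeeded before the slower acceptance test suite starts.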
In the previous chapters, we discussed the purpose and practical application of phases during the commit stage. We compiled the code, ran automated unit and integration tests, produced code quality metrics, assembled the binary artifact, and pushed it to a repository for later consumption. For a quick refresher on previously configured Jenkins jobs, please refer to earlier chapters.
While the commit stage asserts that the software works at the technical level, the acceptance stage verifies that it fulfills functional and nonfunctional requirements. To make this determination, you’ll need to retrieve the artifact from the binary repository and deploy it to a production-like test environment. You use smoke tests to make sure that the deployment was successful before a suite of automated acceptance tests is run to verify the application’s end-user functionality. In later stages, you reuse the already-downloaded artifact and deploy it to other environments: UAT for manual testing, and the production environment to get the software into the hands of end users.
To ensure repeatability for your deployments, the same code should be used across all environments. This means that the automation logic needs to use dynamic property values to target a particular environment. Environment-specific configuration becomes very readable when structured with closures and stored in a Groovy script. Groovy’s API class ConfigSlurper provides an easy-to-use mechanism for parsing these settings. To have the property values available for consumption across all projects of your build, you coded a task that reads the Groovy script during Gradle’s configuration lifecycle phase.
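A minimal sketch of how such a task-driven configuration lookup might look in the root build script; the file name, property layout, and `-Penv` convention are assumptions for illustration:

```groovy
// build.gradle (sketch): resolve environment-specific settings during
// the configuration phase and share them with all projects.
def loadConfiguration = {
    // Select the target environment via -Penv=..., defaulting to 'test'.
    def env = project.hasProperty('env') ? project.getProperty('env') : 'test'
    def configFile = file('config/environments.groovy')
    // ConfigSlurper returns only the block matching the chosen environment.
    new ConfigSlurper(env).parse(configFile.toURI().toURL())
}

allprojects {
    // Expose the parsed settings as an extra property on every project.
    ext.config = loadConfiguration()
}

task printDeploymentTarget {
    doLast {
        println "Deploying to ${config.server.hostname}:${config.server.port}"
    }
}
```

Running `gradle printDeploymentTarget -Penv=production` would then print the production host, while omitting the property falls back to the test environment.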
By the end of this chapter, you extended your build pipeline with manual, push-button deployment capabilities. In Jenkins, you set up three deployment jobs targeting the test, UAT, and production environments, including their corresponding deployment tests. With these last steps completed, you built a fully functional, end-to-end build pipeline. Together, we explored the tooling and methods that will enable you to implement your own build pipeline with Gradle and Jenkins.