9 Deployment with containers and schedulers

 

This chapter covers

  • Using containers to package a microservice into a deployable artifact
  • How to run a microservice on Kubernetes, a container scheduler
  • Core Kubernetes concepts, including pods, services, and replica sets
  • Performing canary deployments and rollbacks on Kubernetes

Containers are an elegant abstraction for deploying and running microservices, offering consistent cross-language packaging, application-level isolation, and rapid startup time.
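
For example, packaging a service as a container image is typically driven by a short build recipe. The following Dockerfile is a minimal sketch, assuming a hypothetical Python service with an app.py entry point listening on port 8080; the base image, file names, and port are illustrative, not taken from a specific example application:

    # Build a small image for a hypothetical Python microservice
    FROM python:3.11-slim
    WORKDIR /app

    # Install dependencies first so this layer is cached between builds
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy the service code into the image
    COPY . .

    # Document the port the service listens on and define the start command
    EXPOSE 8080
    CMD ["python", "app.py"]

Building this file with docker build produces an immutable image that runs the same way on any host with a container runtime, which is what makes containers a consistent, cross-language packaging format.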

In turn, container schedulers provide a higher-level deployment platform for containers by orchestrating and managing the execution of different workloads across a pool of underlying infrastructure resources. Schedulers also provide (or tightly integrate with) other tools, such as networking, service discovery, load balancing, and configuration management, to deliver a holistic environment for running service-based applications.
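
To make that concrete, the following Kubernetes manifest is a minimal sketch of how a single service might be declared to a scheduler: a Deployment that asks for three replicas of a container image, and a Service that load-balances traffic across them. The service name, image reference, and ports are placeholders rather than values from this book's example application; the concepts the manifest relies on, such as pods, replica sets, and services, are covered in section 9.2.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: market-data              # hypothetical service name
    spec:
      replicas: 3                    # the scheduler keeps three pods running
      selector:
        matchLabels:
          app: market-data
      template:
        metadata:
          labels:
            app: market-data
        spec:
          containers:
          - name: market-data
            image: registry.example.com/market-data:1.0.0   # placeholder image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: market-data
    spec:
      selector:
        app: market-data             # route traffic to matching pods
      ports:
      - port: 80
        targetPort: 8080

Applying a manifest like this with kubectl apply hands responsibility for placement, restarts, and load balancing to the cluster, rather than to any individual machine.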

Containers aren’t a requirement for working with microservices. You can deploy services in many ways, such as the single-service-per-VM model we outlined in the previous chapter. But together with a scheduler, containers provide a particularly elegant and flexible approach that meets our two deployment goals: speed and automation.

9.1 Containerizing a service

9.1.1 Working with images

9.1.2 Building your image

9.1.3 Running containers

9.1.4 Storing an image

9.2 Deploying to a cluster

9.2.1 Designing and running pods

9.2.2 Load balancing

9.2.3 A quick look under the hood

9.2.4 Health checks

9.2.5 Deploying a new version

9.2.6 Rolling back

9.2.7 Connecting multiple services

Summary