3 Deploying to Kubernetes
This chapter covers:
- Deploying a containerized application to Kubernetes on a public cloud and exposing it to the Internet
- Understanding the Kubernetes terms and concepts related to specifying and hosting application deployments
- Updating deployments with a new version of the application
- Running a version of Kubernetes locally for testing and development
In chapter 2, we covered how to containerize your application. If you stopped there, you would have a portable, reproducible environment for your app, not to mention a convenient developer setup, but you might have trouble scaling that app when you go to production.
If your deployment model is extremely simple, say one container per VM, then you’re more or less set. You can deploy that container directly into a system that scales the VMs as needed, with an instance of your container running on each. In such a setup, your container is logically acting like a VM image, just with better tooling and more convenient packaging. For most teams, though, the world is more complex than that.
Running more than a single container per machine is typically done with what the industry calls a container orchestrator, which is a fancy way of saying tooling that schedules and monitors containers across your machines. A good orchestrator can pool the resources of those machines, work out how to fit your containers into the available capacity, and monitor the containers for issues like a crashed or hung process.
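To make this concrete, here is a minimal sketch of the kind of desired state you declare to an orchestrator like Kubernetes: a Deployment manifest asking it to keep three replicas of a container running. The names, labels, and image reference here are hypothetical placeholders, not from a real application.

```yaml
# deployment.yaml -- a hypothetical app; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # the orchestrator keeps 3 copies running at all times
  selector:
    matchLabels:
      app: my-app            # ties the Deployment to Pods carrying this label
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: docker.io/example/my-app:1   # hypothetical container image
```

Rather than placing containers on machines yourself, you hand a specification like this to the orchestrator, and it continuously works to make reality match it, restarting or rescheduling containers as needed.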