18 Deploying Kubernetes: Multinode and multiarchitecture clusters

You can do an awful lot with Kubernetes without understanding the architecture of the cluster and how all the pieces fit together—you’ve already done exactly that in the previous 17 chapters. But that additional knowledge will help you understand what high availability looks like in Kubernetes and what you need to think about if you want to run your own cluster. The best way to learn about all of the Kubernetes components is to install a cluster from scratch, and that’s what you’ll do in this chapter. The exercises start with plain virtual machines and walk you through a basic Kubernetes setup for a multinode cluster, which you can use to run some of the sample apps you’re familiar with from the book.
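
If you want a feel for what that involves before you start, the sketch below shows the general shape of a cluster installation with kubeadm, which is one common way to bootstrap a cluster from plain machines: you initialize the control plane on one node, install a Pod network add-on, and then join the worker nodes. The specific values here (the Pod network range, the API server address, the token, and the add-on manifest URL) are placeholders rather than the configuration you’ll use in the exercises.

```
# On the machine that will be the control plane node.
# The Pod network range is a placeholder - it needs to match
# whichever network add-on you install.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for your user account (kubeadm init prints the steps),
# then install a Pod network add-on - the manifest URL depends on
# which add-on you choose.
kubectl apply -f <network-add-on-manifest-url>

# On each worker node, join the cluster using the address, token,
# and CA certificate hash printed by kubeadm init (placeholders here).
sudo kubeadm join 192.168.1.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

You don’t need to construct the join command yourself: kubeadm init prints the exact command, complete with token and certificate hash, when it finishes.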

Every app we’ve run so far has used Linux containers built for Intel 64-bit processors, but Kubernetes is a multiarchitecture platform. A single cluster can have nodes with different operating systems and different CPU architectures, so you can run a wide variety of workloads. In this chapter, you’ll also add a Windows Server node to your cluster and run some Windows applications. That part is optional, but even if you’re not a Windows user, it’s worth reading through those exercises to see how Kubernetes uses the same modeling language for different architectures, with just a few tweaks to the manifests.
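
As a preview of what those tweaks look like, here’s a minimal sketch of a Deployment that is pinned to Windows nodes. The app name and container image are hypothetical placeholders, not apps from this book; the part that matters is the nodeSelector, which uses the standard kubernetes.io/os node label so the Pods are only scheduled onto Windows nodes.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-windows                  # hypothetical app name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami-windows
  template:
    metadata:
      labels:
        app: whoami-windows
    spec:
      containers:
        - name: web
          image: example/whoami:windows-x64   # placeholder Windows container image
      nodeSelector:
        kubernetes.io/os: windows       # standard label, set by the kubelet on every node
```

Without a selector like that, the scheduler could place the Pod on a Linux node, where the Windows container image can’t run—everything else in the spec is the same modeling language you’ve used throughout the book.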

18.1 What’s inside a Kubernetes cluster?

18.2 Initializing the control plane

18.3 Adding nodes and running Linux workloads

18.4 Adding Windows nodes and running hybrid workloads

18.5 Understanding Kubernetes at scale

18.6 Lab