8 Node Feature Selection
This chapter covers:
- Selecting nodes with specific hardware properties
- Using taints and tolerations to prevent scheduling by default on special nodes
- Keeping workloads separated on discrete nodes
- Avoiding a single point of failure with a highly available deployment strategy
- Targeting and avoiding specific groups of nodes for deployments
So far, this book has treated the compute nodes in the cluster—the machines responsible for actually running your containers—as interchangeable. Different Pods may request more or less CPU, but under the hood they're all running on the same type of node.
One of the fundamental properties of cloud computing is that even when you're using an abstract platform that takes care of much of the low-level compute provisioning for you (as Kubernetes platforms are capable of doing), you may still care to some extent about the servers that actually run your workloads. Serverless is nice, but at the end of the day, the workload runs on a computer, and you can't always escape the properties of that machine, nor do you always want to.
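As a small taste of what's ahead, here is a minimal sketch of the simplest way to express such a preference: a Pod that uses a nodeSelector to require a node with a particular CPU architecture, via the well-known kubernetes.io/arch label. The Pod name, container name, and image are placeholders chosen for illustration.

```yaml
# A minimal sketch: this Pod will only be scheduled onto nodes
# whose kubernetes.io/arch label has the value arm64.
# The names and image below are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: arch-demo
spec:
  containers:
  - name: app
    image: nginx
  nodeSelector:
    kubernetes.io/arch: arm64
```

The rest of the chapter builds on this idea, covering how to target (or avoid) nodes with particular properties, and how taints and tolerations keep workloads off special nodes by default.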