8 Node Feature Selection


This chapter covers:

  • Selecting nodes with specific hardware properties
  • Using taints and tolerations to prevent scheduling by default on special nodes
  • Keeping workloads separated on discrete nodes
  • Avoiding a single point of failure with a highly available deployment strategy
  • Targeting and avoiding specific groups of nodes for deployments

So far this book has treated the compute nodes in the cluster—the machines responsible for actually running your containers—as equal. Different Pods may request more or less CPU, but under the hood they all run on the same type of node.

One of the fundamental properties of cloud computing is that even when you’re using an abstract platform that takes care of much of the low-level compute provisioning for you (as Kubernetes platforms are capable of doing), you may still care to some extent about the servers that are actually running your workloads. Serverless is nice, but at the end of the day, the workload is running on a computer, and you can’t always escape the properties of that machine, nor do you always want to.
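
As a rough sketch of where this chapter is headed, the following Pod manifest asks the scheduler to place the Pod only on nodes that carry a particular hardware label. The label key and value shown (example.com/gpu: "true") are hypothetical placeholders rather than labels your cluster necessarily has; the sections that follow cover the real node labels, selectors, affinities, and taints you would use in practice.

apiVersion: v1
kind: Pod
metadata:
  name: gpu-task                # hypothetical example Pod
spec:
  nodeSelector:
    example.com/gpu: "true"     # hypothetical node label; substitute one your nodes actually carry
  containers:
  - name: app
    image: ubuntu               # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 512Mi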

8.1 Node Feature Selection

8.1.1 Node Selectors

8.1.2 Node Affinity and Anti-Affinity

8.1.3 Tainting Nodes to Prevent Scheduling by Default

8.1.4 Workload Separation

8.2 Placing Pods

8.2.1 Building Highly Available Deployments

8.2.2 Collocating Interdependent Pods

8.2.3 Avoiding Certain Pods

8.3 Debugging Placement Issues

8.4 Summary
