8 Node feature selection
This chapter covers

  • Selecting nodes with specific hardware properties
  • Using taints and tolerations to govern scheduling behavior on nodes with special hardware
  • Keeping workloads separated on discrete nodes
  • Avoiding a single point of failure with a highly available deployment strategy
  • Grouping some Pods together on a node while avoiding nodes that contain specific other Pods

So far, this book has treated the compute nodes in the cluster—the machines responsible for actually running your containers—as equal. Different Pods may request more or less CPU, but they're all running on the same type of node under the hood.

One of the fundamental properties of cloud computing is that even when you're using an abstract platform that handles much of the low-level compute provisioning for you, as Kubernetes platforms can, you may still care to some extent about the servers actually running your workloads. Serverless is a nice concept, but at the end of the day the workload runs on a computer, and you can't always escape the properties of that machine—nor do you always want to.

8.1 Node feature selection

8.1.1 Node selectors

8.1.2 Node affinity and anti-affinity

8.1.3 Tainting nodes to prevent scheduling by default

8.1.4 Workload separation

8.2 Placing Pods

8.2.1 Building highly available deployments

8.2.2 Co-locating interdependent Pods

8.2.3 Avoiding certain Pods

8.3 Debugging placement problems

8.3.1 Placement rules don’t appear to work

8.3.2 Pods are pending

Summary