
12 Etcd and the control plane


This chapter covers

  • Comparing etcd v3 with etcd v2
  • Looking at a "watch" in Kubernetes
  • Exploring the importance of strict consistency in Kubernetes
  • Load balancing against etcd nodes
  • Looking at etcd’s security model in the Kubernetes context

As discussed in chapter 11, etcd is a key-value store with strong consistency guarantees, similar to ZooKeeper (which underpins popular technologies such as HBase and Kafka).
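To make the "key-value store" idea concrete, the following minimal Python sketch models etcd's flat key space and a prefix read. Kubernetes stores its API objects under key paths (conventionally beneath /registry/); the object names and payloads here are illustrative, not real cluster data.

```python
# A minimal sketch of etcd's data model: a flat key-value space in
# which clients read whole prefixes. This is a toy dict, not etcd.
store = {}

def put(key, value):
    store[key] = value

def get_prefix(prefix):
    # etcd v3 range reads let clients fetch every key under a prefix,
    # which is how the APIServer lists "all Pods in a namespace."
    return {k: v for k, v in store.items() if k.startswith(prefix)}

# Illustrative keys in the style Kubernetes uses under /registry/.
put("/registry/pods/default/nginx", '{"kind": "Pod", "name": "nginx"}')
put("/registry/pods/kube-system/coredns", '{"kind": "Pod", "name": "coredns"}')

print(sorted(get_prefix("/registry/pods/default/")))
# → ['/registry/pods/default/nginx']
```

A prefix read against the real etcd would be `etcdctl get /registry/pods/default/ --prefix`; the dict above only mimics the semantics, not the consistency guarantees that the rest of this chapter explores.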

A Kubernetes cluster at its core consists of:

  • the Kubelet
  • the Scheduler
  • the Kubernetes Controller Manager
  • the APIServer

These components all speak to one another by updating state in the APIServer. For example, if the Scheduler wants to run a Pod on a specific node, it does so by modifying that Pod’s definition in the APIServer. Likewise, if the Kubelet needs to broadcast an event while starting a Pod, it does so by sending a message to the APIServer.
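The Scheduler’s write in the example above is, concretely, a Binding object posted through the APIServer, which records which Node a Pod should run on. A sketch of such an object (the Pod and Node names are illustrative):

```yaml
apiVersion: v1
kind: Binding
metadata:
  name: nginx          # the Pod being scheduled (illustrative name)
  namespace: default
target:
  apiVersion: v1
  kind: Node
  name: worker-1       # the Node the Scheduler chose (illustrative)
```

The Scheduler never contacts the Kubelet on worker-1; it only records its decision, and the Kubelet discovers the assignment by watching the APIServer.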

The Scheduler, Kubelet, and Controller Manager all intermediate their communication through the APIServer, which keeps them strongly decoupled. For example, the Scheduler doesn’t know how a Kubelet runs Pods, and the Kubelet doesn’t know how the Scheduler assigns Pods to nodes.

12.1 Notes for the impatient

12.1.1 Visualizing etcd performance with Prometheus

12.1.2 Knowing when to tune etcd

12.1.3 Example: A quick health check of etcd

12.1.4 etcd v3 vs. v2

12.2 etcd as a datastore

12.2.1 The Watch: Can you run Kubernetes on other databases?

12.2.2 Strict consistency

12.2.3 Fsync operations make etcd consistent

12.3 Looking at the interface for Kubernetes to etcd

12.4 etcd’s job is to keep the facts straight

12.4.1 The etcd write-ahead log

12.4.2 Effect on Kubernetes

12.5 The CAP theorem

12.6 etcd load balancing at the client level

12.6.1 Size limitations: what (not) to worry about

12.7 etcd encryption at rest