Part 2 Going to production


Now that you have learned the fundamentals of Kubernetes, such as creating and deploying containers, setting resource limits, and configuring liveness and readiness probes, it's time to take things to the next level. This part covers what you need to know to build production systems on Kubernetes. That includes scaling your application both manually and automatically (and designing the application so it can scale in the first place); connecting multiple services together, potentially ones managed by different teams; and storing your Kubernetes configuration alongside your code while keeping everything updated and secure.

Additional workload options are introduced beyond the stateless Deployment covered in Part 1, including workloads that require state (attached disks), background task queues, batch jobs, and daemon Pods that run on every node. You'll learn how to inform Kubernetes of your scheduling requirements, such as spreading your Pods out or grouping them together, and how to target specific hardware such as Arm architecture, GPUs, and Spot compute.
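As a small taste of the hardware targeting covered later, a Pod can be steered to nodes of a particular architecture with a node selector. The sketch below uses the standard `kubernetes.io/arch` node label; the Pod name and container image are placeholders, not examples from this book:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm-example            # placeholder name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64  # well-known label set on Arm nodes
  containers:
  - name: app
    image: nginx               # placeholder image
```

The scheduler will only place this Pod on nodes whose labels match the selector; later chapters cover richer mechanisms like affinity rules and topology spread constraints.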