3 Building applications on Kubernetes

This chapter covers

  • Setting up the infrastructure backbone of your ML platform
  • Containerizing applications with Docker
  • Orchestrating deployments with Kubernetes
  • Automating builds and deployments
  • Implementing monitoring for production applications

As an ML engineer, one of your primary responsibilities is to build and maintain the infrastructure that powers machine learning systems. Whether you're deploying models, setting up pipelines, or managing a complete ML platform, you need a solid foundation in modern infrastructure tools and practices (Figure 3.1).

Figure 3.1 The mental map now shifts focus to the foundation of the ML platform, primarily Kubernetes, along with key practices like CI/CD and monitoring, which are essential for deploying and maintaining ML systems.

We'll tackle the essential DevOps tools and practices you need to build reliable ML systems. We'll start with the basics and progressively build your knowledge through hands-on examples. By the end, you'll understand how to:

  • Package applications consistently with Docker
  • Deploy and manage applications on Kubernetes
  • Automate workflows with CI/CD
  • Monitor application health and performance

While these tools aren't specific to machine learning, they form the foundation that enables us to build robust ML systems at scale. Let's begin with Docker, the tool that helps us package our applications consistently.
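As a small preview of what's ahead, a Dockerfile describes how to package an application and its dependencies into an image. The sketch below is illustrative only: the file names (`app.py`, `requirements.txt`) and base image are assumptions for the example, not the chapter's actual application.

```dockerfile
# Illustrative sketch: a minimal image for a Python service.
# File names and the base image are assumptions for this example.
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# across code changes and rebuilds stay fast.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY app.py .

# Start the service when a container runs from this image
CMD ["python", "app.py"]
```

Ordering the dependency install before the code copy is a common convention that keeps rebuilds fast; we'll cover Dockerfiles in detail in section 3.2.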

3.1 Containers and tooling

3.2 Docker

3.2.1 Write application code

3.2.2 Write Dockerfile

3.2.3 Building and pushing a Docker image

3.3 Kubernetes

3.3.1 Kubernetes architecture overview

3.3.2 Kubectl

3.3.3 Kubernetes objects

3.3.4 Networking and services

3.3.5 Other objects

3.3.6 Helm charts

3.3.7 Conclusion

3.4 Continuous integration and deployment

3.4.1 GitLab CI

3.4.2 Argo CD

3.5 Prometheus and Grafana

3.6 Summary