6 Productionizing ML Models


This chapter covers

  • Deploying ML models as services with BentoML
  • Tracking data drift with Evidently

In the last chapter, we learned how to orchestrate ML pipelines with Kubeflow Pipelines, a powerful tool for building, scaling, and managing machine learning workflows.

This chapter delves into the crucial post-training phases of a machine learning model's life cycle: deployment and monitoring. We explore how to serve models as APIs using BentoML, a platform that simplifies deployment and reduces reliance on complex infrastructure setup. We also tackle data drift, a common phenomenon in which the statistical properties of production data diverge from the data the model was trained on, degrading model performance over time. We introduce Evidently, a tool that detects and analyzes data drift, enabling us to take corrective action and maintain model accuracy in real-world scenarios.

Through practical examples and step-by-step guides, this chapter equips you with the knowledge and tools to confidently deploy and monitor your models, ensuring their long-term effectiveness.

6.1 BentoML as a Deployment Platform

6.1.1 Building A Bento

6.1.2 Deploying a Bento

6.2 Evidently For Data Drift Monitoring

6.2.1 Data drift detection report and dashboard

6.2.2 Data drift detection Kubeflow pipeline component

6.2.3 Data drift detection for model deployed as API

6.3 Summary