6 Model serving design


This chapter covers

  • Defining model serving
  • Common model serving challenges and approaches
  • Designing model serving systems for different user scenarios

Model serving is the process of executing a model on user input data. Among all the activities in a deep learning system, model serving is the one closest to the end customers. After all the hard work of dataset preparation, training algorithm development, hyperparameter tuning, and test result evaluation, the resulting models are presented to customers by model serving services.

Take speech translation as an example. After training a sequence-to-sequence model for voice translation, the team is ready to present it to the world. For people to use this model remotely, the model is usually hosted in a web service and exposed via a web API. Then we (the customers) can send our voice audio file over the web API and get back a translated voice audio file. All the model loading and execution happens at the web service backend. Everything included in this user workflow—service, model files, and model execution—is called model serving.
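The workflow above can be sketched in a few lines of Python. This is a minimal illustration, not a real serving stack: the model is a stub, and names such as TranslationModel, load_model, and handle_translate_request are invented for this example. In a production system the handler would sit behind a web framework's POST endpoint, and load_model would deserialize trained weights from disk.

```python
class TranslationModel:
    """Stub standing in for a trained sequence-to-sequence model."""

    def predict(self, audio_bytes: bytes) -> bytes:
        # A real model would decode the audio, run inference, and
        # synthesize translated speech; here we just tag the input
        # so the data flow is visible.
        return b"translated:" + audio_bytes


def load_model(path: str) -> TranslationModel:
    # In practice this deserializes model weights from storage;
    # the path here is purely illustrative.
    return TranslationModel()


# The serving backend loads the model once at startup ...
MODEL = load_model("/models/seq2seq-translation")


def handle_translate_request(request_body: bytes) -> bytes:
    """Handler behind a web API endpoint such as POST /translate:
    receives the uploaded audio bytes, runs the model, and returns
    the translated audio bytes to the caller."""
    return MODEL.predict(request_body)
```

Note the separation this sketch makes explicit: the model is loaded once when the service starts, while each incoming request only pays the cost of a single prediction call. Later sections of this chapter build on that split when comparing serving strategies.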

6.1 Explaining model serving

6.1.1 What is a machine learning model?

6.1.2 Model prediction and inference

6.1.3 What is model serving?

6.1.4 Model serving challenges

6.1.5 Model serving terminology

6.2 Common model serving strategies

6.2.1 Direct model embedding

6.2.2 Model service

6.2.3 Model server

6.3 Designing a prediction service

6.3.1 Single model application

6.3.2 Multitenant application

6.3.3 Supporting multiple applications in one system

6.3.4 Common prediction service requirements