Chapter 8. Serving predictions over the web

 

This chapter covers

  • Setting up SageMaker to serve predictions over the web
  • Building and deploying a serverless API to deliver SageMaker predictions
  • Sending data to the API and receiving predictions via a web browser

Until now, the machine learning models you built could be used only from within SageMaker. To provide a prediction or a decision to someone else, you would have to run the query yourself from a Jupyter notebook in SageMaker and send them the results. That, of course, is not how AWS intended SageMaker to be used: they intended that your users would be able to access predictions and decisions over the web. In this chapter, you'll enable your users to do just that.
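From the user's point of view, the goal of this chapter boils down to a simple round trip: post some data to a web URL and get a prediction back as JSON. A minimal sketch of that client-side exchange, assuming a hypothetical API URL and a hypothetical JSON request and response shape (the real endpoint and formats are built over the course of this chapter), with the network call mocked out:

```python
import json

# Hypothetical API URL -- the real endpoint is created later in the chapter.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/predict"

def build_request(tweet_text):
    """Package a tweet as the JSON body the web API will expect."""
    return json.dumps({"tweet": tweet_text})

def parse_response(body):
    """Pull the model's decision out of the API's JSON response."""
    result = json.loads(body)
    return result["escalate"], result["score"]

# Round trip with a mocked response (no network call):
payload = build_request("My order never arrived and nobody replies!")
escalate, score = parse_response('{"escalate": true, "score": 0.87}')
```

The point of the sketch is the contract, not the transport: once the request and response bodies are agreed on, any HTTP client (a browser, `curl`, or Python's `urllib`) can talk to the API.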

Serving tweets

In chapter 4, you helped Naomi identify which tweets should be escalated to her support team and which could be handled by an automated bot. One thing you didn't do for Naomi was give her a way to send tweets to the machine learning model and receive a decision about whether a tweet should be escalated. In this chapter, you will rectify that.
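The flow Naomi needs can be sketched as a serverless function that receives a tweet over the web, forwards it to the SageMaker endpoint, and returns the decision. A minimal sketch, assuming a hypothetical endpoint name, request shape, and response format (the real ones are set up later in this chapter); the `runtime` parameter exists only so the handler can be exercised without AWS credentials:

```python
import json

# Hypothetical endpoint name -- the real one is created when you set up
# the SageMaker endpoint later in the chapter.
ENDPOINT_NAME = "customer-support-escalation"

def lambda_handler(event, context, runtime=None):
    """Forward the tweet in the request body to the SageMaker endpoint
    and return its escalate-or-bot decision as a JSON response."""
    if runtime is None:
        import boto3  # deferred so the handler can be tested with a fake runtime
        runtime = boto3.client("sagemaker-runtime")
    tweet = json.loads(event["body"])["tweet"]
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"instances": [tweet]}),
    )
    # Assumed response shape: {"predictions": [<escalation probability>]}.
    score = json.loads(response["Body"].read())["predictions"][0]
    return {
        "statusCode": 200,
        "body": json.dumps({"escalate": score >= 0.5, "score": score}),
    }
```

The 0.5 cutoff here is a placeholder; in practice the threshold would be tuned against the model you trained for Naomi in chapter 4.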

8.1. Why is serving decisions and predictions over the web so difficult?

 
 

8.2. Overview of steps for this chapter

 

8.3. The SageMaker endpoint

 

8.4. Setting up the SageMaker endpoint

 

8.5. Setting up the serverless API endpoint

 
 

8.6. Creating the web endpoint

 
 
 
 

8.7. Serving decisions

 
 
 

Summary

 
 