This chapter covers
- Setting up SageMaker to serve predictions over the web
- Building and deploying a serverless API to deliver SageMaker predictions
- Sending data to the API and receiving predictions via a web browser
Until now, the machine learning models you've built could be used only from within SageMaker. If you wanted to provide a prediction or a decision to someone else, you had to submit the query from a Jupyter notebook running in SageMaker and send them the results. This, of course, is not what AWS intended for SageMaker: the intent is for your users to access predictions and decisions over the web. In this chapter, you'll enable your users to do just that.
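To make that limitation concrete, here is a minimal sketch of the notebook-only workflow: calling a deployed SageMaker endpoint with boto3 from inside a Jupyter notebook. The endpoint name and the JSON payload shape are illustrative assumptions, not the exact ones used later in the chapter.

```python
# A sketch of the notebook-only workflow described above, assuming a model is
# already deployed to a (hypothetical) endpoint named 'tweet-escalation-endpoint'.
# The JSON payload format is also an assumption for illustration.
import json
import boto3

runtime = boto3.client('sagemaker-runtime')

payload = {'instances': ['My order still has not arrived and no one replies to my emails!']}

response = runtime.invoke_endpoint(
    EndpointName='tweet-escalation-endpoint',  # hypothetical endpoint name
    ContentType='application/json',
    Body=json.dumps(payload))

# The prediction comes back as a JSON byte stream; read and print the decision.
print(json.loads(response['Body'].read().decode('utf-8')))
```

This works, but only for someone sitting inside SageMaker with AWS credentials, which is exactly the gap this chapter closes.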
Serving tweets
In chapter 4, you helped Naomi identify which tweets should be escalated to her support team and which tweets could be handled by an automated bot. One of the things you didn’t do for Naomi was provide a way for her to send tweets to the machine learning model and receive a decision as to whether a tweet should be escalated. In this chapter, you will rectify that.
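Once a serverless API sits in front of the model, Naomi's team can request a decision over the web instead of asking you to run a notebook. The sketch below shows roughly what such a request could look like from Python; the API URL, the query parameter name, and the response shape are hypothetical placeholders until the API is actually built later in this chapter.

```python
# A rough sketch of requesting an escalation decision over the web, assuming
# the serverless API built later in this chapter. The URL, parameter name,
# and response format are hypothetical placeholders.
import requests  # third-party HTTP client, assumed installed

api_url = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/tweet'  # hypothetical URL

# The same request could be made from a web browser by putting the tweet
# text in the URL's query string.
response = requests.get(api_url, params={'tweet': 'Where is my order? Nobody answers my emails!'})

print(response.text)  # for example: {"escalate": true}
```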