This chapter covers:
- How to set up a SageMaker endpoint to serve predictions and decisions over the web
- How to build and deploy a serverless system to invoke the SageMaker endpoint
- How to send data to the endpoint and receive back predictions and decisions
Until now, the machine learning models you have built could be used only from within SageMaker. If you wanted to provide a prediction or a decision to someone, you would have to submit the query yourself from a Jupyter notebook running in SageMaker and send them the results. This, of course, is not what AWS intended for SageMaker. AWS intended that your users would be able to access predictions and decisions over the web. In this chapter, you'll enable your users to do just that.
In chapter 4, you helped Naomi identify which tweets should be escalated to her support team and which tweets could be handled by an automated bot.
One of the things you didn't do for Naomi in chapter 4 was provide a way for her to send tweets to the machine learning model and receive back a decision on whether each tweet should be escalated. In this chapter, you will rectify that.
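To give you a feel for where this chapter is heading, the following is a minimal sketch of how a deployed SageMaker endpoint can be invoked from Python using boto3. The endpoint name and the payload and response formats shown here are placeholders for illustration only; the actual values will depend on the endpoint you set up later in this chapter.

```python
import json
import boto3

# Placeholder name -- substitute the name of the endpoint you deploy
# later in this chapter.
ENDPOINT_NAME = 'tweet-escalation-endpoint'

# The sagemaker-runtime client is used to invoke deployed endpoints.
runtime = boto3.client('sagemaker-runtime')

def should_escalate(tweet_text):
    """Send a single tweet to the endpoint and return the model's response."""
    # Assumed JSON payload format; the real format depends on the model
    # hosted behind the endpoint.
    payload = json.dumps({'instances': [tweet_text]})
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType='application/json',
        Body=payload,
    )
    return json.loads(response['Body'].read().decode('utf-8'))

print(should_escalate('My order arrived broken and nobody will reply to my emails!'))
```

Running this directly requires AWS credentials with permission to call the endpoint. The point of the serverless system you'll build in this chapter is to wrap this kind of call behind a web URL, so Naomi's team can get decisions without needing SageMaker or AWS credentials at all.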