Appendix B. Running RStudio Server on an EC2 GPU instance

This appendix provides a step-by-step guide to running deep learning in RStudio Server on an AWS GPU instance. This is the perfect setup for deep-learning research if you don’t have a GPU on your local machine. You should also consult https://tensorflow.rstudio.com/tools/cloud_gpu, which includes an always-up-to-date version of these instructions as well as details on other cloud GPU options.

B.1. Why use AWS for deep learning?

Many deep-learning applications are computationally intensive and can take hours or even days to run on a laptop’s CPU cores. Running on a GPU can speed up training and inference considerably (often by a factor of 5 to 10 when going from a modern CPU to a single modern GPU), but you may not have access to a GPU on your local machine. Running RStudio Server on AWS gives you the same experience as working locally while letting you use one or several GPUs in the cloud. And you pay only for what you use, which can compare favorably to investing in your own GPU(s) if you use deep learning only occasionally.

B.2. Why not use AWS for deep learning?

AWS GPU instances can quickly become expensive. The instance type we suggest costs $0.90 per hour, which is fine for occasional use; but if you’re going to run experiments for several hours a day, every day, you’re better off building your own deep-learning machine with a dedicated GPU such as a TITAN X or GTX 1080 Ti.
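
To make the trade-off concrete, here is a rough back-of-the-envelope comparison in R. The $0.90 hourly rate is the one quoted above; the daily usage and the roughly $700 price assumed for a dedicated GPU are illustrative assumptions, not quotes.

# Rough break-even estimate: renting a cloud GPU vs. buying your own.
hourly_rate   <- 0.90  # USD per hour for the suggested instance (from the text)
hours_per_day <- 4     # assumed daily usage (illustrative)
gpu_price     <- 700   # assumed price of a dedicated GPU (illustrative)

monthly_cloud_cost <- hourly_rate * hours_per_day * 30           # about 108 USD
break_even_days    <- gpu_price / (hourly_rate * hours_per_day)  # about 194 days

At a few hours of use per day, renting costs on the order of $100 per month, so a dedicated GPU pays for itself within roughly six to seven months of daily use; below that level of use, paying by the hour is usually the cheaper option.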

B.3. Setting up an AWS GPU instance

B.4. Accessing RStudio Server

B.5. Installing Keras
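
Once you can log in to RStudio Server on the instance, installing Keras from the R console typically amounts to the following. This is a minimal sketch: it assumes the keras R package and its install_keras() helper, and the tensorflow = "gpu" argument reflects how a GPU-enabled TensorFlow build was requested in versions of the package contemporary with these instructions; the exact options may differ in newer releases.

# Install the R interface to Keras, then use it to install the underlying
# Python libraries (TensorFlow with GPU support) on the instance.
install.packages("keras")
library(keras)
install_keras(tensorflow = "gpu")  # request a GPU-enabled TensorFlow build

If the installation succeeds, loading library(keras) in a fresh R session is enough to start defining and training models that run on the instance’s GPU.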