
Welcome


Welcome, and thank you for reading CUDA for Deep Learning in MEAP, the Manning Early Access Program. Because this is an early access manuscript, you'll notice some rough edges, and I hope you'll give me feedback by leaving your comments online in liveBook. You'll get new chapters as quickly as I can write them!

Deep learning frameworks have made it remarkably easy to build and train powerful models, but that convenience often comes at the cost of insight. Kernels, memory movement, parallelism, and hardware features like tensor cores quietly determine whether a model runs efficiently or leaves enormous performance untapped. This book is about closing that gap: connecting the abstractions you use every day with the low-level mechanisms that actually execute your models.

I’ve written this book for readers who want to move beyond treating GPUs as a black box and instead understand—and control—the computations that power modern deep learning systems. You’ll get the most out of it if you’re comfortable programming in C/C++ and Python and have experience training or deploying neural networks. CUDA experience is optional; curiosity about performance is not.