Build a Text-to-Image Generator (from Scratch)
about this book

This book was written with one guiding principle: the best way to truly understand how something works is to build it from the ground up. Build a Text-to-Image Generator (from Scratch) takes this philosophy and applies it to one of the most exciting areas in AI today: text-to-image generation. Rather than treating modern AI systems as impenetrable black boxes, this book guides you step-by-step through the construction of the core components that make them work: transformers, vision models, diffusion processes, and multimodal architectures. By the end, you’ll know not only how to use state-of-the-art models such as Stable Diffusion and DALL-E but also how to re-create simplified versions of them yourself, giving you both practical skills and a deep conceptual foundation.

Who should read this book

This book is written for developers, researchers, students, and curious practitioners who want to move beyond simply running prebuilt AI models and learn how they are designed. You should have a solid command of Python and a working knowledge of machine learning, especially neural networks in PyTorch. A background in deep learning fundamentals, such as convolutional networks, embeddings, and training loops, will be helpful, though the book introduces each concept in context. Whether you’re an engineer seeking to deepen your AI skills, a researcher exploring multimodal learning, or simply an enthusiast who learns best by coding, this book is for you.

How this book is organized: A road map

About the code

Software and hardware requirements

How to use this book

liveBook discussion forum