Preface

One day, my team leader came to me with a simple question: “Hey, there’s a new messaging system out there. Can you check if it could be useful for us?” That was how my journey with Apache Kafka began.

At first glance, Kafka’s architecture felt clean and elegant. But implementing that first project was anything but seamless. The system was too distributed, too low-level, and lacked the tooling we take for granted today. I quickly learned that while Kafka had immense potential, it also demanded a deeper understanding than most systems I had worked with before.

The next chapter in my relationship with Kafka began when I was asked to create a course explaining its concepts. Teaching forced me to find clear ways to communicate the ideas behind event-driven architecture—not just how Kafka works, but how to think about it. And I discovered that most people weren’t as interested in the implementation details as they were in the bigger question: How can we incorporate Kafka into our project?

The Kafka community has done a great job of building documentation, but most tutorials stop at “how to get Kafka running.” Very few address the harder questions: Should we even use Kafka for this project? How can we fit it into our existing architecture? What patterns will help us design Kafka systems that stand the test of time? When I started, answers to these questions were scattered, incomplete, or hard-won through trial and error.