10 Organizing a Kafka Project
This chapter covers
- Defining project requirements: environment setup, non-functional requirements, infrastructure sizing, and resource quotas.
- Maintaining a Kafka cluster using tools, GitOps, and the Kafka Admin API.
- Testing Kafka applications.
Adopting Kafka succeeds or fails as much on process as on code. Teams often prototype and size infrastructure, yet overlook how Kafka fits the organization’s workflow—who owns events, how changes are approved, how reliability is proven. This chapter focuses on that gap. You’ll learn how to capture requirements and data contracts, how to maintain the cluster structure across environments, and how to test applications effectively. These practices reduce risk, prevent costly rework, and make performance, cost, and compliance predictable—turning a promising prototype into an operable, supportable system.
10.1 Defining Kafka Project Requirements
Projects start with requirements gathering. What should we analyze to make a Kafka project, and event-driven projects in general, successful? Are there Kafka-specific requirements we need to capture?
10.1.1 Field Notes: Use-Case Intake and Requirements
Max sighed as he sat down, shaking his head.
Max: Alright, team. Our project is starting to attract a lot of attention. Every time I go for lunch, someone stops me and asks if they can also use Kafka for their use case. It’s like we’ve created a new buzzword in the company.