
10 Organizing a Kafka project

 

This chapter covers

  • Defining project requirements, from environment setup and infrastructure sizing to nonfunctional requirements and resource quotas
  • Maintaining Kafka clusters using CLI and UI tools, GitOps, and the Kafka Admin API
  • Testing Kafka applications

A successful Kafka adoption depends as much on process as on code. Teams often prototype and size infrastructure yet overlook how Kafka fits the organization’s workflow—who owns events, how changes are approved, how reliability is proven. This chapter focuses on that gap. You’ll learn how to capture requirements and data contracts, how to maintain the cluster structure across environments, and how to test applications effectively. These practices reduce risk, prevent costly rework, and make performance, cost, and compliance predictable—turning a promising prototype into an operable, supportable system. Without this structure, even a well-intentioned Kafka platform can quickly spiral into an unmanageable mess.
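
As a brief preview of the cluster-maintenance tooling covered in section 10.2.3, the following minimal sketch uses the Kafka Admin API (the Java AdminClient) to create a topic programmatically. The bootstrap address, topic name, partition count, and replication factor are illustrative assumptions, not values prescribed by this chapter.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed local broker address; replace with your cluster's bootstrap servers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic: 3 partitions, replication factor 1 (single-broker dev setup)
            NewTopic topic = new NewTopic("customer-events", 3, (short) 1);
            // Block until the broker confirms creation (throws if the topic already exists)
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}

Section 10.2.2 looks at expressing the same kind of change declaratively through GitOps instead of one-off code or commands.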

10.1 Defining Kafka project requirements

10.1.1 Identifying event-driven workflows

10.1.2 Turning business workflows into events

10.1.3 Gathering functional requirements for Kafka topics

10.1.4 Identifying nonfunctional requirements

10.2 Maintaining cluster structure

10.2.1 Using CLI and UI tools

10.2.2 Using GitOps for Kafka configurations

10.2.3 Using the Kafka Admin API

10.2.4 Setting up environments

10.2.5 Choosing a solution for the Customer 360 ODS

10.3 Testing Kafka applications

10.3.1 Unit testing

10.3.2 Integration testing

10.3.3 Performance testing

10.4 Online resources

Summary