
9 Managing Kafka within the enterprise

 

This chapter covers

  • Handling configuration, leadership assignments, and state coordination
  • Exploring Kafka deployment strategies: on-premises, cloud-based, and hybrid solutions
  • Best practices for authentication, authorization, encryption, and protecting data

When you get close to launching a prototype into production, it’s time to think about the concrete operational details. How will your system manage metadata and coordination? Answering that question lets you size and place controllers, anticipate behavior during incidents, and plan migrations from older architectures (such as ZooKeeper) to KRaft. This is where another key actor in the Kafka ecosystem, the controller quorum, comes into play. Controllers manage cluster metadata and keep clusters operational, using quorum-based decisions to maintain fault tolerance. All of this needs to be configured.
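To give a feel for what that configuration involves, here is a minimal sketch of a server.properties fragment for one node in a hypothetical three-node KRaft quorum. The node IDs, host names, ports, and directory are placeholders; section 9.1.2 works through an example cluster configuration in detail.

# Minimal KRaft sketch for one node acting as both broker and controller
# (node IDs, host names, ports, and paths below are illustrative placeholders)
process.roles=broker,controller
node.id=1

# All three quorum voters, as id@host:port for each controller
controller.quorum.voters=1@kafka-1:9093,2@kafka-2:9093,3@kafka-3:9093

# One listener for clients, one reserved for controller traffic
listeners=PLAINTEXT://kafka-1:9092,CONTROLLER://kafka-1:9093
controller.listener.names=CONTROLLER

# Where this node stores its data and metadata logs
log.dirs=/var/kafka/data

With process.roles, you decide whether a node acts as a broker, a controller, or both; a dedicated controller node would list only controller there.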

Then there is the question of deployment. On-premises, cloud, and hybrid deployment models are all viable, so it’s important to compare them against your latency, cost, and operability requirements.

Finally, you’ll want to put security into practice end to end, including authentication (mTLS/SASL), authorization (ACLs), encryption in transit (TLS), protection of data at rest, and even optional end-to-end encryption. We’ll look at all these operational details in this chapter.
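As a rough preview of what that looks like on a broker, the following sketch combines TLS encryption, mTLS authentication, and ACL-based authorization. The host name, keystore paths, and passwords are placeholders, and sections 9.3.2 through 9.3.4 cover each piece in depth.

# Minimal broker-side security sketch (host, paths, and passwords are placeholders)
listeners=SSL://kafka-1:9094
security.inter.broker.protocol=SSL

# TLS encryption in transit
ssl.keystore.location=/etc/kafka/ssl/kafka-1.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/ssl/kafka.truststore.jks
ssl.truststore.password=changeit

# mTLS: require clients to present a certificate the broker trusts
ssl.client.auth=required

# Authorization via ACLs (StandardAuthorizer is the KRaft-mode authorizer)
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=false

With the authorizer in place, per-topic permissions are then granted with the kafka-acls.sh tool, for example allowing a specific principal Write access to a single topic.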

9.1 Managing metadata

9.1.1 Introducing KRaft controllers

9.1.2 Example of cluster configuration

9.1.3 Failover scenarios

9.1.4 Using ZooKeeper

9.2 Choosing a deployment solution

9.2.1 Choosing between on-premises and cloud Kafka deployment

9.2.2 Hybrid approach

9.2.3 Choosing the right deployment for the Customer 360 ODS

9.3 Creating a security solution

9.3.1 Kafka security overview

9.3.2 Encrypting using TLS

9.3.3 Authentication

9.3.4 Authorization

9.3.5 Protecting data at rest

9.3.6 Enabling security in the Customer 360 ODS

9.4 Online resources

Summary