4 Kafka clients


This chapter covers

  • Producing records with KafkaProducer
  • Understanding message delivery semantics
  • Consuming records with KafkaConsumer
  • Learning about Kafka’s exactly-once streaming
  • Using the Admin API for programmatic topic management
  • Handling multiple event types in a single topic

This chapter is where the “rubber meets the road.” We take what you’ve learned over the previous two chapters and apply it to start building event streaming applications. We’ll begin by working with the producer and consumer clients individually to understand how each one works.

4.1 Introducing Kafka clients

In their simplest form, clients operate like this: producers send records (in a produce request) to a broker, the broker stores them in a topic, consumers send a fetch request, and the broker retrieves records from the topic to fulfill that request (figure 4.1). When we talk about the Kafka event streaming platform, it’s common to mention producers and consumers. After all, it’s a safe assumption that you produce data for someone else to consume. But it’s essential to understand that the producers and consumers are unaware of each other; there’s no synchronization between these two clients.

Figure 4.1 Producers send batches of records to Kafka in a produce request.

KafkaProducer has one task: sending records to the broker. The records themselves contain all the information the broker needs to store them.
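To make that concrete, here’s a minimal sketch using the standard Java kafka-clients API; the broker address, topic name ("orders"), and key/value strings are hypothetical placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

// The record carries everything the broker needs: topic, optional key, and value
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    ProducerRecord<String, String> record =
        new ProducerRecord<>("orders", "order-123", "{\"amount\": 25.0}");
    producer.send(record);  // asynchronous: records are batched into produce requests
}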

4.2 Producing records with the KafkaProducer

4.2.1 Producer configurations

4.2.2 Kafka delivery semantics
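Delivery semantics are driven largely by producer configuration. As a hedged sketch, these are the settings (names from the standard ProducerConfig class) typically associated with at-least-once versus at-most-once delivery; props is the Properties object from the earlier sketch.

// At-least-once: wait for all in-sync replicas and retry failed sends;
// idempotence prevents retries from writing duplicates
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

// At-most-once (fire and forget): don't wait for acknowledgment, don't retry
// props.put(ProducerConfig.ACKS_CONFIG, "0");
// props.put(ProducerConfig.RETRIES_CONFIG, 0);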

4.2.3 Partition assignment
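For keyed records, the default partitioner hashes the serialized key bytes to pick a partition, so records with the same key always land in the same partition. A conceptual sketch of that mapping (the built-in partitioner uses the murmur2 hash shown here; the key and partition count are hypothetical):

import org.apache.kafka.common.utils.Utils;

byte[] keyBytes = "order-123".getBytes();  // the serialized record key
int numPartitions = 6;                     // hypothetical partition count
int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;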

4.2.4 Writing a custom partitioner
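A custom partitioner implements the producer API’s Partitioner interface. A minimal sketch; the class name and routing rule are hypothetical:

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class OrderPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0;  // hypothetical fallback for unkeyed records
        }
        // Hypothetical rule: send "priority" keys to partition 0, hash the rest
        if ("priority".equals(key)) {
            return 0;
        }
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) {}  // no-op: no custom configs

    @Override
    public void close() {}  // no-op: nothing to clean up
}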

4.2.5 Specifying a custom partitioner
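Plugging it in is a single configuration entry on the producer (OrderPartitioner is the sketch class above):

props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, OrderPartitioner.class.getName());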

4.2.6 Timestamps
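ProducerRecord also accepts an explicit timestamp in epoch milliseconds; pass null and the producer stamps the record with the current wall-clock time instead. A brief sketch with a hypothetical event time; the null second argument leaves partition selection to the partitioner:

ProducerRecord<String, String> timestamped = new ProducerRecord<>(
    "orders", null, 1696166400000L, "order-123", "{\"amount\": 25.0}");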

4.3 Consuming records with the KafkaConsumer
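As orientation before the details below, here’s a minimal subscribe-and-poll sketch with the standard Java kafka-clients consumer; the broker address, group id, and topic name are hypothetical.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("orders"));
    while (true) {
        // Each poll issues fetch requests for the partitions this consumer owns
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("key=%s value=%s%n", record.key(), record.value());
        }
    }
}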

4.3.1 The poll interval
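The setting involved is max.poll.interval.ms: if the application goes longer than this between poll() calls, the consumer is considered failed and its partitions are reassigned. A sketch of tuning it (the values shown are the defaults):

// Allow up to 5 minutes of processing between poll() calls
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
// Cap how many records a single poll() returns, bounding per-loop work
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);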

4.3.2 The group id configuration

4.3.3 Applying partition assignment strategies
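A strategy is applied through consumer configuration, for example opting into cooperative rebalancing with the sticky assignor that ships with the client library:

import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;

props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        CooperativeStickyAssignor.class.getName());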

4.3.4 Static membership
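Static membership is enabled by giving each consumer instance a stable group.instance.id, so a restarted instance can rejoin under the same identity without triggering a rebalance. A sketch with a hypothetical id:

// A unique, stable id per instance, often derived from a hostname or pod name
props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "order-processor-1");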

4.3.5 Committing offsets
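With automatic commits disabled, the application commits offsets itself once records are safely processed. A minimal synchronous-commit sketch; process() stands in for hypothetical application logic:

props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        process(record);  // hypothetical application logic
    }
    consumer.commitSync();  // commit only after the whole batch is processed
}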

4.4 Exactly-once delivery in Kafka
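Exactly-once delivery builds on the idempotent producer plus transactions. A minimal sketch of the transactional producer API follows; the transactional.id and record contents are hypothetical. On the consuming side, setting isolation.level to read_committed hides records from open or aborted transactions.

props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "order-processor-tx-1");

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    producer.initTransactions();
    try {
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("orders", "order-123", "{\"amount\": 25.0}"));
        producer.commitTransaction();  // records become visible to read_committed consumers
    } catch (Exception e) {
        producer.abortTransaction();   // aborted records are filtered out downstream
    }
}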
