4 Kafka clients

 

This chapter covers

  • Producing records with the KafkaProducer
  • Understanding message delivery semantics
  • Consuming records with the KafkaConsumer
  • Learning about Kafka’s exactly-once streaming
  • Using the Admin API for programmatic topic management
  • Handling multiple event types in a single topic

This chapter is where the "rubber meets the road": we take what you've learned over the previous two chapters and apply it to start building event streaming applications. We'll start by working with the producer and consumer clients individually to gain a deep understanding of how each one works.

In their simplest form, the clients operate like this: producers send records to a broker in a produce request, and the broker stores them in a topic; consumers send the broker a fetch request, and the broker reads records from the topic to fulfill it. When we talk about the Kafka event streaming platform, it's common to mention producers and consumers together. After all, it's a safe assumption that you are producing data for someone else to consume. But it's very important to understand that producers and consumers are unaware of each other: there's no synchronization between the two clients.

Figure 4.1. Producers send batches of records to Kafka in a produce request
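
To make that decoupling concrete, here is a minimal sketch of the two sides, assuming a broker at localhost:9092 and a placeholder topic named shipping-events. Note that neither side references the other; each only talks to the broker.

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ProducerConsumerSketch {
    public static void main(String[] args) {
        // Producer side: sends records to the broker in produce requests
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("shipping-events", "order-123", "shipped"));
        }

        // Consumer side: issues fetch requests via poll(); it never
        // coordinates with the producer above
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "shipping-consumer");
        consumerProps.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("shipping-events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.printf("key=%s value=%s%n", r.key(), r.value()));
        }
    }
}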

The KafkaProducer has just one task: sending records to the broker. The records themselves contain all the information the broker needs to store them.
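
You can see this in the ProducerRecord itself. The following sketch (topic name, key, and value are placeholders) spells out the fields a record carries:

import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordSketch {
    public static void main(String[] args) {
        // A ProducerRecord carries everything the broker needs to store it:
        // the target topic, an optional partition, an optional timestamp,
        // an optional key, and the value
        ProducerRecord<String, String> record = new ProducerRecord<>(
            "shipping-events",          // topic the record is written to
            null,                       // partition: null defers to the partitioner
            System.currentTimeMillis(), // timestamp
            "order-123",                // key: drives partition assignment
            "shipped"                   // value: the event payload
        );
        System.out.println(record);
    }
}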

4.1 Producing records with the KafkaProducer

4.1.1 Producer configurations

4.1.2 Kafka delivery semantics

4.1.3 Partition assignment

4.1.4 Writing a custom partitioner

4.1.5 Specifying a custom partitioner

4.1.6 Timestamps

4.2 Consuming records with the KafkaConsumer

4.2.1 The poll interval

4.2.2 Group id

4.2.3 Static membership

4.2.4 Committing offsets

4.3 Exactly-once delivery in Kafka
