4 Creating Consumer Applications
This chapter covers
- Receiving messages from Kafka
- Principles of parallel message reception
- Common challenges in Kafka consumer handling
- Accessing Kafka via HTTP
- Utilizing data compression in Kafka
Gathered once more in their meeting room, Max Sellington, Rob Routine, and Eva Catalyst focus on the next stage of their project: developing the consumer application. With topics set for customer profile data and transactions, they now delve into the intricacies of data aggregation, determining how best to handle and merge the incoming streams.
MAX: Alright, let's dive into the consumer application for our project. We've got those two input topics—one for customer profile data and the other for transactions. Remember how we settled on using customer ID as the key for profile data and transaction ID for transactions? Now, we're looking at aggregating data by customer ID. So, what's the plan for the consumer service? How's it going to handle and mash up all that data?
EVA: Easy, it's just going to stash the data away in the relational database.
ROB: The consumer side isn't straightforward at all. How many instances of our aggregation service should we run? It will pull data, but how much should it receive in one request to avoid memory issues? And if an instance fails, how quickly does the cluster detect it?
EVA: Just like with the producers, there’s a lot of configuration required to make the application work properly. We need to examine all these properties.
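Rob's three questions each map to a consumer configuration property: the number of cooperating instances is governed by `group.id`, per-poll memory by `max.poll.records`, and failure detection by `session.timeout.ms` together with `heartbeat.interval.ms`. A minimal sketch of such a configuration is shown below; the broker address, group name, and values are illustrative placeholders, not recommendations.

```java
import java.util.Properties;

// Illustrative consumer configuration for the aggregation service.
// All property names are standard Kafka consumer configs; the values
// here are placeholders chosen to show what each setting controls.
public class ConsumerConfigSketch {
    public static Properties buildConsumerConfig() {
        Properties props = new Properties();
        // Broker address (placeholder).
        props.put("bootstrap.servers", "localhost:9092");
        // Every instance started with the same group.id joins one consumer
        // group, and the topic's partitions are split among them -- this is
        // how parallel consumption scales.
        props.put("group.id", "aggregation-service");
        // Upper bound on records returned by a single poll(), which caps
        // how much data the instance holds in memory at once.
        props.put("max.poll.records", "500");
        // If the broker receives no heartbeat within session.timeout.ms,
        // the instance is declared dead and its partitions are reassigned.
        props.put("session.timeout.ms", "10000");
        props.put("heartbeat.interval.ms", "3000");
        // Deserializers must mirror the serializers used by the producers.
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```

These properties would then be passed to a `KafkaConsumer` instance; the rest of the chapter examines what each of them does in detail.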