Chapter 11. Data streaming with Apache Kafka


This chapter covers

  • Understanding data streaming with Apache Kafka
  • Using data streaming to simplify an architecture
  • Incorporating data streaming into the Cayambe hybrid

In chapter 10, you put together the Cayambe hybrid, combining the slimmed-down monolith with your new microservices. This chapter simplifies access to administration data in the Cayambe hybrid by switching that access over to data streaming.

First you’ll learn about data streaming and how it can benefit developers and architects alike. Taking those lessons, you’ll develop a data-streaming solution for the Cayambe hybrid from the previous chapter, completing the journey from monolith to hybrid.

11.1. What can Apache Kafka do for you?

Before delving into Apache Kafka, the solution you’re going to use for recording and processing data streams, you need some background on data streaming. Otherwise, what Apache Kafka does and how it works will be completely foreign to you as an Enterprise Java developer.

11.1.1. Data streaming

Data streaming doesn’t just refer to the way Netflix gets its movies to play on all your devices. It also refers to a continuously generated stream of data from potentially thousands of sources; each piece of data, or record, is small in size and is stored in the sequence in which it was received. That may seem like a lot of buzzwords, but data streaming is still relatively new, and new ways to use it are always being conceived.
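To make the idea of small records arriving continuously and being stored in order a little more concrete, here’s a minimal sketch of a producer using the Apache Kafka Java client. The topic name (sensor-readings), the record key, and the broker address (localhost:9092) are assumptions for illustration only, not part of the Cayambe example you’ll build later in the chapter.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TemperatureProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed local broker address for this sketch
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record is small, and records with the same key land on the same
            // partition, where they're appended in the order they were received.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("sensor-readings", "sensor-1",
                        "reading-" + i));
            }
        }
    }
}

The key point the sketch illustrates is that the producer only appends records; it never updates or deletes them, which is what makes a stream an ordered, replayable sequence rather than a mutable table.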

11.2. Simplifying your monolith architecture with streaming

11.3. Deploying and using Kafka for data streaming

11.4. Additional exercises

Summary