This chapter covers:
- Developing an ideal Kafka Maturity Model
- Focusing on the value schemas can provide for your data as it changes
- Reviewing Avro and its use in data serialization
- Determining compatibility rules for schema changes over time
Having explored the various ways to use Apache Kafka, it is worth stepping back to consider how your view of Kafka changes the more you work with it.
As enterprises (or even tools) grow, they can sometimes be modeled with maturity levels. Martin Fowler provides a great explanation at martinfowler.com/bliki/MaturityModel.html [175]. Fowler also gives a clear walkthrough of the Richardson Maturity Model, which applies this kind of model to REST [176]. For further reference, the original talk writeup, "Justice Will Take Us Millions Of Intricate Moves: Act Three: The Maturity Heuristic" by Leonard Richardson, can be found at www.crummy.com/writing/speaking/2008-QCon/act3.html [177]. The following model reflects our own opinions on maturity levels specific to Kafka. For a comparison with a different experienced perspective, check out the Confluent whitepaper titled "Five Stages to Streaming Platform Adoption," which presents a five-stage streaming maturity model with different criteria for each stage [178].
Let’s look at our first level: of course, as programmers, we start with Level 0.