6 Defining Data Contracts
This chapter covers
- Event design principles
- Supporting data contracts in Kafka
- Type evolution and schema changes
- Common challenges in managing data contracts
The team gathers to discuss an important question: how should they define data contracts?
Max: Alright team, we haven't really talked about events yet. What exactly is an event, and how do we define it when systems communicate through Kafka? It's not like a service with a clear endpoint and parameters. So, how does a consumer know what data to expect?
Eva (nodding): Good question, Max. Let's break it down. An event is a record of something that has happened in the system. It's immutable, meaning it can't be changed once created, and it carries a timestamp that shows when it occurred. The context around the event gives it meaning and relevance. In Kafka, you define a schema for each event type. That schema acts as the contract between the producer and its consumers: it specifies the structure of the data being exchanged. For us as architects, this contract is essential because it ensures consistency and reliability across our systems.
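To make Eva's point concrete, here is a minimal sketch of what such a contract might look like as an Avro schema, built and used with the standard Apache Avro library for Java. The OrderPlaced event, its fields, and the com.example.orders namespace are hypothetical and chosen only for illustration:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

import java.time.Instant;

public class OrderPlacedSchemaExample {

    // Hypothetical contract for an OrderPlaced event. The schema names every
    // field a consumer can rely on, including the timestamp of when the
    // event occurred.
    private static final String ORDER_PLACED_SCHEMA = """
        {
          "type": "record",
          "name": "OrderPlaced",
          "namespace": "com.example.orders",
          "fields": [
            {"name": "orderId",    "type": "string"},
            {"name": "customerId", "type": "string"},
            {"name": "amount",     "type": "double"},
            {"name": "occurredAt", "type": {"type": "long", "logicalType": "timestamp-millis"}}
          ]
        }
        """;

    public static void main(String[] args) {
        Schema schema = new Schema.Parser().parse(ORDER_PLACED_SCHEMA);

        // An event is an immutable record of something that happened,
        // built against the agreed-upon schema.
        GenericRecord event = new GenericData.Record(schema);
        event.put("orderId", "order-42");
        event.put("customerId", "customer-7");
        event.put("amount", 19.99);
        event.put("occurredAt", Instant.now().toEpochMilli());

        System.out.println(event);
    }
}
```

Notice that the schema, not the producer's code, is what consumers program against; that is what makes it a contract.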
Max: Got it. So, are these schemas defined in XML?
Rob (laughing): XML? Nobody uses XML anymore, Max. That went out with dial-up internet. We use more modern formats like Avro, JSON Schema, and Protobuf. But if you're feeling nostalgic, you could always write a custom serializer for it.
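As a follow-up to Rob's point, here is a minimal sketch of how a producer might plug one of those formats in. It assumes Avro together with Confluent's KafkaAvroSerializer and Schema Registry; the broker address, registry URL, topic name, and OrderPlaced fields are all placeholders:

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Instant;
import java.util.Properties;

public class OrderPlacedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder addresses; point these at your own cluster and registry.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://localhost:8081");

        // The same hypothetical OrderPlaced contract as in the previous listing,
        // trimmed to two fields to keep the example short.
        Schema schema = new Schema.Parser().parse("""
            {"type": "record", "name": "OrderPlaced", "namespace": "com.example.orders",
             "fields": [
               {"name": "orderId",    "type": "string"},
               {"name": "occurredAt", "type": {"type": "long", "logicalType": "timestamp-millis"}}
             ]}
            """);

        String orderId = "order-42";
        GenericRecord event = new GenericData.Record(schema);
        event.put("orderId", orderId);
        event.put("occurredAt", Instant.now().toEpochMilli());

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // The Avro serializer registers the schema with the registry and
            // encodes the record against it, so only data that matches the
            // contract ever reaches the topic.
            producer.send(new ProducerRecord<>("orders", orderId, event));
        }
    }
}
```

The point of the sketch is that the serialization format is a pluggable choice in the producer configuration; swapping Avro for JSON Schema or Protobuf is largely a matter of swapping the serializer and the schema definition.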