Chapter 10. Serving layer
This chapter covers
- Tailoring batch views to the queries they serve
- A new answer to the data-normalization versus denormalization debate
- Advantages of databases that support batch writes and random reads, but no random writes
- Contrasting a Lambda Architecture solution with a fully incremental solution
At this point you’ve learned how to precompute arbitrary views of any dataset using batch computation. For those views to be useful, you must be able to access their contents with low latency, and as shown in figure 10.1, this is the role of the serving layer. The serving layer indexes the views and provides interfaces so that the precomputed data can be queried quickly.
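To make this concrete, here’s a minimal sketch of what a serving layer read interface might look like. The names (BatchView, PageviewsView) are illustrative, not from any particular serving layer database; the point is simply that the batch layer writes the view in bulk, and the serving layer only needs to index it for fast random reads.

```java
import java.util.Map;

// Illustrative read interface for a precomputed batch view.
interface BatchView<K, V> {
    // Low-latency random read of a precomputed value by key.
    V get(K key);
}

// Example: a pageviews-per-URL view, indexed by URL.
class PageviewsView implements BatchView<String, Long> {
    private final Map<String, Long> index;

    PageviewsView(Map<String, Long> precomputed) {
        // The batch layer produces the entire view in one bulk write;
        // the serving layer indexes it so individual keys can be read quickly.
        this.index = precomputed;
    }

    @Override
    public Long get(String url) {
        Long count = index.get(url);
        return count == null ? 0L : count;
    }
}
```

Note that the interface exposes reads only; as the chapter discusses, serving layer databases don’t need to support random writes at all.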
Figure 10.1. In the Lambda Architecture, the serving layer provides low-latency access to the results of calculations performed on the master dataset. The serving layer views are slightly out of date due to the time required for batch computation.
The serving layer is the last component of the batch section of the Lambda Architecture. It’s tightly tied to the batch layer, which is responsible for continually updating the serving layer views. These views will always be somewhat out of date due to the high-latency nature of batch computation, but this isn’t a concern, because the speed layer is responsible for any data not yet available in the serving layer.
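The following sketch shows one way a query could combine the slightly stale serving layer view with the speed layer’s realtime view, assuming a simple additive query like pageview counts. The class and parameter names are hypothetical; the merge logic for a real query depends on the view in question.

```java
import java.util.function.Function;

// Illustrative query that merges a batch view with a realtime view.
class PageviewsQuery {
    private final Function<String, Long> batchView;     // precomputed by the batch layer
    private final Function<String, Long> realtimeView;  // covers data since the last batch run

    PageviewsQuery(Function<String, Long> batchView,
                   Function<String, Long> realtimeView) {
        this.batchView = batchView;
        this.realtimeView = realtimeView;
    }

    long pageviews(String url) {
        // The serving layer answers for all data up to the last batch run;
        // the speed layer fills in everything since then.
        return batchView.apply(url) + realtimeView.apply(url);
    }
}
```

How the two results are merged is query-specific; counts simply add, while other views may require more involved logic, as later chapters on the speed layer show.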