Part 2: Data

 

Now that we have the fundamentals out of the way, this part focuses on optimizations related to data storage and access patterns.

In Chapter 3, we first discuss colocation strategies, ranging from geographical considerations to intra-node optimizations such as kernel-bypass networking; these strategies reduce latency by moving data closer to the compute that uses it.

Chapter 4 covers replication: consistency models and approaches such as single-leader and multi-leader replication, which reduce latency by maintaining multiple copies of the data.

In Chapter 5, we move on to partitioning strategies, covering both physical and logical approaches as well as request routing techniques, which reduce latency by reducing contention on data.

Finally, Chapter 6 concludes this part with comprehensive caching strategies, ranging from cache-aside to distributed caching, along with coherency and replacement policies. Like replication, these strategies reduce latency by maintaining multiple copies of data, but with different trade-offs.