7 Eliminating work
This chapter covers
- Eliminating work by taming algorithmic complexity
- Reducing serialization overhead
- Managing memory with low latency
- Mitigating OS overhead
- Replacing slow computations with precomputation
 
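To make the last item above concrete before we dive in: precomputation trades a one-time setup cost for cheap lookups on the hot path. The sketch below is a hypothetical example (not code from this book) that replaces repeated `math.sin` calls with a table built once at startup; the table size and nearest-entry lookup are illustrative choices.

```python
import math

# Build the table once, off the hot path. Larger tables trade memory
# for accuracy; 1024 entries is an arbitrary illustrative choice.
TABLE_SIZE = 1024
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle: float) -> float:
    """Approximate sin(angle) by looking up the nearest precomputed entry."""
    index = round(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[index]
```

On the hot path, `fast_sin` costs an index computation and a list lookup instead of a full `sin` evaluation, at the price of bounded approximation error.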
In the previous part of the book, we examined techniques for organizing data when designing an application for low latency, each aimed at ensuring that data access does not become a latency bottleneck:
- Colocation brings two components closer together.
- Replication maintains multiple (consistent) copies of the data.
- Partitioning reduces synchronization costs.
- Caching temporarily keeps a copy of the data.
 
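As a quick reminder of the last technique in that list, caching keeps a copy of data close by so that repeated lookups skip the expensive source. This is a minimal hypothetical sketch using Python's standard `functools.lru_cache`; the `load_profile` function and its return shape are invented for illustration.

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def load_profile(user_id: int) -> str:
    # Stand-in for a slow fetch, e.g., a remote database call.
    return f"user-{user_id}"

load_profile(42)  # first call pays the full cost of the fetch
load_profile(42)  # repeat call is served from the in-memory cache
```

The second call never reaches the slow fetch; `load_profile.cache_info()` reports one hit and one miss.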
In other words, we looked at how your data organization decisions affect latency and what you can do to mitigate their cost. In this third part of the book, we switch gears to explore low latency from the computational perspective, focusing on how you can structure your application logic when building for low latency.