10 Asynchronous processing
This chapter covers
- Comparing asynchronous and synchronous processing
- Understanding the event loop
- Hiding latency with async I/O and deferring work
- Handling errors in async systems
- Observing async systems
Welcome to the fourth and final part of the book!
Throughout the book, we’ve built a comprehensive understanding of latency optimization. In Part 1, we laid the foundations: the nature of latency, why it matters, and essential techniques for modeling and measuring it. Part 2 covered data-centric optimization strategies such as partitioning and caching, and Part 3 covered code-level techniques for reducing latency.
In this part of the book, we turn our attention to hiding latency. This approach becomes important when you’ve exhausted your optimization options or are constrained by your system architecture: perhaps you’ve hit the physical limits of your hardware, or you depend on third-party systems you cannot change. In such scenarios, latency-hiding techniques, namely asynchronous processing and predictive methods, are how you continue to improve the latency of your application.
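To make the idea concrete before we dive into the details, here is a minimal sketch of latency hiding with asynchronous I/O, written in Python with asyncio as one possible environment. The fetch() coroutine is hypothetical and simply simulates an I/O wait; the point is that overlapping independent waits makes the total latency roughly that of the slowest call rather than the sum of all of them.

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Hypothetical I/O-bound call: simulate a network round trip
    # without blocking the event loop.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def sequential() -> None:
    # Waiting for each call in turn: total latency is the sum (~0.3 s).
    start = time.perf_counter()
    await fetch("a", 0.1)
    await fetch("b", 0.1)
    await fetch("c", 0.1)
    print(f"sequential: {time.perf_counter() - start:.2f}s")

async def concurrent() -> None:
    # Overlapping the waits hides latency: total latency is roughly
    # the slowest call (~0.1 s), not the sum.
    start = time.perf_counter()
    await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1))
    print(f"concurrent: {time.perf_counter() - start:.2f}s")

asyncio.run(sequential())
asyncio.run(concurrent())
```

The work done by the remote systems hasn’t changed; we’ve only rearranged when we wait for it. That is the essence of latency hiding, and the rest of this chapter examines how the event loop makes it possible and what it costs in terms of error handling and observability.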