10 Asynchronous processing

This chapter covers

  • Comparing asynchronous and synchronous processing
  • Understanding the event loop
  • Hiding latency with async I/O and deferring work
  • Handling errors in async systems
  • Observing async systems

Welcome to the fourth and final part of the book!

Throughout the book, we’ve built a comprehensive understanding of latency optimization. In Part 1, we laid the foundations: what latency is, why it matters, and how to model and measure it. In Part 2, we covered data-centric optimization strategies such as partitioning and caching, and in Part 3, we turned to code-level techniques for reducing latency.

In this part of the book, we turn our attention to hiding latency rather than reducing it. This approach becomes critical when you’ve exhausted latency optimization methods or run into constraints in your system architecture. For example, you may have reached the physical limits of your hardware, or you may depend on third-party systems that you cannot change. In such scenarios, latency-hiding techniques, namely asynchronous processing and predictive methods, offer a way to keep improving the latency of your application.
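
Before we get into the mechanics, here’s a minimal sketch of the core idea using Python’s asyncio. The fetch coroutine and its 100 ms delays are hypothetical stand-ins for real network calls, not code from any particular system:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for a network call: the await suspends this coroutine,
    # letting the event loop run other work while the "I/O" is in flight.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> None:
    start = time.perf_counter()
    # Issue both requests concurrently. Total wall time is roughly the
    # longest delay, not the sum, because the waits overlap.
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    print(results, f"elapsed: {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```

A synchronous version of the same two calls would block for the sum of the delays (about 0.2 s); the asynchronous version finishes in about 0.1 s because one wait hides behind the other. The rest of this chapter is about engineering that effect deliberately and safely.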

10.1 Fundamentals

10.1.1 Asynchronous vs. synchronous processing

10.1.2 The event loop

10.1.3 Challenges

10.2 Asynchronous I/O

10.2.1 I/O multiplexing

10.2.2 Request batching

10.2.3 Request hedging

10.2.4 Buffered I/O

10.2.5 Memory mapping

10.3 Deferring work

10.3.1 Task scheduling

10.3.2 Priority queues

10.3.3 Work stealing

10.4 Resource management

10.4.1 Thread pools

10.4.2 Memory pools

10.4.3 Connection pools

10.5 Managing concurrency with backpressure

10.5.1 Controlling the producer

10.5.2 Buffering

10.5.3 Dropping and rate limiting

10.6 Error handling

10.6.1 Partial errors

10.6.2 Recovery

10.6.3 Timeouts and cancellation

10.7 Observability

10.7.1 Tracing

10.7.2 Metrics

10.8 Summary