Chapter 14. Queuing and stream processing
This chapter covers
- Single-consumer versus multi-consumer queues
- One-at-a-time stream processing
- Limitations of queues-and-workers approaches to stream processing
You’ve learned about two types of architectures for the speed layer: synchronous and asynchronous. In a synchronous architecture, the application sends update requests directly to the database and blocks until it receives a response. Such applications must coordinate their own update tasks, but there’s not much to add to the discussion from an architectural standpoint. In an asynchronous architecture, by contrast, the speed layer databases are updated independently of the application that created the data. How you choose to persist and process those update requests directly affects the scalability and fault tolerance of your entire system.
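To make the contrast concrete, here's a minimal sketch of both styles for a simple pageview-count update. The Database interface, its increment method, and the "pageviews" table are hypothetical stand-ins for whatever speed-layer database client you're using; the asynchronous path buffers events in an in-memory queue that a separate worker drains, which is the pattern the rest of this chapter generalizes.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SpeedLayerUpdates {

  // Placeholder for whatever speed-layer database client is in use.
  interface Database {
    void increment(String table, String key);
  }

  // Synchronous: the application issues the update itself and blocks
  // until the database acknowledges the write.
  static void recordPageviewSync(Database db, String url) {
    db.increment("pageviews", url);  // blocks until the response arrives
  }

  // Asynchronous: the application only enqueues the update request;
  // a separate worker applies updates to the database on its own schedule.
  static final BlockingQueue<String> PENDING = new LinkedBlockingQueue<>();

  static void recordPageviewAsync(String url) throws InterruptedException {
    PENDING.put(url);  // returns as soon as the event is buffered
  }

  static void updateWorker(Database db) throws InterruptedException {
    while (true) {
      String url = PENDING.take();  // processed independently of the producer
      db.increment("pageviews", url);
    }
  }
}
```

Note that an in-memory queue like this loses pending updates if the process dies; durable, distributed queues that avoid that problem are exactly what this chapter turns to next.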
This chapter covers the basics of queuing and stream processing, the two foundations of asynchronous architectures. You saw earlier that the key to batch processing is the ability to withstand failures and retry computations when necessary. The same tenets carry over to the speed layer: fault tolerance and retries are of the utmost importance in stream-processing systems. As usual, the incremental nature of the speed layer complicates the story, and there are many more trade-offs to keep in mind as you design applications.