This chapter covers
- Thinking asynchronously: an overview of async programming
- Exploring Rust’s async runtimes
- Handling async task results with futures
- Mixing sync and async
- Using the async & .await features
- Managing concurrency & parallelism with async
- Implementing an async observer
- When not to use async
- Tracing & debugging async code
- Dealing with async when testing
Concurrency is an important concept in computing, and it's one of the greatest force multipliers of computers. Concurrency allows us to process inputs and outputs, such as data, network connections, or peripherals, faster than we could without it. And concurrency isn't only about raw speed; it also affects latency, overhead, and system complexity. We can run thousands or millions of tasks concurrently, as illustrated in figure 11.1, because concurrent tasks tend to be relatively lightweight. We can create, destroy, and manage many concurrent tasks with very little overhead.
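As a minimal sketch of just how lightweight these tasks can be (assuming the Tokio runtime with its macros and multi-threaded runtime features enabled), the following program spawns ten thousand concurrent tasks; each one is little more than a small allocation plus some runtime bookkeeping, not an OS thread:

```rust
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let mut tasks = JoinSet::new();

    // Spawn 10,000 concurrent tasks with very little per-task overhead.
    for i in 0..10_000u64 {
        tasks.spawn(async move { i * 2 });
    }

    // Collect the results as the tasks finish.
    let mut sum = 0;
    while let Some(result) = tasks.join_next().await {
        sum += result.expect("task panicked");
    }

    println!("sum of doubled values: {sum}");
}
```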
Asynchronous programming uses concurrency to take advantage of idle processing time between tasks. Some kinds of tasks, such as I/O, are much slower than ordinary CPU instructions, so after a slow task is started we can set it aside and work on other tasks while waiting for it to complete.
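For instance, the following sketch (again assuming the Tokio runtime; `slow_io` and `other_work` are hypothetical placeholders) overlaps a simulated slow I/O operation with other work instead of blocking on it:

```rust
use std::time::Duration;
use tokio::time::sleep;

// Simulates a slow operation such as a network request.
async fn slow_io() -> &'static str {
    sleep(Duration::from_millis(500)).await;
    "response"
}

// Simulates other work we can do while the slow task is pending.
async fn other_work() -> u32 {
    42
}

#[tokio::main]
async fn main() {
    // Both futures are driven concurrently: while `slow_io` is waiting,
    // the runtime is free to make progress on `other_work`.
    let (response, answer) = tokio::join!(slow_io(), other_work());
    println!("{response} {answer}");
}
```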