10 Concurrency patterns

 

This chapter covers

  • Decomposing programs by task
  • Decomposing programs by data
  • Recognizing common concurrency patterns

When we have a job to do and many helping hands, we need to decide how to divide the work so that it’s completed efficiently. A significant part of developing a concurrent solution is identifying mostly independent computations: tasks that do not affect one another when they are executed at the same time. This process of breaking down our program into separate concurrent tasks is known as decomposition.

In this chapter, we shall look at techniques and ideas for performing this decomposition. Later, we shall discuss common implementation patterns used in various concurrent scenarios.

10.1 Decomposing programs

How do we convert a program or an algorithm so that it can run more efficiently by using concurrent programming? Decomposition is the process of subdividing a program into many tasks and recognizing which of these tasks can be executed in a concurrent fashion. Let’s pick a real-life example to see how decomposition works.

Imagine we are in a car, driving along with a group of friends. Suddenly, we hear weird noises coming from the front of the car. We stop to check and find that we have a flat tire. Not wanting to be late, we decide to attempt to replace the wheel with the spare instead of waiting for the tow truck. Here are the steps we need to perform:

10.1.1 Task decomposition

10.1.2 Data decomposition

10.1.3 Thinking about granularity

10.2 Concurrency implementation patterns

10.2.1 Loop-level parallelism

10.2.2 Fork/Join

10.2.3 Worker pool

10.2.4 Pipelining

10.2.5 Pipelining properties

10.3 Summary

10.4 Exercises
