10 Concurrency patterns

This chapter covers

  • Decomposing programs by task
  • Decomposing programs by data
  • Recognizing common concurrency patterns

When we have a job to do and many helping hands, we need to decide how to divide the work so that it's completed efficiently. A significant part of developing a concurrent solution is identifying mostly independent computations—tasks that do not affect each other if they are executed at the same time. This process of breaking down our program into separate concurrent tasks is known as decomposition.

In this chapter, we'll look at techniques and ideas for performing this decomposition. Later, we'll discuss common implementation patterns used in various concurrent scenarios.

10.1 Decomposing programs

How can we convert a program or an algorithm so that it can run more efficiently using concurrent programming? Decomposition is the process of subdividing a program into many tasks and recognizing which of these tasks can be executed concurrently. Let’s pick a real-life example to see how decomposition works.

Imagine we are in a car, driving along with a group of friends. Suddenly, we hear weird noises coming from the front of the car. We stop to check and find that we have a flat tire. Not wanting to be late, we decide to replace the wheel with the spare instead of waiting for a tow truck. Replacing the wheel involves several distinct steps, and some of those steps do not depend on each other.
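Steps that don't affect each other can be performed at the same time by different helpers. As a minimal sketch of this idea in Go (the helper function and the example step names here are my own, not taken from the text), we can hand each independent step to its own goroutine and wait for all of them to finish:

```go
package main

import (
	"fmt"
	"sync"
)

// runConcurrently executes independent tasks at the same time and
// waits for all of them to complete before returning.
func runConcurrently(tasks ...func()) {
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1)
		go func(t func()) {
			defer wg.Done()
			t()
		}(task)
	}
	wg.Wait()
}

func main() {
	// Hypothetical independent steps: neither one affects the other,
	// so they are safe to run concurrently.
	runConcurrently(
		func() { fmt.Println("fetch the spare wheel and tools") },
		func() { fmt.Println("set up the warning triangle") },
	)
}
```

Steps that do depend on each other (you can't mount the spare before removing the flat) would still have to run in order.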

10.1.1 Task decomposition

10.1.2 Data decomposition

10.1.3 Thinking about granularity

10.2 Concurrency implementation patterns

10.2.1 Loop-level parallelism

10.2.2 The fork/join pattern
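In the fork/join pattern, we fork work off to run concurrently and later join to collect its result. A minimal sketch in Go (the `fork` helper is my own construction, not a standard API): `fork` starts the work in a goroutine and returns a join function that blocks until the result is ready.

```go
package main

import (
	"fmt"
	"sync"
)

// fork starts work in its own goroutine and returns a join function
// that blocks until the result is available.
func fork(work func() int) (join func() int) {
	var result int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		result = work()
	}()
	return func() int {
		wg.Wait()
		return result
	}
}

func sum(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6}
	mid := len(nums) / 2
	// Fork the two halves, then join the partial results.
	left := fork(func() int { return sum(nums[:mid]) })
	right := fork(func() int { return sum(nums[mid:]) })
	fmt.Println(left() + right()) // 21
}
```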

10.2.3 Using worker pools
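A worker pool runs a fixed number of goroutines that all pull jobs from a shared channel, limiting concurrency regardless of how many jobs arrive. A minimal sketch in Go (the function name and the squaring job are my own illustration): jobs go in on one channel, results come out on another, and the output channel is closed once every worker has finished.

```go
package main

import (
	"fmt"
	"sync"
)

// squareWorkers starts a fixed pool of goroutines that all consume
// jobs from one channel and send results on another.
func squareWorkers(jobs []int, workers int) []int {
	in := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				out <- j * j
			}
		}()
	}
	// Feed the jobs, then close the input so the workers terminate.
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
	}()
	// Close the output once every worker is done.
	go func() {
		wg.Wait()
		close(out)
	}()
	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	// Order may vary, since workers finish jobs at different times.
	fmt.Println(squareWorkers([]int{1, 2, 3, 4}, 2))
}
```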
