Concurrency in .NET

This is an excerpt from Manning's book Concurrency in .NET.

In the context of task parallelism, be aware of variables captured in closures: because a closure captures a reference to a variable rather than its value, you can end up sharing state in ways that aren't obvious. Closures are a powerful technique that you can use to implement patterns that increase the performance of your program.
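A minimal sketch of this pitfall, using the classic for-loop case (the variable names and task count are illustrative, not from the book):

```csharp
using System;
using System.Threading.Tasks;

class ClosureCaptureDemo
{
    static async Task Main()
    {
        // Pitfall: each lambda captures the loop variable i itself, not its
        // value, so every task shares the same storage location. By the time
        // the tasks run, the loop has usually finished and i is 3.
        var buggy = new Task[3];
        for (int i = 0; i < 3; i++)
            buggy[i] = Task.Run(() => Console.WriteLine($"buggy: {i}")); // often prints 3, 3, 3
        await Task.WhenAll(buggy);

        // Fix: copy the variable inside the loop body so each closure
        // captures its own per-iteration value.
        var fixedTasks = new Task[3];
        for (int i = 0; i < 3; i++)
        {
            int copy = i;
            fixedTasks[i] = Task.Run(() => Console.WriteLine($"fixed: {copy}")); // prints 0, 1, 2 in some order
        }
        await Task.WhenAll(fixedTasks);
    }
}
```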

7.1 A short introduction to task parallelism

Task parallelism refers to the process of running a set of independent tasks in parallel across several processors. This paradigm partitions a computation into a set of smaller tasks and executes those smaller tasks on multiple threads. The execution time is reduced by simultaneously processing multiple functions.

In general, parallel jobs begin from the same point, with the same data, and can either terminate in a fire-and-forget fashion or complete altogether in a task-group continuation. Any time a computer program simultaneously evaluates different and autonomous expressions using the same starting data, you have task parallelism. The core of this concept is based on small units of computations called futures. Figure 7.1 shows the comparison between data parallelism and task parallelism.
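In .NET, the Task&lt;T&gt; type plays the role of a future: a handle to a value that will be computed later. A minimal sketch of several autonomous computations starting from the same data and completing together in a task-group continuation (the functions here are placeholders, not examples from the book):

```csharp
using System;
using System.Threading.Tasks;

class FuturesDemo
{
    // Placeholder independent computations over the same starting data.
    static int Square(int x) => x * x;
    static int Cube(int x) => x * x * x;
    static int Negate(int x) => -x;

    static async Task Main()
    {
        int input = 7;

        // Each Task<int> is a future: work starts immediately, the result
        // becomes available later.
        Task<int> square = Task.Run(() => Square(input));
        Task<int> cube   = Task.Run(() => Cube(input));
        Task<int> negate = Task.Run(() => Negate(input));

        // Task-group continuation: all three futures complete altogether.
        int[] results = await Task.WhenAll(square, cube, negate);
        Console.WriteLine(string.Join(", ", results)); // 49, 343, -7
    }
}
```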

Figure 7.1 Data parallelism is the simultaneous execution of the same function across the elements of a data set. Task parallelism is the simultaneous execution of multiple and different functions across the same or different data sets.

Task parallelism isn’t data parallelism

Chapter 4 explains the differences between task parallelism and data parallelism. To refresh your memory, these paradigms sit at opposite ends of the spectrum. Data parallelism occurs when a single operation is applied to many inputs. Task parallelism occurs when multiple, diverse operations each run against their own input. It's used, for example, to query and call multiple Web APIs at one time, or to store data across different database servers, as the sketch below contrasts. In short, task parallelism parallelizes functions; data parallelism parallelizes data.
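A minimal sketch of the two paradigms side by side; the URLs are placeholder endpoints, not from the book:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ParallelismStyles
{
    static async Task Main()
    {
        // Data parallelism: one operation (x => x * x) applied to many inputs.
        int[] numbers = Enumerable.Range(1, 1_000).ToArray();
        int[] squares = numbers.AsParallel().AsOrdered()
                               .Select(x => x * x)
                               .ToArray();
        Console.WriteLine(squares[999]); // 1000000

        // Task parallelism: different operations (distinct API calls)
        // running at the same time against their own inputs.
        using var http = new HttpClient();
        Task<string> weather = http.GetStringAsync("https://example.com/weather"); // placeholder URL
        Task<string> stocks  = http.GetStringAsync("https://example.com/stocks");  // placeholder URL
        string[] responses = await Task.WhenAll(weather, stocks);
        Console.WriteLine(responses.Length); // 2
    }
}
```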

Task parallelism achieves its best performance when the number of running tasks is tuned to the amount of parallelism available on your system, which corresponds to the number of available cores and, possibly, their current load.
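One way to express this tuning in .NET is to cap the degree of parallelism at the core count reported by Environment.ProcessorCount. A minimal sketch, with placeholder work items:

```csharp
using System;
using System.Threading.Tasks;

class TuningParallelism
{
    static void Main()
    {
        var options = new ParallelOptions
        {
            // Cap the number of tasks running at once at the number of
            // logical cores available to the process.
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        // Parallel.Invoke runs independent actions as parallel tasks.
        Parallel.Invoke(options,
            () => Console.WriteLine("load customers"),   // placeholder work
            () => Console.WriteLine("load orders"),      // placeholder work
            () => Console.WriteLine("load inventory"));  // placeholder work
    }
}
```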
