7 Task-based functional parallelism


This chapter covers

  • Task parallelism and declarative programming semantics
  • Composing parallel operations with functional combinators
  • Maximizing resource utilization with the Task Parallel Library
  • Implementing a parallel functional pipeline pattern

The task parallelism paradigm splits program execution into parts and runs each part in parallel, reducing the total runtime. The paradigm focuses on distributing tasks across different processors to maximize processor utilization and improve performance. Traditionally, to run a program in parallel, the code is separated into distinct areas of functionality that are then computed by different threads. In these scenarios, locking primitives are used to synchronize access to shared resources when multiple threads are involved. The purpose of locks is to avoid race conditions and memory corruption by enforcing mutually exclusive access. Locks persist largely as a design legacy: a thread must wait for another thread to release a shared resource before it can continue its own work.
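As a minimal sketch of the idea, the snippet below splits a program into three independent operations and runs them in parallel with the TPL's Parallel.Invoke (covered in section 7.2.1). The worker methods are hypothetical placeholders for CPU-bound parts of a program; the point is only that independent units of work can be scheduled across the available processors without explicit thread or lock management.

```csharp
using System;
using System.Threading.Tasks;

class TaskParallelismSketch
{
    // Hypothetical workloads standing in for independent parts of a program.
    static void ComputeStatistics() { /* CPU-bound work */ }
    static void CompressLogs()      { /* CPU-bound work */ }
    static void RenderThumbnails()  { /* CPU-bound work */ }

    static void Main()
    {
        // Parallel.Invoke runs the given operations concurrently,
        // scheduling them over the available processors, and returns
        // only when all of them have completed.
        Parallel.Invoke(
            ComputeStatistics,
            CompressLogs,
            RenderThumbnails);

        Console.WriteLine("All parallel parts completed");
    }
}
```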

7.1 A short introduction to task parallelism

7.1.1 Why task parallelism and functional programming?

7.1.2 Task parallelism support in .NET

7.2 The .NET Task Parallel Library

7.2.1 Running operations in parallel with TPL Parallel.Invoke

7.3 The problem of void in C#

7.3.1 The solution for void in C#: the unit type

7.4 Continuation-passing style: a functional control flow

7.4.1 Why exploit CPS?

7.4.2 Waiting for a task to complete: the continuation model

7.5 Strategies for composing task operations

7.5.1 Using mathematical patterns for better composition

7.5.2 Guidelines for using tasks

7.6 The parallel functional Pipeline pattern

Summary
