7 Purely functional parallelism

This chapter covers

  • Designing a purely functional library
  • Choosing appropriate data types and functions to model the domain
  • Reasoning about an API in terms of an algebra to discover types
  • Defining laws to govern API behavior
  • Generalizing combinators

Because modern computers have multiple cores per CPU and often multiple CPUs, it’s more important than ever to design programs in such a way that they can take advantage of this parallel processing power. But the interaction of parallel processes is complex, and the traditional mechanism for communication among execution threads—shared mutable memory—is notoriously difficult to reason about. This can all too easily result in programs that have race conditions and deadlocks, aren’t readily testable, and don’t scale well.

In this chapter, we build a purely functional library to create parallel and asynchronous computations. We’ll rein in the complexity inherent in parallel programs by describing them using only pure functions. This will let us use the substitution model to simplify our reasoning and hopefully make working with concurrent computations both easy and enjoyable.
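To give a feel for where this is headed, here is a minimal sketch of the kind of API such a library might expose. The representation chosen here (a `Par[A]` that is a function from a thread pool to a `java.util.concurrent.Future`) and the combinator names `unit`, `map2`, `fork`, and `run` are assumptions for illustration, not necessarily the definitions the chapter arrives at:

```scala
import java.util.concurrent.{Callable, ExecutorService, Executors, Future, TimeUnit}

object Par {
  // Hypothetical representation: a parallel computation is a description
  // that, given a thread pool, yields a Future holding the eventual result.
  type Par[A] = ExecutorService => Future[A]

  // A Future wrapper for an already-computed value.
  private case class UnitFuture[A](get: A) extends Future[A] {
    def isDone = true
    def get(timeout: Long, units: TimeUnit): A = get
    def isCancelled = false
    def cancel(evenIfRunning: Boolean): Boolean = false
  }

  // Lift a constant into a computation; no parallelism is introduced here.
  def unit[A](a: A): Par[A] = es => UnitFuture(a)

  // Combine the results of two computations with a binary function.
  def map2[A, B, C](pa: Par[A], pb: Par[B])(f: (A, B) => C): Par[C] =
    es => {
      val af = pa(es)
      val bf = pb(es)
      UnitFuture(f(af.get, bf.get))
    }

  // Explicitly mark a computation to be run on a separate logical thread.
  def fork[A](a: => Par[A]): Par[A] =
    es => es.submit(new Callable[A] { def call: A = a(es).get })

  // Actually perform the computation, given a thread pool.
  def run[A](es: ExecutorService)(p: Par[A]): Future[A] = p(es)
}
```

Note that building a `Par` value performs no computation; it is a pure description, and nothing runs until `run` is given an `ExecutorService`. A small usage example:

```scala
val es = Executors.newFixedThreadPool(2)
val sum = Par.map2(Par.fork(Par.unit(1 + 2)), Par.fork(Par.unit(3 + 4)))(_ + _)
println(Par.run(es)(sum).get) // 10
es.shutdown()
```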

7.1 Choosing data types and functions

7.1.1 A data type for parallel computations

7.1.2 Combining parallel computations to ensure concurrency

7.1.3 Marking computations to be forked explicitly

7.2 Picking a representation

7.3 Refining the API with the end user in mind

7.4 Reasoning about the API in terms of algebraic equations

7.4.1 The law of mapping

7.4.2 The law of forking

7.4.3 Using actors for a non-blocking implementation

7.5 Refining combinators to their most general form

Summary