16 Concurrency

 

This chapter covers

  • Running computations concurrently
  • Synchronizing threads with mutable variables and channels
  • Using software transactional memory

Almost every server or desktop application nowadays runs concurrently over many CPU cores. What was once a response to the limits on increasing single-processor speed has become an opportunity to organize our applications in new ways. An application becomes a collection of tasks, each running in a dedicated thread. For multithreading, we can use either operating system threads or library threads; the latter are lightweight, so we can run many of them. Because these threads work concurrently, we also need to synchronize them.
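As a small taste of library threads, here is a minimal sketch using forkIO from Control.Concurrent in base. The thread count and the closing threadDelay are illustrative choices: the runtime doesn't wait for forked threads, so we pause the main thread briefly to let them finish.

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_)

main :: IO ()
main = do
  -- Fork ten lightweight threads; each one prints its number.
  forM_ [1 .. 10 :: Int] $ \n ->
    forkIO $ putStrLn ("hello from thread " ++ show n)
  -- Pause briefly so the forked threads get a chance to run
  -- before main exits and the whole program terminates.
  threadDelay 100000

Output lines from different threads may interleave, which is exactly the kind of issue the synchronization tools discussed later in this chapter help us control.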

In this chapter, we’ll discuss Haskell approaches to developing concurrent applications around threads. We’ll start with the mechanisms Haskell provides for running computations concurrently, looking at both the low-level facilities available to Haskell developers from the base package and the more sophisticated high-level tools provided by other libraries, such as async. Once we can run threads, we need them to communicate with each other; we’ll discuss how to do that in Haskell in the second section of the chapter.
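As a preview of the high-level side, the following sketch uses the async package (covered in section 16.1.3) to run two computations at the same time and collect both results. The two delayed actions here are stand-ins for real work.

import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (concurrently)

main :: IO ()
main = do
  -- concurrently runs both actions in separate threads, waits for
  -- both results, and rethrows an exception if either action fails.
  (a, b) <- concurrently
              (threadDelay 100000 >> pure "first")
              (threadDelay 100000 >> pure "second")
  print (a, b)

Both actions sleep for a tenth of a second, yet the pair of results arrives after roughly one tenth of a second rather than two, because the actions run concurrently. Unlike raw forkIO, the async combinators also take care of waiting for results and propagating exceptions for us.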

Concurrent programming is generally considered a hard problem, no matter which language or library we use. It’s easy to make mistakes, although high-level approaches are usually better at keeping those mistakes under control. We’ll discuss best practices and see how to avoid common pitfalls.

16.1 Running computations concurrently

16.1.1 An implementation of concurrency in GHC

16.1.2 Low-level concurrency with threads

16.1.3 High-level concurrency with the async package

16.2 Synchronization and communication

16.2.1 Synchronized mutable variables and channels

16.2.2 Software transactional memory (STM)

Summary