Chapter 4. The Spark API in depth
This chapter covers
- Working with key-value pairs
- Data partitioning and shuffling
- Grouping, sorting, and joining data
- Using accumulators and broadcast variables
The previous two chapters explained RDDs and how to manipulate them with basic actions and transformations. You’ve seen how to run Spark programs from the Spark REPL and how to submit standalone applications to Spark.
In this chapter, you’ll delve further into the Spark Core API and become acquainted with a large number of Spark API functions. But don’t faint just yet! We’ll be gentle, go slowly, and take you safely through these complicated but necessary topics.
You’ll learn how to use RDDs of key-value pairs, called pair RDDs. You’ll see how Spark partitions data and how you can change and take advantage of RDD partitioning. Closely related to partitioning is shuffling, an expensive operation, so you’ll also focus on avoiding unnecessary shuffling of data. You’ll learn how to group, sort, and join data, and how to use accumulators and broadcast variables to share data between Spark’s driver and its executors while a job is running. Finally, you’ll get into more advanced aspects of Spark’s inner workings, including RDD dependencies. Roll up your sleeves!
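As a small taste of what’s ahead, here’s a minimal sketch of a pair RDD and a broadcast variable as they might appear in the Spark shell (the data, variable names, and tax rate are made up for illustration; sc is the SparkContext the shell provides):

// A pair RDD is an RDD whose elements are two-element (key, value) tuples;
// here the keys are hypothetical customer IDs and the values purchase amounts
val purchases = sc.parallelize(List((15, 120.50), (42, 75.00), (15, 33.25)))

// Pair RDDs gain extra transformations, such as reduceByKey,
// which sums the amounts per customer ID (and triggers a shuffle)
val totalsPerCustomer = purchases.reduceByKey(_ + _)

// A broadcast variable ships read-only data to all executors once,
// instead of sending it with every task
val taxRate = sc.broadcast(0.08)
val totalsWithTax = totalsPerCustomer.mapValues(_ * (1 + taxRate.value))

Don’t worry if these lines aren’t entirely clear yet; each of these concepts gets a full treatment in the sections that follow.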