3 Submitting and scaling your first PySpark program

 

This chapter covers

  • Summarizing data using groupby and a simple aggregate function
  • Ordering results for display
  • Writing data from a data frame
  • Using spark-submit to launch your program in batch mode
  • Simplifying your PySpark code using method chaining
  • Scaling your program to multiple files at once

Chapter 2 dealt with all the data preparation work for our word frequency program. We read the input data, tokenized each word, and cleaned our records to keep only lowercase words. If we bring out our outline, we have only steps 4 and 5 left to complete:

  1. [DONE] Read: Read the input data (we’re assuming a plain text file).
  2. [DONE] Token: Tokenize each word.
  3. [DONE] Clean: Remove any punctuation and/or tokens that aren’t words. Lowercase each word.
  4. Count: Count the frequency of each word present in the text.
  5. Answer: Return the top 10 (or 20, 50, 100).

After tackling those last two steps, we look at packaging our code into a single file so we can submit it to Spark without having to launch a REPL. We also take a look at our completed program and simplify it by removing intermediate variables. We finish by scaling our program to accommodate more data sources.

3.1 Grouping records: Counting word frequencies
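
As a rough sketch of this step, assuming the cleaned data frame from chapter 2 is called words_nonull and contains a single word column (both names are placeholders for illustration), grouping and counting looks like this:

  import pyspark.sql.functions as F

  # Group the records by word, then count the rows in each group.
  # count() on grouped data yields a data frame with a `count` column.
  groups = words_nonull.groupby(F.col("word"))
  results = groups.count()
  results.show()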

3.2 Ordering the results on the screen using orderBy
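
A minimal sketch of ordering those counts for display, reusing the results data frame from the previous sketch:

  # Sort by descending count and display the 10 most frequent words.
  results.orderBy(F.col("count").desc()).show(10)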

3.3 Writing data from a data frame
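
Writing the results to disk might look like the sketch below. The output path is a placeholder, and coalesce(1) is only there so we get a single CSV file rather than one file per partition:

  # Write the results as CSV; coalescing to one partition gives one output
  # file, which is convenient for small results but wasteful for large ones.
  results.coalesce(1).write.csv("./results_single_partition.csv")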

3.4 Putting it all together: Counting
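
Put end to end, a complete (non-chained) version of the program might look like the following sketch. The input path, application name, and column aliases are assumptions made for illustration:

  from pyspark.sql import SparkSession
  import pyspark.sql.functions as F

  spark = SparkSession.builder.appName("Counting word occurrences").getOrCreate()

  # Step 1: read the input data (placeholder path).
  book = spark.read.text("./data/gutenberg_books/1342-0.txt")

  # Steps 2 and 3: tokenize, lowercase, and clean (recap of chapter 2).
  lines = book.select(F.split(F.col("value"), " ").alias("line"))
  words = lines.select(F.explode(F.col("line")).alias("word"))
  words_lower = words.select(F.lower(F.col("word")).alias("word_lower"))
  words_clean = words_lower.select(
      F.regexp_extract(F.col("word_lower"), "[a-z']*", 0).alias("word")
  )
  words_nonull = words_clean.where(F.col("word") != "")

  # Step 4: count the frequency of each word.
  results = words_nonull.groupby("word").count()

  # Step 5: show the top 10 and write the full results to disk.
  results.orderBy(F.col("count").desc()).show(10)
  results.coalesce(1).write.csv("./results_single_partition.csv")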

3.4.1 Simplifying your dependencies with PySpark’s import conventions
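
The import convention in question is the common practice of importing the functions module under a short prefix rather than importing each function by name, which keeps the namespace tidy and makes it obvious where each function comes from (using the words placeholder data frame from the sketch above):

  # Qualify column functions under the F prefix instead of importing
  # lower, col, split, etc. individually.
  import pyspark.sql.functions as F

  words.select(F.lower(F.col("word")).alias("word_lower"))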

3.4.2 Simplifying our program via method chaining
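
A sketch of the chained version, collapsing the intermediate variables from the listing above into a single expression (same placeholder path and aliases):

  results = (
      spark.read.text("./data/gutenberg_books/1342-0.txt")
      .select(F.split(F.col("value"), " ").alias("line"))
      .select(F.explode(F.col("line")).alias("word"))
      .select(F.lower(F.col("word")).alias("word"))
      .select(F.regexp_extract(F.col("word"), "[a-z']*", 0).alias("word"))
      .where(F.col("word") != "")
      .groupby("word")
      .count()
  )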

3.5 Using spark-submit to launch your program in batch mode
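
Once the program lives in its own file (the script name below is a placeholder), it can be launched in batch mode from the shell with spark-submit; the --master option with local[*] simply runs it locally on all available cores:

  $ spark-submit --master "local[*]" ./word_count_submit.py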

3.6 What didn’t happen in this chapter

3.7 Scaling up our word frequency program
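
Scaling the program to many input files mostly comes down to the read step: spark.read.text accepts glob patterns, so a sketch (with a placeholder directory) is as simple as changing the path, and the rest of the program stays unchanged:

  # Read every .txt file in the directory into a single data frame.
  book = spark.read.text("./data/gutenberg_books/*.txt")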

Summary

Additional Exercises

Exercise 3.3

Exercise 3.4