This chapter covers
- Drawing a parallel between PySpark’s instruction set and the SQL vocabulary
- Registering data frames as temporary views or tables to query them using Spark SQL
- Using the catalog to create, reference, and delete registered tables for SQL querying
- Translating common data manipulation instructions from Python to SQL, and vice versa
- Using SQL-style clauses inside certain PySpark methods
When it comes to manipulating tabular data, SQL is the reigning king. For multiple decades now, it has been the workhorse language for relational databases, and even today, learning how to tame it is a worthwhile exercise. Spark embraces SQL head-on: you can seamlessly blend SQL code within your Spark or PySpark program, making it easier than ever to migrate old SQL ETL jobs without reinventing the wheel.
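As a quick preview of what this blend looks like (a minimal sketch; the `genres` view and its data are made up for illustration), a data frame can be registered as a temporary view and then queried with plain SQL:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A small illustrative data frame (hypothetical data).
df = spark.createDataFrame(
    [("pop", 120), ("rock", 87), ("jazz", 45)],
    ["genre", "plays"],
)

# Register the data frame as a temporary view so SQL can see it.
df.createOrReplaceTempView("genres")

# Query the view with plain SQL; the result is a regular data frame.
top = spark.sql("SELECT genre, plays FROM genres ORDER BY plays DESC")
top.show()
```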
This chapter is dedicated to using SQL with, and on top of, PySpark. I cover how to move from one language to the other, how to use SQL-style syntax within data frame methods to write leaner code, and some of the trade-offs you may face. Finally, we blend Python and SQL code in the same program to get the best of both worlds.
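As a hint of that SQL-style syntax (again a minimal sketch, reusing the `df` data frame from the previous example), some data frame methods accept SQL string expressions directly:

```python
# where() and selectExpr() accept SQL-style string expressions,
# letting you mix SQL fragments into an otherwise Pythonic chain.
popular = df.where("plays > 50").selectExpr("genre", "plays * 2 AS doubled")
popular.show()
```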