This chapter covers
- Drawing a parallel between PySpark’s instruction sets and the SQL vocabulary.
- Registering data frames as temporary views or tables to query them using Spark SQL.
- Using the catalog to create, reference, and delete registered tables for SQL querying.
- Translating common data manipulation instructions from Python to SQL and vice versa.
- Using SQL-style clauses inside certain PySpark methods.
My answer to "Python versus SQL, which one should I learn?" is "yes".
When it comes to manipulating tabular data, SQL is the reigning king. For multiple decades now, it has been the workhorse language for relational databases, and even today, learning how to tame it is a worthwhile exercise. Spark acknowledges the power of SQL head-on: you can seamlessly blend SQL code within your Spark or PySpark program, making it easier than ever to migrate old SQL ETL (extract, transform, load) jobs without reinventing the wheel.
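As a small taste of that blending, here is a minimal sketch: we register a data frame as a temporary view and then query it with plain SQL, getting a data frame back. The data, view name, and column names are made up for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_blending_sketch").getOrCreate()

# A hypothetical data frame to play with.
elements = spark.createDataFrame(
    [("Hydrogen", 1), ("Helium", 2), ("Lithium", 3)],
    ["name", "atomic_number"],
)

# Register the data frame as a temporary view so Spark SQL can see it.
elements.createOrReplaceTempView("elements")

# Query the view with plain SQL; the result comes back as a data frame.
light_elements = spark.sql(
    "SELECT name FROM elements WHERE atomic_number <= 2"
)
light_elements.show()
```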
This chapter is dedicated to using SQL with, and on top of, PySpark. I cover how we can move from one language to the other. I also cover how we can use a SQL-like syntax within data frame methods to speed up your code, along with some of the trade-offs you may face. Finally, we blend Python and SQL code together to get the best of both worlds.