This chapter covers:
- Writing job queries using U-SQL
- Creating U-SQL jobs
- Creating a Data Lake Analytics service
- Estimating appropriate parallelization for U-SQL jobs
In the last chapter, you used Azure Stream Analytics to output raw data, using a passthrough query. The passthrough query takes the incoming data and passes it unchanged to the output, in this case files in an Azure Data Lake store (ADL).
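As a quick recap, a passthrough query selects every field from the streaming input and routes it to the configured output without any transformation. The following is a minimal sketch; the input and output alias names (hubinput, lakeoutput) are placeholders standing in for whatever aliases you defined on your Stream Analytics job.

```sql
-- Passthrough query: read every field from the streaming input
-- and write it unchanged to the output.
-- 'hubinput' and 'lakeoutput' are placeholder aliases; use the input
-- and output aliases configured on your own Stream Analytics job.
SELECT *
INTO lakeoutput
FROM hubinput
```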
Figure 7.1. Lambda architecture with Azure PaaS Speed layer

This is the latest example of prep work for batch processing, which includes loading files into storage and saving groups of messages into files. The Azure Storage account, Data Lake store, and Event Hubs services lay the groundwork for building a batch processing analytics system in Azure. With files in the ADL store, you're ready to start batch processing.