Thus far, we haven’t been concerned with the form in which massive data arrives at our disposal. All the algorithms we have gotten to know so far can be applied to continuously arriving data as well as to historical data residing in a large database system. The three chapters in part 2 present algorithms and data structures (sketches) whose design considerations and application context are driven by the continuous arrival of data tuples; such sequences are known as data streams. Here, due to the transient nature of the data at hand, algorithms have to operate efficiently and update their knowledge of the stream after each tuple seen. We achieve this by keeping sketches of a data stream. Some of them, like random samples, are general and can answer many queries about the data. Others, like the t-digest, are specialized: the data structure is tailored to return a specific feature of the data, such as different (tail) percentiles. All in all, picturing a lot of data arriving at nonuniform speeds and, once operated on, vanishing into oblivion is a good starting point for things to come.
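
To make the per-tuple update concrete, here is a minimal sketch of one such general-purpose technique: reservoir sampling (Vitter's Algorithm R), which maintains a uniform random sample of a stream of unknown length using memory proportional only to the sample size. The function name and stream interface are illustrative choices, not a prescribed API.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of size k over a stream of
    unknown length, touching each tuple exactly once (Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)      # fill the reservoir with the first k tuples
        else:
            j = rng.randint(0, i)    # pick a slot uniformly from [0, i]
            if j < k:
                sample[j] = item     # item survives with probability k / (i + 1)
    return sample

# Usage: a 10-element uniform sample of a million-tuple stream
print(reservoir_sample(range(1_000_000), 10))
```

Because each tuple is inspected once and then discarded, the sample remains available for many ad hoc queries later, even though the stream itself is gone.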