2 Data ingestion patterns
This chapter covers
- Understanding what data ingestion involves and what it is responsible for
- Handling large datasets in memory by consuming them in small batches with the batching pattern
- Preprocessing extremely large datasets as smaller chunks spread across multiple machines with the sharding pattern
- Fetching and re-accessing the same dataset efficiently across multiple training rounds with the caching pattern
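To give a flavor of the first of these patterns before we dive in, the core idea of batching can be sketched as a generator that yields fixed-size chunks of a dataset, so only one small batch needs to live in memory at a time. This is a minimal, framework-free illustration; the function name `batches` and the toy dataset are hypothetical, not part of this chapter's implementations.

```python
from typing import Iterable, Iterator, List

def batches(items: Iterable[int], batch_size: int) -> Iterator[List[int]]:
    """Yield successive fixed-size batches from an iterable.

    Only the current batch is held in memory, so the full
    dataset never needs to be loaded at once.
    """
    batch: List[int] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # yield the final partial batch, if any
        yield batch

# Stand-in for a dataset too large to load at once.
stream = range(10)
result = list(batches(stream, 4))
# result == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Because the input is consumed lazily, the same function works whether `items` is a list, a file reader, or a network stream, which is what makes the batching pattern useful at scale.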
In the previous chapter, we discussed the growing scale of modern machine learning applications, such as larger datasets and heavier model-serving traffic. We also examined the complexity and challenges of building distributed systems in general and distributed machine learning systems in particular. We learned that a distributed machine learning system is usually a pipeline of many components, such as data ingestion, model training, serving, and monitoring, and that established patterns exist for designing each component to handle the scale and complexity of real-world machine learning applications.
Data ingestion is the first, and an unavoidable, step in a machine learning pipeline. All data analysts and scientists should have some exposure to it, whether that is hands-on experience building a data ingestion component or simply working with a dataset handed over by an engineering team or a customer.