7 Use case: LRU cache

This chapter covers

  • Avoiding computing things twice
  • Introducing caching as a solution
  • Describing different types of caches
  • Designing an efficient solution for an LRU cache
  • Discussing LFU and other choices for handling priority
  • Discussing caching strategies
  • Reasoning about concurrency and synchronization
  • Describing how caches can be applied in a sentiment analyzer’s pipeline

This chapter is going to be different from what you have seen so far in this book. Rather than introducing a new data structure, we will combine the ones described in chapters 2–5 to build something more complex. Using lists, queues, hash tables, and so on as building blocks, we will create an advanced data structure that provides fast access to recently used values and computed results. One could argue, though, that a cache is more than just a data structure: it's a sophisticated component with several moving parts. In its simplest form, a cache can be implemented as an associative array, but we will see over the course of this chapter that it can become as complex as a web service.
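
To make the associative array idea concrete, here is a minimal sketch in Python. The names (expensive_computation, cached_computation) are placeholders for this illustration, not code from the book: the first call for a given argument computes and stores the result in a dictionary, and every later call with that argument is answered straight from the dictionary.

def expensive_computation(x):
    # Stand-in for a costly operation (a database query, an API call, ...)
    return x * x

_cache = {}          # the associative array: argument -> result

def cached_computation(x):
    # Compute the result at most once per distinct argument
    if x not in _cache:
        _cache[x] = expensive_computation(x)
    return _cache[x]

Note that a dictionary like this grows without bound: that limitation is exactly what section 7.3 raises, and what the eviction policies of sections 7.4 and 7.5 address.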

In this chapter, we’ll delve into the details of how caches of increasing complexity work, so after reading it, you should be able to articulate an informed opinion on the topic.

7.1 Don’t compute things twice

7.2 First attempt: Remembering values

7.2.1 Description and API

7.2.2 Fresh data, please

7.2.3 Handling asynchronous calls

7.2.4 Marking cache values as “Loading”

7.3 Memory is not enough (literally)

7.4 Getting rid of stale data: LRU cache

7.4.1 Sometimes you have to double down on problems

7.4.2 Temporal ordering

7.4.3 Performance

7.5 When fresher data is more valuable: LFU

7.5.1 So how do we choose?

7.5.2 What makes LFU different

7.5.3 Performance

7.5.4 Problems with LFU
