8 Caching techniques

This chapter covers

  • Caching overview
  • HTTP Response Caching (client-side, intermediate, and server-side)
  • In-memory Caching
  • Distributed Caching (using SQL Server and Redis)

In Information Technology, the term "cache" describes a hardware component or a software mechanism that stores data so that future requests for that data can be served faster and, most importantly, without having to retrieve it from scratch. Good caching practices often result in significant performance benefits: lower latency, less CPU overhead, reduced bandwidth usage, and decreased costs.

From this definition, it's easy to see how adopting and implementing a caching strategy can bring invaluable optimization advantages. This is especially true for web applications and services (including Web APIs), since they often have to deal with recurring requests targeting the same resources, such as the same HTML page(s) or JSON result(s) being accessed by multiple users, to name some common examples. However, introducing a caching mechanism also adds some complexity to our code and can easily cause unwanted side effects when not implemented properly.

8.1 Caching overview

8.2 HTTP Response Caching

8.2.1 Manually setting the cache-control header
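
To give a rough idea of what this section is about, here is a minimal sketch of an action method that sets the cache-control header by hand; the controller name, route, and the 60-second max-age value are illustrative placeholders, not the chapter's actual sample code:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class SampleController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // Manually set the cache-control header: clients and
        // intermediate proxies may cache this response for up to 60 seconds.
        Response.Headers["cache-control"] = "public, max-age=60";
        return Ok("This response can be cached for up to 60 seconds.");
    }
}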

8.2.2 Adding a default caching directive
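
One possible way to apply a default directive to every response that doesn't set one explicitly is a small custom middleware registered in Program.cs; the restrictive no-cache values below are an assumption, chosen only to illustrate the idea:

// Program.cs (excerpt)
app.Use(async (context, next) =>
{
    // Set a restrictive default before the endpoint executes; any
    // endpoint that sets its own cache-control header later in the
    // pipeline will simply overwrite this value.
    context.Response.Headers["cache-control"] = "no-cache, no-store";
    await next();
});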

8.2.3 Defining Cache profiles
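
As a sketch, a cache profile can be registered at startup and then referenced by name from any action, avoiding the need to repeat the same [ResponseCache] settings over and over; the "Any-60" profile name and its values are placeholders:

// Program.cs (excerpt) - assumes "using Microsoft.AspNetCore.Mvc;"
builder.Services.AddControllers(options =>
{
    // Define a named, reusable set of caching directives.
    options.CacheProfiles.Add("Any-60", new CacheProfile
    {
        Location = ResponseCacheLocation.Any,
        Duration = 60
    });
});

// In any controller, the profile can then be referenced by name:
[HttpGet]
[ResponseCache(CacheProfileName = "Any-60")]
public IActionResult Get() => Ok("Cached according to the Any-60 profile.");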

8.2.4 Server-side Response Caching
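
Enabling ASP.NET Core's built-in Response Caching Middleware takes two steps, a service registration and a middleware activation, roughly like this:

// Program.cs (excerpt)
builder.Services.AddResponseCaching();

var app = builder.Build();

// Enable the response-caching middleware: cacheable responses can
// now be stored on the server and served without re-executing the
// endpoint that produced them.
app.UseResponseCaching();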

8.2.5 Response caching vs Client reload

8.3 In-memory Caching

8.3.1 Setting up the in-memory cache
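
In a typical ASP.NET Core project, the setup amounts to a single service registration in Program.cs:

// Program.cs (excerpt)
// Register the IMemoryCache service (with its default settings)
// in the dependency injection container.
builder.Services.AddMemoryCache();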

8.3.2 Injecting the IMemoryCache interface
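
A minimal sketch of constructor injection in a controller (the controller name is a placeholder):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

[ApiController]
[Route("[controller]")]
public class SampleController : ControllerBase
{
    private readonly IMemoryCache _memoryCache;

    // The IMemoryCache instance registered in Program.cs is
    // resolved by dependency injection and stored in a field.
    public SampleController(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }
}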

8.3.3 Using the in-memory cache
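
A minimal cache-aside sketch, assuming the _memoryCache field injected in the previous snippet; the cache key, the placeholder value, and the 120-second expiration are all assumptions:

[HttpGet]
public IActionResult Get()
{
    // Cache-aside pattern: check for a cached value first...
    if (!_memoryCache.TryGetValue<string>("sample-key", out var data))
    {
        // ...and on a cache miss, produce the value (a placeholder
        // here) and store it with an absolute expiration of 120 seconds.
        data = DateTime.UtcNow.ToString("O");
        _memoryCache.Set("sample-key", data,
            new MemoryCacheEntryOptions()
                .SetAbsoluteExpiration(TimeSpan.FromSeconds(120)));
    }
    return Ok(data);
}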

8.4 Distributed Caching

8.4.1 Distributed Cache Providers overview
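
Whichever provider we end up choosing, it is consumed through the same IDistributedCache abstraction; a usage sketch (with a placeholder key, value, and expiration) could look like this:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class SampleService
{
    private readonly IDistributedCache _cache;

    public SampleService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetDataAsync()
    {
        // Cache-aside: try the distributed cache first, recompute
        // the value on a miss, then store it for later requests.
        var data = await _cache.GetStringAsync("sample-key");
        if (data is null)
        {
            data = DateTime.UtcNow.ToString("O"); // placeholder for an expensive lookup
            await _cache.SetStringAsync("sample-key", data,
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
                });
        }
        return data;
    }
}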

8.4.2 SQL Server
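
A registration sketch for the SQL Server provider; the connection string name, schema, and table name are placeholders:

// Program.cs (excerpt) - requires the Microsoft.Extensions.Caching.SqlServer
// NuGet package; the backing table can be created beforehand with the
// dotnet sql-cache create command-line tool.
builder.Services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString =
        builder.Configuration.GetConnectionString("DefaultConnection");
    options.SchemaName = "dbo";
    options.TableName = "AppCache";
});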

8.4.3 Redis
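
A registration sketch for the Redis provider; the configuration key and instance name are placeholders:

// Program.cs (excerpt) - requires the
// Microsoft.Extensions.Caching.StackExchangeRedis NuGet package.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration["Redis:ConnectionString"];
    options.InstanceName = "MyWebApi_";
});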

8.5 Exercises

8.5.1 HTTP Response Caching

8.5.2 Cache Profiles

8.5.3 Server-side Response Caching

8.5.4 In-memory Caching

8.5.5 Distributed Caching

8.6 Summary