8 Caching techniques

 

This chapter covers

  • Caching overview
  • HTTP response caching (client-side, intermediate, and server-side)
  • In-memory caching
  • Distributed caching (using SQL Server or Redis)

In information technology, the term cache describes a hardware component or software mechanism that stores data so that future requests for that data can be served faster and, most importantly, without being retrieved from scratch. Good caching practices often result in lower latency, less CPU overhead, reduced bandwidth use, and decreased costs.

Based on this definition, we can see that adopting a caching strategy can provide invaluable optimization advantages. These advantages are especially important for web applications and services (including web APIs), which often have to deal with recurring requests targeting the same resources, such as the same HTML pages or JSON results accessed by multiple users. Introducing a caching mechanism also adds complexity to our code, however, and can easily cause unwanted side effects if we don't implement it properly.
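The core pattern behind every caching technique in this chapter is the same: look the data up in the cache first, and only retrieve it from scratch on a miss, storing the result for subsequent requests until it expires. As a framework-agnostic illustration (a minimal sketch in Python, not the ASP.NET Core APIs covered later; the class and function names are hypothetical), the hit/miss flow with a time-to-live looks like this:

```python
import time

class SimpleCache:
    """A minimal time-based cache: each value is stored with a timestamp
    and served until its time-to-live (TTL) expires."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get_or_compute(self, key, compute):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]       # cache hit: serve the stored value
        value = compute()         # cache miss: retrieve from scratch
        self._store[key] = (value, now)
        return value

# Simulate an expensive operation (e.g., a database query) and count how
# often it actually runs.
calls = 0
def expensive_query():
    global calls
    calls += 1
    return "result"

cache = SimpleCache(ttl_seconds=60)
cache.get_or_compute("users", expensive_query)   # miss: runs the query
cache.get_or_compute("users", expensive_query)   # hit: served from cache
print(calls)  # the expensive query ran only once
```

Every approach in this chapter, from the `cache-control` header to Redis, is a variation of this flow; what changes is where the stored copy lives (the client, an intermediate proxy, server memory, or a distributed store) and how expiration is controlled.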

8.1 Caching overview

8.2 HTTP response caching

8.2.1 Setting the cache-control header manually

8.2.2 Adding a default caching directive

8.2.3 Defining cache profiles

8.2.4 Server-side response caching

8.2.5 Response caching vs. client reload

8.3 In-memory caching

8.3.1 Setting up the in-memory cache

8.3.2 Injecting the IMemoryCache interface

8.3.3 Using the in-memory cache

8.4 Distributed caching

8.4.1 Distributed cache providers overview

8.4.2 SQL Server

8.4.3 Redis