12 Optimizations

 

This chapter covers

  • Delving into the concept of mechanical sympathy
  • Understanding heap vs. stack and reducing allocations
  • Using standard Go diagnostics tooling
  • Understanding how the garbage collector works
  • Running Go inside Docker and Kubernetes

Before we begin this chapter, a disclaimer: in most contexts, writing readable, clear code is better than writing code that is optimized but more complex and difficult to understand. Optimization generally comes with a price, and we advocate that you follow this famous quote from software engineer Wes Dyer:

Make it correct, make it clear, make it concise, make it fast, in that order.

That doesn’t mean optimizing an application for speed and efficiency is prohibited. For example, we can try to identify the code paths that genuinely need to be optimized, whether to make our customers happy or to reduce our costs. Throughout this chapter, we discuss common optimization techniques; some are specific to Go, and some aren’t. We also discuss methods to identify bottlenecks so that we don’t work blindly.
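
For instance, one way to avoid working blindly is to pair a benchmark with a CPU profile using the standard tooling. The following is only a rough sketch built around a hypothetical sum function; the point is the workflow, not the function itself.

package sum

import "testing"

// sum is a hypothetical function whose cost we want to measure.
func sum(s []int64) int64 {
	var total int64
	for _, v := range s {
		total += v
	}
	return total
}

// BenchmarkSum measures sum over a fixed-size slice.
func BenchmarkSum(b *testing.B) {
	s := make([]int64, 1_000_000)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		sum(s)
	}
}

We can then run go test -bench=. -cpuprofile=cpu.out and inspect the profile with go tool pprof cpu.out to see where the time actually goes.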

12.1 #91: Not understanding CPU caches

Mechanical sympathy is a term coined by Jackie Stewart, a three-time F1 world champion:

You don’t have to be an engineer to be a racing driver, but you do have to have mechanical sympathy.

12.1.1 CPU architecture

12.1.2 Cache line
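
As an illustrative sketch (not this chapter’s own listing), the order in which we traverse a matrix changes how many cache lines we touch: walking row by row uses each loaded line fully, while walking column by column wastes most of it.

package main

import "fmt"

const n = 1024

// sumRowMajor walks the matrix in memory order: consecutive elements of a row
// share cache lines, so each loaded line is fully used before moving on.
func sumRowMajor(m [][]int64) int64 {
	var total int64
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			total += m[i][j]
		}
	}
	return total
}

// sumColMajor walks column by column: each access lands in a different row,
// so a new cache line is loaded for almost every element.
func sumColMajor(m [][]int64) int64 {
	var total int64
	for j := 0; j < n; j++ {
		for i := 0; i < n; i++ {
			total += m[i][j]
		}
	}
	return total
}

func main() {
	m := make([][]int64, n)
	for i := range m {
		m[i] = make([]int64, n)
	}
	fmt.Println(sumRowMajor(m), sumColMajor(m))
}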

12.1.3 Slice of structs vs. struct of slices
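
The trade-off can be sketched with two hypothetical layouts of the same data: a slice of structs interleaves the fields in memory, whereas a struct of slices keeps each field contiguous, which matters when we iterate over only one of them.

package main

// FooAoS is a slice-of-structs layout: a and b are interleaved, so iterating
// over a alone still pulls b values into the cache.
type FooAoS struct {
	a int64
	b int64
}

func sumAoS(foos []FooAoS) int64 {
	var total int64
	for i := range foos {
		total += foos[i].a
	}
	return total
}

// FooSoA is a struct-of-slices layout: each field is stored contiguously, so
// iterating over a touches only the cache lines holding a values.
type FooSoA struct {
	a []int64
	b []int64
}

func sumSoA(foos FooSoA) int64 {
	var total int64
	for i := range foos.a {
		total += foos.a[i]
	}
	return total
}

func main() {
	_ = sumAoS(make([]FooAoS, 1_000))
	_ = sumSoA(FooSoA{a: make([]int64, 1_000), b: make([]int64, 1_000)})
}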

12.1.4 Predictability
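
As a hedged illustration, a hardware prefetcher copes well with a constant-stride walk over a slice but cannot anticipate the pointer chasing of a linked list; the node type below is hypothetical.

package main

import "fmt"

type node struct {
	value int64
	next  *node
}

// sumList follows pointers; the address of the next element isn’t
// predictable, so upcoming data can’t easily be prefetched.
func sumList(head *node) int64 {
	var total int64
	for n := head; n != nil; n = n.next {
		total += n.value
	}
	return total
}

// sumSlice walks a contiguous block with a constant stride, an access
// pattern that hardware prefetchers handle well.
func sumSlice(s []int64) int64 {
	var total int64
	for i := range s {
		total += s[i]
	}
	return total
}

func main() {
	s := make([]int64, 1_000)
	var head *node
	for range s {
		head = &node{next: head}
	}
	fmt.Println(sumList(head), sumSlice(s))
}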

12.1.5 Cache placement policy

12.2 #92: Writing concurrent code that leads to false sharing
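
Here is a minimal sketch of the problem and one common mitigation, assuming 64-byte cache lines: two goroutines update distinct counters that would otherwise sit on the same line, and padding separates them.

package main

import "sync"

// Result holds two counters updated by different goroutines. Without the
// padding field, sumA and sumB would likely share a cache line, so writes
// from one goroutine would keep invalidating the line used by the other
// (false sharing). The 56 bytes of padding assume 64-byte cache lines.
type Result struct {
	sumA int64
	_    [56]byte
	sumB int64
}

func main() {
	var (
		r  Result
		wg sync.WaitGroup
	)
	wg.Add(2)
	go func() {
		defer wg.Done()
		for i := 0; i < 1_000_000; i++ {
			r.sumA++
		}
	}()
	go func() {
		defer wg.Done()
		for i := 0; i < 1_000_000; i++ {
			r.sumB++
		}
	}()
	wg.Wait()
}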

12.3 #93: Not taking into account instruction-level parallelism
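
For intuition, a rough sketch (not this chapter’s listing): a single accumulator creates one serial chain of additions, while two independent accumulators give the CPU separate chains it can overlap.

package main

import "fmt"

// sumSingle uses one accumulator: each addition depends on the previous one,
// forming a single serial dependency chain.
func sumSingle(s []int64) int64 {
	var total int64
	for i := 0; i < len(s); i++ {
		total += s[i]
	}
	return total
}

// sumTwo uses two independent accumulators, so the two addition chains can be
// executed in parallel and merged at the end.
func sumTwo(s []int64) int64 {
	var a, b int64
	for i := 0; i+1 < len(s); i += 2 {
		a += s[i]
		b += s[i+1]
	}
	if len(s)%2 == 1 {
		a += s[len(s)-1]
	}
	return a + b
}

func main() {
	s := make([]int64, 1_000)
	fmt.Println(sumSingle(s), sumTwo(s))
}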

12.4 #94: Not being aware of data alignment
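
A small sketch, assuming a 64-bit platform: reordering struct fields from largest to smallest removes padding the compiler would otherwise insert to keep fields aligned.

package main

import (
	"fmt"
	"unsafe"
)

// padded is poorly ordered: on 64-bit platforms the compiler inserts padding
// after each bool so that the following int64 stays 8-byte aligned.
type padded struct {
	b1 bool
	i1 int64
	b2 bool
	i2 int64
}

// compact sorts fields from largest to smallest, removing most of the padding.
type compact struct {
	i1 int64
	i2 int64
	b1 bool
	b2 bool
}

func main() {
	fmt.Println(unsafe.Sizeof(padded{}))  // typically 32 bytes
	fmt.Println(unsafe.Sizeof(compact{})) // typically 24 bytes
}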

12.5 #95: Not understanding stack vs. heap
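
As a quick sketch of the distinction, a local variable whose address is returned must outlive its function frame, so the compiler moves it to the heap, while a plain returned value can stay on the stack; the functions below are hypothetical.

package main

// stays returns a value: it is copied into the caller’s frame, so the local
// variable can live on the stack.
func stays() int {
	x := 42
	return x
}

// escapes returns a pointer to a local variable: the variable must outlive
// the function frame, so escape analysis moves it to the heap.
func escapes() *int {
	x := 42
	return &x
}

func main() {
	_ = stays()
	_ = escapes()
}

We can check the compiler’s decisions with go build -gcflags=-m, which reports lines such as “moved to heap: x” for escaping variables.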
