13 Centralizing logs with Fluentd and Elasticsearch
Applications generate lots of logs, and those logs often aren't very useful. As you scale up your apps across multiple Pods running in a cluster, it's very difficult to manage those logs using standard Kubernetes tooling. Organizations usually deploy their own logging framework, which uses a collect-and-forward model: a collector reads the container logs and sends them to a central store, where they can be indexed, filtered, and searched. In this chapter you'll learn how to do that using the most popular technologies in this space, Fluentd and Elasticsearch. Fluentd is the collector component, and it has some very nice integrations with Kubernetes; Elasticsearch is the storage component, and it can run as Pods in the cluster or as an external service.
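To make the collect-and-forward model concrete, here is a minimal sketch of a Fluentd configuration that tails container log files on a node and forwards them to Elasticsearch. The log path follows the usual Kubernetes node layout, but the hostname, port, and index settings are illustrative assumptions, not values from this chapter:

```
# Sketch of a collect-and-forward pipeline - values here are assumptions.
<source>
  @type tail                                  # collect: tail container log files on the node
  path /var/log/containers/*.log              # where Kubernetes surfaces container logs
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json                                # container runtimes typically write JSON log lines
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch                         # forward: ship logs to the central store
  host elasticsearch                          # assumed Service name for Elasticsearch
  port 9200
  logstash_format true                        # time-based indices, ready for filtering and searching
</match>
```

A real deployment adds enrichment between the source and the match block, so each log record carries Kubernetes metadata like the Pod name and namespace; that's what makes the central store searchable in a useful way.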
There are a couple of points to be aware of before we start. The first is that this model assumes your application logs are written to the container's standard output streams, so Kubernetes can find them. We covered that in chapter 7, with sample apps that wrote to standard out directly or used a logging sidecar to relay logs. The second is that the logging model in Kubernetes is very different from Docker's, so if you've read Learn Docker in a Month of Lunches, this chapter takes a different approach.