13 Centralizing logs with Fluentd and Elasticsearch


Applications generate lots of logs, which often aren’t very useful. As you scale up your apps across multiple Pods running in a cluster, it’s difficult to manage those logs using standard Kubernetes tooling. Organizations usually deploy their own logging framework, which uses a collect-and-forward model to read container logs and send them to a central store where they can be indexed, filtered, and searched. You’ll learn how to do that in this chapter using the most popular technologies in this space: Fluentd and Elasticsearch. Fluentd is the collector component, and it has some nice integrations with Kubernetes; Elasticsearch is the storage component and can run either as Pods in the cluster or as an external service.
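To make the collect-and-forward model concrete, here is a minimal sketch of how the collector configuration might look, packaged as a ConfigMap that a Fluentd DaemonSet could mount. This is illustrative only, not the exact setup we'll build in this chapter: the ConfigMap name, the file paths, and the Service name elasticsearch are all assumptions.

# Sketch of a Fluentd configuration in a ConfigMap: tail container log
# files on the node and ship the entries to Elasticsearch.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config        # assumed name, referenced by the DaemonSet spec
data:
  fluentd.conf: |
    <source>
      @type tail                            # follow log files as they grow
      path /var/log/containers/*.log        # kubelet symlinks container logs here
      pos_file /var/log/fluentd-containers.pos
      tag kubernetes.*
      <parse>
        @type json                          # Docker's json-file driver writes JSON lines
      </parse>
    </source>
    <match kubernetes.**>
      @type elasticsearch                   # needs the fluent-plugin-elasticsearch plugin
      host elasticsearch                    # assumed Service name for Elasticsearch
      port 9200
      logstash_format true                  # daily indexes, e.g. logstash-2021.01.01
    </match>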

You should be aware of a couple of points before we start. The first is that this model assumes your application logs are written to the container’s standard output streams so Kubernetes can find them. We covered that in chapter 7, with sample apps that wrote to standard out directly or used a logging sidecar to relay logs. The second is that the logging model in Kubernetes is very different from Docker’s. Appendix D in the ebook shows you how to use Fluentd with Docker, but with Kubernetes, we’ll take a different approach.
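As a reminder of the sidecar pattern from chapter 7, here is a sketch of a Pod where the app writes to a log file and a sidecar relays it to standard out, where Kubernetes can collect it. The image name and file paths are hypothetical:

# Sketch of the logging-sidecar pattern: the app writes to a file in a
# shared volume, and the sidecar tails that file to its own stdout.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
    - name: app
      image: my-app:1.0       # hypothetical app that logs to /logs/app.log
      volumeMounts:
        - name: logs
          mountPath: /logs
    - name: logger            # sidecar relays the log file to stdout
      image: busybox
      # -F follows by name, retrying until the app creates the file
      command: ['sh', '-c', 'tail -F /logs/app.log']
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs
      emptyDir: {}            # shared between the app and the sidecar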

13.1 How Kubernetes stores log entries

13.2 Collecting logs from nodes with Fluentd

13.3 Shipping logs to Elasticsearch

13.4 Parsing and filtering log entries

13.5 Understanding logging options in Kubernetes

13.6 Lab
