4 Using Fluentd to output log events
This chapter covers
- Buffering options and how applying buffers improves I/O efficiency
- Handling buffer overloads and other risks that come with buffering
- Using output plugins for files, MongoDB, and Slack
- Using out-of-the-box formatters to structure the data for the target
- Examining how buffer plugins behave and how that behavior enables (or could hinder) the processing of streams of log events
In Chapter 3 we saw how log events can be captured and how helper plugins such as parsers come into play. But capturing data is only of value if we can do something meaningful with it, such as delivering the log events to an endpoint in a format that makes them usable, for example storing them in a log analytics engine or sending them as a message to an ops team for investigation. This chapter shows how Fluentd enables us to do that. Having configured some source inputs, we now explore how to get the data back out. We look at how Fluentd can meet some of the claims made in Chapter 1, such as the value of getting important events to notification mechanisms rather than waiting for events to be aggregated and periodically analyzed.
This chapter will continue to make use of the Log Simulator, and we will also use a couple of other tools, such as MongoDB and Slack. As before, the complete configurations are available in the download pack from Manning or via the GitHub repository, allowing us to focus on the configuration of the relevant plugin(s).
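As a preview of the kind of configuration this chapter builds toward, the following is a minimal sketch of an output (match) block that writes buffered log events to a file. The tag pattern, file path, and buffer values shown here are illustrative placeholders rather than settings used in the later examples.

```
<match **>                     # match every tag; real configs usually use a narrower pattern
  @type file                   # core file output plugin
  path /var/log/fluent/demo    # illustrative output path
  <buffer>
    flush_interval 10s         # how often buffered chunks are flushed to the target
    chunk_limit_size 1m        # cap on the size of an individual buffer chunk
  </buffer>
</match>
```

The same structure recurs throughout the chapter: the output plugin named by `@type` determines where events go, while the nested buffer section governs when and how efficiently they get there.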