
This is an excerpt from Manning's book Securing DevOps.
Figure 7.6 The third layer of the logging pipeline contains log consumers that process and analyze events for various purposes. In this diagram, a storage module passes raw logs to a storage layer; a monitoring module computes metrics and raises alerts to operators as needed; and a security module catches anomalies and fraud and then alerts operators.
The most basic component of the analysis layer is one that consumes raw events and writes them into a database in the storage layer. A logging pipeline should always retain raw logs for some period of time (90 days often seems to strike a reasonable compromise between retention cost and investigative needs). A consumer dedicated to this task can consume all messages sent to the broker and write them into a database or filesystem.
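A consumer of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not the book's implementation: the broker subscription is stubbed out as a plain list of messages, and the "storage layer" is a date-partitioned file on disk; in a real pipeline the loop would read from a broker client (e.g. a Kafka consumer) and the destination might be a database or object store.

```python
import json
import os
from datetime import datetime, timezone

def archive_event(raw_event: bytes, base_dir: str) -> str:
    """Append one raw event to a date-partitioned file and return its path."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    os.makedirs(base_dir, exist_ok=True)
    path = os.path.join(base_dir, f"raw-{day}.log")
    with open(path, "ab") as f:               # append-only: raw logs are never rewritten
        f.write(raw_event.rstrip(b"\n") + b"\n")
    return path

# Stand-in for messages pulled off the broker; a real consumer would
# subscribe to a topic and loop forever.
events = [
    json.dumps({"host": "web1", "msg": "GET /"}).encode(),
    json.dumps({"host": "web2", "msg": "POST /login"}).encode(),
]
for ev in events:
    archive_event(ev, "/tmp/rawlogs")
```

Because the consumer only appends bytes, it stays cheap and lossless: any later analysis can reparse the raw records however it likes.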
This is where the lifecycle of log data becomes important: you may not need to keep logs in a costly database for long if reloading them on demand is easy enough. Raw logs are generally useful to engineers only for a few days after they're generated, to track issues in applications. After a week, most people look at metric aggregates, and the raw logs go unused.
The exception is a security incident, where investigators always want raw logs. It's tempting to guess how far back investigators will expect raw logs to exist, but those guesses are usually wrong: sometimes you'll need logs from the day before the incident, sometimes from the year before. Instead of guessing, build a lifecycle that makes sense for your organization. For example: