Datadog has great monitoring capabilities, including log monitoring.
When the monitored resource is a Kubernetes cluster, there are a couple of ways to get the logs out. The first-class solution is to use the Datadog agent for most of the roles:
- The agent is well built and well maintained.
- Compared to other log delivery agents like Fluentd or Logstash, the Datadog agent uses far fewer resources for its own activity.
- Logs, metrics, traces, and everything else the Datadog agent can transmit arrive already properly tagged and pre-formatted (depending on the agent config, of course).
- And if you are paying for Datadog, support cases involving their agent should get a better response than setups where third-party services are involved in log delivery (other agents, AWS Lambda, and so on).
And why is it good to use the agent's pre-formatting features? Well…
- Log messages are delivered in full (the multi_line rule joins stack traces and other multi-line entries into a single log message).
- Excluding log patterns can save on your Datadog bill, and your log dashboards will be much cleaner.
- Sensitive data can be masked before it ever leaves the node.
- Static log files can be collected from non-default log directories.
- Here you can find more processing rule types.
- And here, more patterns for known integrations. Log collection info usually appears at the end of each specific integration article.
I hope this helps you implement Datadog log collection from Kubernetes apps.

So, here is an example of log pre-processing for a Node.js app that runs on Kubernetes. In the Datadog Kubernetes DaemonSet setup there is an option to pass a log processing rule for Datadog in the app's annotations.
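A minimal sketch of such an annotation (the container name "web", the service name, and the date pattern are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
  annotations:
    # ad.datadoghq.com/<container_name>.logs takes a JSON array of log configs
    ad.datadoghq.com/web.logs: >-
      [{
        "source": "nodejs",
        "service": "my-node-app",
        "log_processing_rules": [{
          "type": "multi_line",
          "name": "log_start_with_date",
          "pattern": "\\d{4}-\\d{2}-\\d{2}"
        }]
      }]
spec:
  containers:
    - name: web
      image: my-node-app:1.0.0
```

The agent picks this up through Autodiscovery and applies the rule only to that container's logs.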
This means the app-specific log processing config is defined on the app itself, and not on the Datadog agent (where it lives by default).
And this is great when a log parsing rule is expected to change only for a specific app and you don't want to redeploy the Datadog DaemonSet.
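For comparison, a sketch of the same kind of rules defined the default way, in a logs config on the agent side (the file path, service name, and patterns here are illustrative):

```yaml
# e.g. conf.d/myapp.d/conf.yaml inside the agent
logs:
  - type: file
    path: /opt/myapp/logs/app.log    # a non-default log directory
    service: myapp
    source: custom
    log_processing_rules:
      # Join multi-line entries: a new log starts with an ISO date
      - type: multi_line
        name: new_log_start_with_date
        pattern: \d{4}-\d{2}-\d{2}
      # Drop noisy health checks to reduce ingestion volume and noise
      - type: exclude_at_match
        name: exclude_healthchecks
        pattern: GET /healthz
      # Mask email addresses before logs leave the node
      - type: mask_sequences
        name: mask_emails
        replace_placeholder: "[masked_email]"
        pattern: "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+"
```

Changing anything here means rolling the config change out to the agent DaemonSet, which is exactly what the annotation approach avoids.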
Don’t forget that all of this is only pre-processing; the main parsing and all the indexing happen in your Datadog log pipelines.
If the “source” name in your logs matches one of the known integrations, you will see a dynamic default pipeline for that integration, with ready-made pipeline rules.
You can just clone them and customize them. See the example below.
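For instance, setting the log source to a known integration name is enough to get its default pipeline (the container and service names below are hypothetical):

```yaml
# "source": "nginx" matches the known nginx integration, so Datadog
# enables the nginx default log pipeline, which you can then clone
# and customize under Logs > Pipelines.
metadata:
  annotations:
    ad.datadoghq.com/nginx.logs: '[{"source": "nginx", "service": "frontend"}]'
```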