AWS EKS
The logzio-monitoring Helm Chart ships your EKS Fargate telemetry (logs, metrics, traces and security reports) to your Logz.io account.
Prerequisites
- Add the logzio-helm repository:
helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update
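To confirm that the repository was added and to see the chart versions it serves, you can run:

```shell
helm search repo logzio-helm/logzio-monitoring
```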
Send All Telemetry Data (logs, metrics, traces and security reports)
Send all of your telemetry data using a single Helm chart:
helm install -n monitoring --create-namespace \
--set logs.enabled=true \
--set logzio-logs-collector.secrets.logzioLogsToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-logs-collector.secrets.logzioRegion="<<LOGZIO-REGION>>" \
--set logzio-logs-collector.secrets.env_id="<<ENV-ID>>" \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
--set logzio-k8s-telemetry.secrets.MetricsToken="<<METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.ListenerHost="https://<<LISTENER-HOST>>:8053" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<<ENV-ID>>" \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<<TRACING-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<<LOGZIO-REGION>>" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.env_id="<<ENV-ID>>" \
--set logzio-k8s-telemetry.secrets.SpmToken="<<SPM-ACCOUNT-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.serviceGraph.enabled=true \
--set logzio-k8s-telemetry.k8sObjectsConfig.enabled=true \
--set logzio-k8s-telemetry.secrets.k8sObjectsLogsToken="<<LOG-SHIPPING-TOKEN>>" \
--set securityReport.enabled=true \
--set logzio-trivy.env_id="<<ENV-ID>>" \
--set logzio-trivy.secrets.logzioShippingToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-trivy.secrets.logzioListener="<<LISTENER-HOST>>" \
--set deployEvents.enabled=true \
--set logzio-k8s-events.secrets.logzioShippingToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-k8s-events.secrets.logzioListener="<<LISTENER-HOST>>" \
--set logzio-k8s-events.secrets.env_id="<<ENV-ID>>" \
logzio-monitoring logzio-helm/logzio-monitoring
| Parameter | Description |
|---|---|
| <<LOG-SHIPPING-TOKEN>> | Your logs shipping token. |
| <<LISTENER-HOST>> | Your account's listener host. |
| <<METRICS-SHIPPING-TOKEN>> | Your metrics shipping token. |
| <<SPM-ACCOUNT-SHIPPING-TOKEN>> | Your SPM account shipping token. |
| <<ENV-ID>> | The name for your environment's identifier, to easily identify the telemetry data for each environment. |
| <<TRACING-SHIPPING-TOKEN>> | Your traces shipping token. |
| <<LOGZIO-REGION>> | Your Logz.io region code. |
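After the install completes, you can verify the release and its pods before checking for data in Logz.io (standard Helm and kubectl commands; `logzio-monitoring` and `monitoring` are the release name and namespace used above):

```shell
# Check the release status
helm status -n monitoring logzio-monitoring

# Confirm the chart's pods are running
kubectl get pods -n monitoring
```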
Send your logs
helm install -n monitoring \
--set logs.enabled=true \
--set logzio-fluentd.secrets.logzioShippingToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-fluentd.secrets.logzioListener="<<LISTENER-HOST>>" \
--set logzio-fluentd.env_id="<<CLUSTER-NAME>>" \
--set logzio-fluentd.fargateLogRouter.enabled=true \
logzio-monitoring logzio-helm/logzio-monitoring
| Parameter | Description |
|---|---|
| <<LOG-SHIPPING-TOKEN>> | Your logs shipping token. |
| <<LISTENER-HOST>> | Your account's listener host. |
| <<CLUSTER-NAME>> | The cluster's name, to easily identify the telemetry data for each environment. |
Adding a custom log_type field from an annotation
To add a `log_type` field with a custom value to each log, use the annotation key `log_type` with your custom value. The annotation is automatically parsed into a `log_type` field with the provided value.
For example:
...
metadata:
  annotations:
    log_type: "my_type"
This results in the following log (JSON):
{
...
,"log_type": "my_type"
...
}
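For reference, here is a minimal sketch of where the annotation sits in a Deployment manifest; the annotation goes on the pod template so it is attached to the pods themselves, and `my-app` and its image are placeholder names for illustration only:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        log_type: "my_type"        # parsed into a log_type field on every log from these pods
    spec:
      containers:
        - name: my-app
          image: my-app:latest     # placeholder image
```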
Configuring Fluentd to concatenate multiline logs using a plugin
Fluentd splits multiline logs by default. If your original logs span multiple lines, you may find that they arrive in your Logz.io account split into several partial logs.
The Logz.io Docker image comes with a pre-built Fluentd filter plugin, `fluent-plugin-concat`, that can be used to concatenate multiline logs. You can view the full list of configuration options in the plugin's GitHub project.
Example
The following is an example of a multiline log sent from a deployment on a k8s cluster:
2021-02-08 09:37:51,031 - errorLogger - ERROR - Traceback (most recent call last):
File "./code.py", line 25, in my_func
1/0
ZeroDivisionError: division by zero
Fluentd's default configuration will split the above log into 4 logs, one for each line of the original log. In other words, each line break (`\n`) causes a split.
To avoid this, you can use `fluent-plugin-concat` and customize the configuration to meet your needs. The additional configuration is added to:
- `kubernetes.conf` for the RBAC/non-RBAC DaemonSet
- `kubernetes-containerd.conf` for the containerd DaemonSet
For the above example, we could use the following regular expression to identify the start of each log entry:
<filter **>
  @type concat
  key message                                           # The key that holds each part of the multiline log
  multiline_start_regexp /^[0-9]{4}-[0-9]{2}-[0-9]{2}/  # Identifies the start of a new log entry
</filter>
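A slightly extended sketch of the same filter: `separator` and `use_first_timestamp` are additional options from fluent-plugin-concat's documented option set, shown here as an illustration (verify them against the plugin's README before relying on them):

```
<filter **>
  @type concat
  key message                                            # the key that holds each part of the multiline log
  multiline_start_regexp /^[0-9]{4}-[0-9]{2}-[0-9]{2}/   # a new entry starts with a date stamp
  separator ""                                           # join the parts without inserting extra newlines
  use_first_timestamp true                               # keep the timestamp of the first line for the merged event
</filter>
```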
Monitoring Fluentd with Prometheus
To monitor Fluentd and collect its input and output metrics, enable the Prometheus configuration with the `logzio-fluentd.daemonset.fluentdPrometheusConf` and `logzio-fluentd.windowsDaemonset.fluentdPrometheusConf` parameters (both default to `false`).
When the Prometheus configuration is enabled, the pod collects and exposes Fluentd metrics on port 24231 at the `/metrics` endpoint.
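For example, to enable it on the Linux DaemonSet of an existing release and then inspect the endpoint locally (the pod name is a placeholder; look it up with `kubectl get pods -n monitoring`):

```shell
# Enable the Prometheus configuration for the Fluentd DaemonSet
helm upgrade --reuse-values -n monitoring \
  --set logzio-fluentd.daemonset.fluentdPrometheusConf=true \
  logzio-monitoring logzio-helm/logzio-monitoring

# Port-forward a Fluentd pod and fetch its metrics
kubectl port-forward -n monitoring pod/<FLUENTD-POD-NAME> 24231:24231 &
curl http://localhost:24231/metrics
```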
Modifying the configuration
You can see a full list of the possible configuration values in the logzio-fluentd Chart folder.
If you would like to modify any of the values found in the `logzio-fluentd` folder, use the `--set` flag with the `logzio-fluentd` prefix.
For instance, if there is a parameter called `someField` in the logzio-fluentd chart's `values.yaml` file, you can set it by adding the following to the `helm install` command:
--set logzio-fluentd.someField="my new value"
You can also add a `log_type` annotation with a custom value, which will be parsed into a `log_type` field with the same value.
Sending logs from nodes with taints
If you want to ship logs from any of the nodes that have a taint, make sure that the taint key values are listed in your DaemonSet/Deployment configuration as follows:
tolerations:
- key:
  operator:
  value:
  effect:
To determine if a node uses taints as well as to display the taint keys, run:
kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
For example:
--set logzio-fluentd.daemonset.tolerations[0].key=node-role.kubernetes.io/master --set logzio-fluentd.daemonset.tolerations[0].effect=NoSchedule
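The same toleration can be kept in a values file instead of indexed `--set` flags; this is simply the YAML form of the flags above (add `operator` and `value` if your taint carries a value):

```yaml
logzio-fluentd:
  daemonset:
    tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
```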
:::note
You need to use a Helm client with version v3.9.0 or above.
:::
Encounter an issue? See our user guide.
Send your metrics
helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
--set logzio-k8s-telemetry.secrets.MetricsToken="<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.ListenerHost="<<LISTENER-HOST>>" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<<P8S-LOGZIO-NAME>>" \
--set logzio-k8s-telemetry.secrets.env_id="<<CLUSTER-NAME>>" \
--set logzio-k8s-telemetry.collector.mode=standalone \
logzio-monitoring logzio-helm/logzio-monitoring
| Parameter | Description |
|---|---|
| <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> | Your metrics shipping token. |
| <<P8S-LOGZIO-NAME>> | The name for the environment's metrics, to easily identify the metrics for each environment. |
| <<CLUSTER-NAME>> | The cluster's name, to easily identify the telemetry data for each environment. |
| <<LISTENER-HOST>> | Your account's listener host. |
Encounter an issue? See our user guide.
Customize the metrics collected by the Helm chart
The default configuration uses the Prometheus receiver with the following scrape jobs:
- Cadvisor: Scrapes container metrics
- Kubernetes service endpoints: These jobs scrape metrics from the node exporters, from kube-state-metrics, from any other service for which the `prometheus.io/scrape: true` annotation is set, and from services that expose Prometheus metrics at the `/metrics` endpoint.
To customize your configuration, edit the `config` section in the `values.yaml` file.
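As an illustration, an additional scrape job added under the Prometheus receiver's `scrape_configs` could look like the sketch below. The job name and target are placeholders, and the exact key under which this block nests in the chart's `values.yaml` should be taken from the chart itself:

```yaml
# Standard Prometheus receiver scrape job (sketch); nest it under the chart's config section.
scrape_configs:
  - job_name: my-custom-app                                        # placeholder job name
    scrape_interval: 30s
    static_configs:
      - targets: ["my-app.my-namespace.svc.cluster.local:9100"]    # placeholder target
```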
Check Logz.io for your metrics
Give your metrics some time to get from your system to ours.
Install the pre-built dashboard to enhance the observability of your metrics.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
Encounter an issue? See our EKS troubleshooting guide.
Send your traces
helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<<TRACES-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<<LOGZIO-REGION>>" \
--set logzio-k8s-telemetry.secrets.env_id="<<CLUSTER-NAME>>" \
logzio-monitoring logzio-helm/logzio-monitoring
| Parameter | Description |
|---|---|
| <<TRACES-SHIPPING-TOKEN>> | Your traces shipping token. |
| <<CLUSTER-NAME>> | The cluster's name, to easily identify the telemetry data for each environment. |
| <<LISTENER-HOST>> | Your account's listener host. |
| <<LOGZIO-REGION>> | Name of your Logz.io region, e.g. us or eu. |
Encounter an issue? See our Distributed Tracing troubleshooting.
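Once the collector is running, point your instrumented applications at its OTLP endpoint. The Service name below is a placeholder (find the real one with `kubectl get svc -n monitoring`), and 4318/4317 are the standard OTLP HTTP/gRPC ports an OpenTelemetry Collector listens on by default:

```yaml
# In your application's container spec (placeholder Service name):
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://<COLLECTOR-SERVICE-NAME>.monitoring.svc.cluster.local:4318"
```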
Send traces with SPM
helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<<TRACES-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<<LOGZIO-REGION>>" \
--set logzio-k8s-telemetry.secrets.env_id="<<CLUSTER-NAME>>" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.SpmToken="<<SPM-METRICS-SHIPPING-TOKEN>>" \
logzio-monitoring logzio-helm/logzio-monitoring
| Parameter | Description |
|---|---|
| <<TRACES-SHIPPING-TOKEN>> | Your traces shipping token. |
| <<CLUSTER-NAME>> | The cluster's name, to easily identify the telemetry data for each environment. |
| <<LOGZIO-REGION>> | Name of your Logz.io region, e.g. us or eu. |
| <<SPM-METRICS-SHIPPING-TOKEN>> | A [token](https://app.logz.io/#/dashboard/settings/manage-accounts) for the Metrics account that is dedicated to your Service Performance Monitoring feature. |
Modifying the configuration for metrics and traces
You can see a full list of the possible configuration values in the logzio-telemetry Chart folder.
If you would like to modify any of the values found in the `logzio-telemetry` folder, use the `--set` flag with the `logzio-k8s-telemetry` prefix.
For instance, if there is a parameter called `someField` in the logzio-telemetry chart's `values.yaml` file, you can set it by adding the following to the `helm install` command:
--set logzio-k8s-telemetry.someField="my new value"
Scan your cluster for security vulnerabilities
helm install -n monitoring \
--set securityReport.enabled=true \
--set logzio-trivy.env_id="<<CLUSTER-NAME>>" \
--set logzio-trivy.secrets.logzioShippingToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-trivy.secrets.logzioListener="<<LISTENER-HOST>>" \
logzio-monitoring logzio-helm/logzio-monitoring
| Parameter | Description |
|---|---|
| <<LOG-SHIPPING-TOKEN>> | Your logs shipping token. |
| <<LISTENER-HOST>> | Your account's listener host. |
| <<CLUSTER-NAME>> | The cluster's name, to easily identify the telemetry data for each environment. |
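To confirm that the security-report components are running, you can list the chart's pods; the filter below assumes the Trivy-related pods carry `trivy` in their names:

```shell
kubectl get pods -n monitoring | grep -i trivy
```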
Uninstalling the Chart
The `uninstall` command is used to remove all the Kubernetes components associated with the chart and to delete the release.
To uninstall the logzio-monitoring deployment, use the following command:
helm uninstall -n monitoring logzio-monitoring
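If the `monitoring` namespace was created only for this chart and nothing else runs in it, you can remove it as well:

```shell
kubectl delete namespace monitoring
```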