Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)


Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.

This implementation uses a Filebeat DaemonSet to collect Kubernetes logs from your OKE cluster and ship them to Logz.io.

You have three options for deploying this DaemonSet:

  • Standard configuration
  • Autodiscover configuration - The standard configuration, which also uses Filebeat's autodiscover and hints system. Learn more about autodiscover in our blog and webinar.
  • Custom configuration - Upload a DaemonSet with your own configuration.

If you are sending multiline logs, see the relevant section below for further details.

Deploy Filebeat as a DaemonSet on Kubernetes

Before you begin, make sure destination port 5015 is open on your firewall for outgoing traffic.

Store your credentials

Save your shipping credentials as a Kubernetes secret. Customize the sample command below to your specifics before running it.

kubectl create secret generic logzio-logs-secret \
--from-literal=logzio-logs-shipping-token='<<LOG-SHIPPING-TOKEN>>' \
--from-literal=logzio-logs-listener='<<LISTENER-HOST>>' \
--from-literal=cluster-name='<<CLUSTER-NAME>>' \
-n kube-system

Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:

  • Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.

  • Replace <<LISTENER-HOST>> with the listener host for the region your account is hosted in.

  • Replace <<CLUSTER-NAME>> with your cluster's name.
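For reference, the DaemonSet consumes these secret keys as container environment variables. The fragment below is illustrative only - the key names follow the secret created above, but the actual manifest in the repo is authoritative:

```yaml
env:
  - name: LOGZIO_LOGS_SHIPPING_TOKEN
    valueFrom:
      secretKeyRef:
        name: logzio-logs-secret
        key: logzio-logs-shipping-token
  - name: LOGZIO_LOGS_LISTENER_HOST
    valueFrom:
      secretKeyRef:
        name: logzio-logs-secret
        key: logzio-logs-listener
  - name: CLUSTER_NAME
    valueFrom:
      secretKeyRef:
        name: logzio-logs-secret
        key: cluster-name
```

If you rename the secret or its keys, the corresponding `secretKeyRef` entries must be updated to match.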


Run the relevant command for your type of deployment.

Deploy the standard configuration
kubectl apply -f <<standard-configuration-file>>
Deploy the standard configuration with Filebeat autodiscover enabled

Autodiscover allows Filebeat to adapt its settings as changes happen in your cluster. By defining configuration templates, the autodiscover subsystem can monitor services as they start running. See Elastic's documentation to learn more about Filebeat autodiscover.

kubectl apply -f <<autodiscover-configuration-file>>
Deploy a custom configuration

If you want to apply your own custom configuration, download the standard configmap.yaml file from the GitHub repo and apply your changes. Make sure to keep the file structure unchanged.

Run the following command to download the file:


Make your changes only to the parameters under filebeat.yml. The filebeat.yml field contains a basic Filebeat configuration. Do not change the 'output' field (indicated in the example below). See Elastic's documentation to learn more about Filebeat configuration options.

Note that the parameter token: ${LOGZIO_LOGS_SHIPPING_TOKEN} under fields determines the token used to verify your account. It is required.


Filebeat requires a file extension specified for the log input.

filebeat.yml: |-
  filebeat.inputs:
  # ...
  # Start editing your configuration here
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata: ~
  # ...
  # Do not edit anything beyond this point. (Do not change 'fields' and 'output'.)
  fields:
    logzio_codec: ${LOGZIO_CODEC}
    token: ${LOGZIO_LOGS_SHIPPING_TOKEN}
    cluster: ${CLUSTER_NAME}
    type: ${LOGZIO_TYPE}
  fields_under_root: true
  ignore_older: ${IGNORE_OLDER}
  output:
    logstash:
      hosts: ["${LOGZIO_LOGS_LISTENER_HOST}:5015"]
      ssl:
        certificate_authorities: ['/etc/pki/tls/certs/SectigoRSADomainValidationSecureServerCA.crt']

Run the following to deploy your custom Filebeat configuration:

kubectl apply -f <<Your-custom-configuration-file.yaml>>

Check for your logs

Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.

If you still don't see your logs, see Kubernetes log shipping troubleshooting.

Configuring Filebeat to concatenate multiline logs

Filebeat splits multiline logs by default. If your original logs span multiple lines, you may find that they arrive in your account split into several partial logs.

Filebeat offers configuration options that can be used to concatenate multiline logs. The configuration is managed differently, depending on your deployment method:

  • Standard configuration: If you are using a standard configuration (but not autodiscover), use an explicit multiline configuration. See Filebeat's official documentation for the configuration options.

    When using an explicit configuration, you will need to create a single regex expression that covers all of your pods. It also means that Filebeat will need to be reconfigured more often, with the introduction of every new use case.

  • Autodiscover configuration: If you are using autodiscover hints & annotations, add an annotation to your deployment. See Filebeat's official documentation for the configuration options.

    Hints and annotations support the option to manage regex expressions separately for each component. This greatly simplifies the process, making it possible to add a dedicated regex expression to each pod, without needing to change anything on Filebeat itself.


The following is an example of a multiline log sent from a deployment on a k8s cluster:

2021-02-08 09:37:51,031 - errorLogger - ERROR - Traceback (most recent call last):
File "./", line 25, in my_func
ZeroDivisionError: division by zero

Filebeat's default configuration will split the above log into separate logs, one for each line of the original. In other words, each line break (\n) causes a split.

You can overcome this behavior by configuring Filebeat to meet your needs.

Example of an explicit configuration for concatenating multiline logs

To add an explicit configuration to your Filebeat, edit your filebeat.yml file in a text editor and make the appropriate changes under the filebeat.inputs section.

For the above example, we could use the following regex expression to demarcate the start of our example log. This configuration example is set to identify the first log in a multiline log and concatenate the log lines that follow until it identifies the next log that matches the regex expression. In other words, there is no explicit regex expression to demarcate the end of a multiline log.

- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
  multiline.type: pattern
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

See Filebeat's official documentation for additional configuration options.
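To make the semantics of pattern/negate/match concrete, here is a minimal Python sketch (not Filebeat's actual implementation) of how these settings group raw lines into events. The second, single-line INFO log is invented for illustration:

```python
import re

# The same timestamp pattern used in the Filebeat configuration above.
PATTERN = re.compile(r'^[0-9]{4}-[0-9]{2}-[0-9]{2}')

def group_multiline(lines):
    """Sketch of multiline.negate=true, multiline.match=after:
    a line matching the pattern starts a new event; every
    non-matching line is appended to the preceding event."""
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)           # timestamped line: new event
        else:
            events[-1] += "\n" + line     # continuation line: concatenate
    return events

raw = [
    "2021-02-08 09:37:51,031 - errorLogger - ERROR - Traceback (most recent call last):",
    'File "./", line 25, in my_func',
    "ZeroDivisionError: division by zero",
    "2021-02-08 09:37:52,012 - appLogger - INFO - request served",  # invented example
]

events = group_multiline(raw)
# The three traceback lines collapse into one event; the INFO line
# starts a second event.
```

With the default (no multiline) behavior, the same input would produce four separate events instead of two.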

Example for using hints & annotations to concatenate multiline logs

If you're using Filebeat autodiscover hints, you can use annotations to identify multiline logs and concatenate them.

You'll first need to configure Filebeat to enable the hints system, and then add annotations to the relevant components when you deploy them to your cluster.

Enable Filebeat's hints system

First, enable Filebeat's hints system. In your filebeat.yml file, set hints.enabled: true under the filebeat.autodiscover section. For example:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true # This part enables the hints
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
Add multiline annotations to your deployment

Whenever you plan to deploy a component to your cluster and want the hints system to detect the multiline logs, you'll need to add multiline annotations.

For the above log example, you can add the following annotations to your deployment:

co.elastic.logs/multiline.type: 'pattern'
co.elastic.logs/multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
co.elastic.logs/multiline.negate: 'true'
co.elastic.logs/multiline.match: 'after'
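These annotations belong on the Pod template, not on the Deployment object itself, so that the hints provider sees them on the running pods. An illustrative fragment (the deployment name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical name
spec:
  template:
    metadata:
      annotations:
        co.elastic.logs/multiline.type: 'pattern'
        co.elastic.logs/multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
        co.elastic.logs/multiline.negate: 'true'
        co.elastic.logs/multiline.match: 'after'
```

Because each component carries its own annotations, different pods can use different multiline patterns without any change to the Filebeat DaemonSet.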

The above configuration ensures that Filebeat will look for log lines that match the regex under multiline.pattern and concatenate all subsequent lines, until it reaches the next regex match.

See Filebeat's official documentation for additional configuration options.