Non-RBAC setup

For Kubernetes, a DaemonSet ensures that some or all nodes run a copy of a pod. This implementation uses a Fluentd DaemonSet to collect Kubernetes logs. Fluentd is flexible and has the plugins needed to forward logs to third-party services such as Logz.io.

The logzio-k8s image comes pre-configured for Fluentd to gather all logs from the Kubernetes node environment and append the proper metadata to the logs.

  1. Build your DaemonSet configuration

    Paste the sample configuration file below into a local YAML file that you’ll use to deploy the DaemonSet.

    For a complete list of options, see the environment variables below the code block. 👇

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd-logzio
      namespace: kube-system
      labels:
        k8s-app: fluentd-logzio
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        matchLabels:
          k8s-app: fluentd-logzio
      template:
        metadata:
          labels:
            k8s-app: fluentd-logzio
            version: v1
            kubernetes.io/cluster-service: "true"
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          containers:
          - name: fluentd
            image: logzio/logzio-k8s:latest
            env:
              - name: LOGZIO_TOKEN
                value: <<SHIPPING-TOKEN>>
              - name: LOGZIO_URL
                value: https://<<LISTENER-HOST>>:8071
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          terminationGracePeriodSeconds: 30
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
    

    Environment variables

    LOGZIO_TOKEN
    Your Logz.io account token. Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to.
    LOGZIO_URL
    Logz.io listener URL to ship the logs to. Replace <<LISTENER-HOST>> with your region’s listener host (for example, listener.logz.io). For more information on finding your account’s region, see Account region.
    FLUENTD_SYSTEMD_CONF enabled
    If you don’t set up systemd in the container, Fluentd ships Systemd::JournalError log messages. To suppress these messages, set this to disable.
    output_include_time true
    If true, appends a timestamp to your logs when they’re processed. Otherwise, false.
    buffer_type file
    Specifies which plugin to use as the backend.
    buffer_path /var/log/Fluentd-buffers/stackdriver.buffer
    Path of the buffer.
    buffer_queue_full_action block
    Controls the behavior when the queue becomes full.
    buffer_chunk_limit 2M
    Maximum size of a chunk allowed.
    buffer_queue_limit 6
    Maximum length of the output queue.
    flush_interval 5s
    Time to wait before invoking the next buffer flush, in seconds.
    max_retry_wait 30s
    Maximum time to wait between retries, in seconds.
    num_threads 2
    Number of threads to flush the buffer.
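
    Most of these settings ship with sensible defaults in the logzio-k8s image. If you need to override one, for example to silence the Systemd::JournalError messages described above, you can add it to the container’s env block alongside the required variables. A sketch (only FLUENTD_SYSTEMD_CONF is shown as an override):

    ```yaml
    env:
      - name: LOGZIO_TOKEN
        value: <<SHIPPING-TOKEN>>
      - name: LOGZIO_URL
        value: https://<<LISTENER-HOST>>:8071
      # Suppress Systemd::JournalError messages when systemd isn't set up
      - name: FLUENTD_SYSTEMD_CONF
        value: disable
    ```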
  2. Deploy the DaemonSet

    Run this command to deploy the DaemonSet you created in step 1.

    kubectl create -f /path/to/daemonset/yaml/file
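
    Before moving on, you can sanity-check the rollout. These commands assume kubectl is pointed at the target cluster; the label selector matches the k8s-app label in the sample configuration:

    ```shell
    # Confirm the DaemonSet exists and has scheduled pods on your nodes
    kubectl get daemonset fluentd-logzio -n kube-system

    # Check that the Fluentd pods are running
    kubectl get pods -n kube-system -l k8s-app=fluentd-logzio
    ```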
    
  3. Check Logz.io for your logs

    Give your logs a few minutes to get from your system to ours, and then open Kibana.

    If you still don’t see your logs, see log shipping troubleshooting.

RBAC setup

For Kubernetes, a DaemonSet ensures that some or all nodes run a copy of a pod. This implementation uses a Fluentd DaemonSet to collect Kubernetes logs. Fluentd is flexible and has the plugins needed to forward logs to third-party services such as Logz.io.

The logzio-k8s image comes pre-configured for Fluentd to gather all logs from the Kubernetes node environment and append the proper metadata to the logs.

  1. Build your DaemonSet configuration

    Paste the sample configuration file below into a local YAML file that you’ll use to deploy the DaemonSet.

    For a complete list of options, see the environment variables below the code block. 👇

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: fluentd
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: fluentd
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - namespaces
      verbs:
      - get
      - list
      - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: fluentd
    roleRef:
      kind: ClusterRole
      name: fluentd
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: fluentd
      namespace: kube-system
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd-logzio
      namespace: kube-system
      labels:
        k8s-app: fluentd-logzio
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        matchLabels:
          k8s-app: fluentd-logzio
      template:
        metadata:
          labels:
            k8s-app: fluentd-logzio
            version: v1
            kubernetes.io/cluster-service: "true"
        spec:
          serviceAccountName: fluentd
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          containers:
          - name: fluentd
            image: logzio/logzio-k8s:latest
            env:
              - name: LOGZIO_TOKEN
                value: <<SHIPPING-TOKEN>>
              - name: LOGZIO_URL
                value: https://<<LISTENER-HOST>>:8071
            resources:
              limits:
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
            volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          terminationGracePeriodSeconds: 30
          volumes:
          - name: varlog
            hostPath:
              path: /var/log
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
    

    Environment variables

    LOGZIO_TOKEN
    Your Logz.io account token. Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to.
    LOGZIO_URL
    Logz.io listener URL to ship the logs to. Replace <<LISTENER-HOST>> with your region’s listener host (for example, listener.logz.io). For more information on finding your account’s region, see Account region.
    FLUENTD_SYSTEMD_CONF enabled
    If you don’t set up systemd in the container, Fluentd ships Systemd::JournalError log messages. To suppress these messages, set this to disable.
    output_include_time true
    If true, appends a timestamp to your logs when they’re processed. Otherwise, false.
    buffer_type file
    Specifies which plugin to use as the backend.
    buffer_path /var/log/Fluentd-buffers/stackdriver.buffer
    Path of the buffer.
    buffer_queue_full_action block
    Controls the behavior when the queue becomes full.
    buffer_chunk_limit 2M
    Maximum size of a chunk allowed.
    buffer_queue_limit 6
    Maximum length of the output queue.
    flush_interval 5s
    Time to wait before invoking the next buffer flush, in seconds.
    max_retry_wait 30s
    Maximum time to wait between retries, in seconds.
    num_threads 2
    Number of threads to flush the buffer.
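
    As in the non-RBAC setup, these settings have sensible defaults in the logzio-k8s image. To override one, such as suppressing the Systemd::JournalError messages described above, add it to the container’s env block next to the required variables. A sketch (only FLUENTD_SYSTEMD_CONF is shown as an override):

    ```yaml
    env:
      - name: LOGZIO_TOKEN
        value: <<SHIPPING-TOKEN>>
      - name: LOGZIO_URL
        value: https://<<LISTENER-HOST>>:8071
      # Suppress Systemd::JournalError messages when systemd isn't set up
      - name: FLUENTD_SYSTEMD_CONF
        value: disable
    ```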
  2. Deploy the DaemonSet

    Run this command to deploy the DaemonSet you created in step 1.

    kubectl create -f /path/to/daemonset/yaml/file
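
    To confirm that both the RBAC objects and the DaemonSet came up, you can run a quick check. These commands assume kubectl is configured for the target cluster; the service account name matches the sample configuration above:

    ```shell
    # Verify the ClusterRoleBinding grants the Fluentd service account read access to pods
    kubectl auth can-i list pods --as=system:serviceaccount:kube-system:fluentd

    # Check that the Fluentd pods are running
    kubectl get pods -n kube-system -l k8s-app=fluentd-logzio
    ```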
    
  3. Check Logz.io for your logs

    Give your logs a few minutes to get from your system to ours, and then open Kibana.

    If you still don’t see your logs, see log shipping troubleshooting.