This integration is a Docker Swarm container that uses Filebeat to collect logs from other Docker containers and forward them to your Logz.io account.
To use docker-collector-logs, you’ll set environment variables when you run the container. The Docker logs directory and docker.sock are mounted to the container, allowing Filebeat to collect the logs and metadata.
Upgrading to a newer version
Upgrading to a newer version of docker-collector-logs while it is already running will cause it to resend logs that are within the `ignoreOlder` timeframe. You can minimize log duplicates by setting the `ignoreOlder` parameter of the new container to a lower value.
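As a sketch, the upgrade might look like this. The `1h` value is illustrative — pick whatever fits your upgrade window:

```shell
# Remove the old service, then recreate it with a shorter
# back-shipping window to reduce duplicate logs after the upgrade.
docker service rm docker-collector-logs
docker service create --name docker-collector-logs \
  --env LOGZIO_TOKEN="<<LOG-SHIPPING-TOKEN>>" \
  --env ignoreOlder="1h" \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --mount type=bind,source=/var/lib/docker/containers,target=/var/lib/docker/containers \
  --mode global logzio/docker-collector-logs
```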
Version 0.1.0 of docker-collector-logs includes breaking changes. Please see the project’s change log for further information.
Deploy the Docker Swarm collector
Pull the Docker image
Download the logzio/docker-collector-logs image.
docker pull logzio/docker-collector-logs
Run the Docker image
For a complete list of options, see the parameters below the code block. 👇
docker service create --name docker-collector-logs \
  --env LOGZIO_TOKEN="<<LOG-SHIPPING-TOKEN>>" \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --mount type=bind,source=/var/lib/docker/containers,target=/var/lib/docker/containers \
  --mode global \
  logzio/docker-collector-logs
| Parameter | Description | Default |
|---|---|---|
| LOGZIO_TOKEN | **Required**. Your Logz.io account token. Replace `<<LOG-SHIPPING-TOKEN>>` with the token of the account you want to ship to. | -- |
| LOGZIO_REGION | Logz.io region code to ship the logs to. This region code changes depending on the region your account is hosted in. For example, accounts in the EU region have region code `eu`. | -- |
| LOGZIO_TYPE | The log type you’ll use with this Docker. Declare your log type for parsing purposes. Logz.io applies default parsing pipelines to its built-in log types. If you declare another type, contact support for assistance with custom parsing. Can’t contain spaces. | Docker image name |
| ignoreOlder | Set a time limit on back shipping logs. Upgrading to a newer version of docker-collector-logs while it is already running will cause it to resend logs that are within the `ignoreOlder` timeframe. | -- |
| additionalFields | Include additional fields with every message sent, formatted as `fieldName1=fieldValue1;fieldName2=fieldValue2`. | -- |
| matchContainerName | Comma-separated list of containers you want to collect the logs from. If a container’s name partially matches a name on the list, that container’s logs are shipped. Otherwise, its logs are ignored. Note: Can’t be used with `skipContainerName`. | -- |
| skipContainerName | Comma-separated list of containers you want to ignore. If a container’s name partially matches a name on the list, that container’s logs are ignored. Otherwise, its logs are shipped. Note: Can’t be used with `matchContainerName`. | -- |
| includeLines | Comma-separated list of regular expressions to match the lines that you want to include. Note: Regular expressions in this list should not contain commas. | -- |
| excludeLines | Comma-separated list of regular expressions to match the lines that you want to exclude. Note: Regular expressions in this list should not contain commas. | -- |
| renameFields | Rename fields with every message sent, formatted as `oldFieldName=newFieldName;oldFieldName2=newFieldName2`. | -- |
| HOSTNAME | Include your host name so it is displayed in your logs. | -- |
| multilinePattern | Include your regex pattern. See Filebeat’s official documentation for more information. | -- |
| multilineMatch | Specifies how Filebeat combines matching lines into an event. The settings are `after` or `before`. | after |
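Optional parameters are passed as additional `--env` flags on the same `docker service create` command. The following is a hypothetical example — the container names, field values, and regex pattern are illustrative, not part of the official instructions:

```shell
# Hypothetical example: ship logs only from containers whose names match
# "nginx" or "api", attach an environment field to every log line, and
# join indented continuation lines (e.g. stack traces) to the line above.
docker service create --name docker-collector-logs \
  --env LOGZIO_TOKEN="<<LOG-SHIPPING-TOKEN>>" \
  --env LOGZIO_REGION="eu" \
  --env matchContainerName="nginx,api" \
  --env additionalFields="env=production" \
  --env multilinePattern='^[[:space:]]' \
  --env multilineMatch="after" \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --mount type=bind,source=/var/lib/docker/containers,target=/var/lib/docker/containers \
  --mode global logzio/docker-collector-logs
```

Remember that `matchContainerName` and `skipContainerName` are mutually exclusive, so set at most one of them.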
By default, logs from the docker-collector-logs and docker-collector-metrics containers themselves are ignored.
Check Logz.io for your logs
Spin up your Docker containers if you haven’t done so already. Give your logs some time to get from your system to ours, and then open Kibana.
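If logs don’t show up, a quick first check is whether the collector service itself is healthy on each node:

```shell
# In global mode, the service should show one running task per node.
docker service ps docker-collector-logs

# Inspect the collector's own output for connection or shipping errors.
docker service logs docker-collector-logs
```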
If you still don’t see your logs, see log shipping troubleshooting.