OpenTelemetry

Logs

This project lets you configure the OpenTelemetry collector to send your collected logs to Logz.io.

Configuring OpenTelemetry to send your log data to Logz.io

Download OpenTelemetry collector

note

If you already have OpenTelemetry, proceed to the next step.

Create a dedicated directory on your host and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml.

Configure the Receivers

Open the configuration file and ensure it contains the receivers required to collect your logs.
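
For example, a minimal receivers section could use the contrib distribution's filelog receiver to tail log files. This is a sketch; the path is an assumption you should replace with your own log location:

receivers:
  filelog:
    # Hypothetical path - point this at your own log files
    include: ["/var/log/myapp/*.log"]
    start_at: beginning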

Configure the Exporters

In the same configuration file, add the following to the exporters section:

exporters:
  logzio/logs:
    account_token: <<LOGS-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>

Configure the Service Pipeline

In the service section of the configuration file, add the following configuration:

service:
  pipelines:
    logs:
      receivers: [<<YOUR-LOG-RECEIVER>>]
      exporters: [logzio/logs]

  • Replace <<YOUR-LOG-RECEIVER>> with the name of your log receiver (an assembled example follows).
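
Putting the pieces together, a complete config.yaml for logs might look like this sketch. It assumes the hypothetical filelog receiver from the earlier example; swap in your own receiver as needed:

receivers:
  filelog:
    include: ["/var/log/myapp/*.log"]   # hypothetical path

exporters:
  logzio/logs:
    account_token: <<LOGS-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [logzio/logs]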

Start the Collector

Run the following command:

<path/to>/otelcol-contrib --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector. If your configuration file has a name other than config.yaml, adjust the name in the command accordingly.

Check Logz.io for Your Logs

Give your data some time to get from your system to ours, then log in to your Logz.io account, and open the appropriate tab or dashboard to view your logs.

Metrics

This project lets you configure the OpenTelemetry collector to send your collected Prometheus-format metrics to Logz.io.

Configuring OpenTelemetry to send your metrics data to Logz.io

Download OpenTelemetry collector

note

If you already have OpenTelemetry, proceed to the next step.

Create a dedicated directory on your host and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml.

Configure the receivers

Open the configuration file and make sure it contains the receivers required for your source.
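
For example, here is a sketch of a prometheus receiver that scrapes a local endpoint; the job name and target are assumptions to adapt to your setup:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: my-app                    # hypothetical job name
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:9100"]     # hypothetical target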

Configure the exporters

In the same configuration file, add the following to the exporters section:

exporters:
  prometheusremotewrite:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>
    target_info:
      enabled: false

Replace the placeholders (indicated by the double angle brackets << >>) to match your specifics:

  • Replace <<LISTENER-HOST>> with the Logz.io listener URL for your region, using port 8052 for HTTP traffic or port 8053 for HTTPS traffic. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe.
  • Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to.
    Here's how to look up your Metrics token.
Configure the service pipeline

In the service section of the configuration file, add the following configuration:

service:
  pipelines:
    metrics:
      receivers: [<<YOUR-RECEIVER>>]
      exporters: [prometheusremotewrite]

  • Replace <<YOUR-RECEIVER>> with the name of your receiver (an assembled example follows).
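
Assembled, a minimal metrics config.yaml might look like this sketch, again assuming the hypothetical prometheus receiver from the earlier example:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: my-app                    # hypothetical job name
          static_configs:
            - targets: ["localhost:9100"]     # hypothetical target

exporters:
  prometheusremotewrite:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
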
Start the collector

Run the following command:

<path/to>/otelcol-contrib --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector. If your configuration file has a name other than config.yaml, adjust the name in the command accordingly.

Check Logz.io for your metrics

Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.

Traces

Deploy this integration to send traces from your OpenTelemetry installation to Logz.io.

Manual configuration

This integration includes:

  • Configuring the OpenTelemetry collector to receive traces from your OpenTelemetry installation and send them to Logz.io

On deployment, your OpenTelemetry instrumentation captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Set up your locally hosted OpenTelemetry installation to send traces to Logz.io

Before you begin, you'll need:

  • An active account with Logz.io
note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Download and configure OpenTelemetry collector

Create a dedicated directory on the host of your application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

The tail_sampling processor decides whether to sample a trace after all the spans in a request have been completed. By default, this configuration collects all traces that have a span completed with an error, all traces slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor; an example follows. For more on this, refer to the OpenTelemetry documentation.
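
For instance, the tail_sampling processor also supports a string_attribute policy, which could keep every trace carrying a given attribute value. This is a sketch; the policy name, attribute key, and value are assumptions:

processors:
  tail_sampling:
    policies:
      [
        {
          # Hypothetical policy: keep all traces from the production environment
          name: policy-env,
          type: string_attribute,
          string_attribute: {key: deployment.environment, values: [production]}
        }
      ]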

The configurable parameters in the Logz.io default configuration are:

Parameter | Description | Default
threshold_ms | Threshold for the span latency: all traces slower than this value are filtered in. | 1000
sampling_percentage | Sampling percentage for the probabilistic policy. | 10

If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

  • Under the exporters list:

    logzio/traces:
      account_token: <<TRACING-SHIPPING-TOKEN>>
      region: <<LOGZIO_ACCOUNT_REGION_CODE>>

  • Under the service list:

    extensions: [health_check, pprof, zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [tail_sampling, batch]
        exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

An example configuration file looks as follows:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Instrument the application

If your application is not yet instrumented, instrument the code as described in our tracing documents.

Start the collector

Run the following command:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.

Run the application

Run the application to generate traces.

Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Set up your OpenTelemetry installation using containerized collector to send traces to Logz.io

Before you begin, you'll need:

  • An active account with Logz.io

Instrument the application

If your application is not yet instrumented, instrument the code as described in our tracing documents.

Pull the Docker image for the OpenTelemetry collector

docker pull otel/opentelemetry-collector-contrib:0.78.0

Create a configuration file

Create a file config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Tail Sampling

The tail_sampling processor decides whether to sample a trace after all the spans in a request have been completed. By default, this configuration collects all traces that have a span completed with an error, all traces slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor. For more on this, refer to the OpenTelemetry documentation.

The configurable parameters in the Logz.io default configuration are:

Parameter | Description | Default
threshold_ms | Threshold for the span latency: all traces slower than this value are filtered in. | 1000
sampling_percentage | Sampling percentage for the probabilistic policy. | 10

If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

  • Under the exporters list:

    logzio/traces:
      account_token: <<TRACING-SHIPPING-TOKEN>>
      region: <<LOGZIO_ACCOUNT_REGION_CODE>>

  • Under the service list:

    extensions: [health_check, pprof, zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [tail_sampling, batch]
        exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

An example configuration file looks as follows:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

The tail_sampling processor decides whether to sample a trace after all the spans in a request have been completed. By default, this configuration collects all traces that have a span completed with an error, all traces slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor. For more on this, refer to the OpenTelemetry documentation.

The configurable parameters in the Logz.io default configuration are:

Parameter | Description | Default
threshold_ms | Threshold for the span latency: all traces slower than this value are filtered in. | 1000
sampling_percentage | Sampling percentage for the probabilistic policy. | 10

Run the container

Mount config.yaml as a volume in the docker run command and run it as follows.

Linux
docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Windows
docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.78.0

Run the application

note

Normally, when you run the OTEL collector in a Docker container, your application runs in separate containers on the same host. In that case, make sure that all your application containers share the same network as the OTEL collector container. One way to achieve this is to run all containers, including the OTEL collector, with a Docker Compose configuration, such as the sketch below; Docker Compose ensures that all containers it starts share the same network.
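
A minimal Docker Compose sketch along these lines might look as follows; the application service name and image are assumptions:

version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.78.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
  my-app:                    # hypothetical application service
    image: my-app:latest     # hypothetical image
    environment:
      # Standard OTel SDK variable pointing at the collector by its service name
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
    depends_on:
      - otel-collector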

Run the application to generate traces.

Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Troubleshooting

This section contains some guidelines for handling errors that you may encounter when trying to collect traces with OpenTelemetry.

Problem: No traces are sent

The code has been instrumented, but the traces are not being sent.

Possible cause - Collector not installed

The OpenTelemetry collector may not be installed on your system.

Suggested remedy

Check if you have an OpenTelemetry collector installed and configured to receive traces from your hosts.

Possible cause - Collector path not configured

If the collector is installed, it may not have the correct endpoint configured for the receiver.

Suggested remedy
  1. Check that the configuration file of the collector lists the following endpoints:

    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
  2. In the instrumentation code, make sure that the endpoint is specified correctly, for example via the standard SDK environment variable shown below. You can use Logz.io's integrations hub to ship your data.
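
If your instrumentation uses the standard OpenTelemetry SDK environment variables, a quick sanity check is to point it explicitly at the collector. This is a sketch for a locally hosted collector:

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317   # gRPC; use http://localhost:4318 for OTLP over HTTP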

Possible cause - Traces not generated

If the collector is installed and the endpoints are properly configured, the instrumentation code may be incorrect.

Suggested remedy
  1. Check if the instrumentation can output traces to a console exporter.
  2. Use a webhook to check if the traces are reaching the output.
  3. Use the metrics endpoint of the collector (http://<<COLLECTOR-HOST>>:8888/metrics) to see the number of spans received per receiver and the number of spans sent to the Logz.io exporter; example queries follow.
  • Replace <<COLLECTOR-HOST>> with the address of your collector host, e.g. localhost if the collector is hosted locally.
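
For example, assuming the collector runs locally, you can query its own metrics endpoint and filter for the span counters (exact metric names can vary between collector versions):

curl -s http://localhost:8888/metrics | grep otelcol_receiver_accepted_spans
curl -s http://localhost:8888/metrics | grep otelcol_exporter_sent_spans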

If the above steps do not work, refer to Logz.io's integrations hub and re-instrument the application.

Possible cause - Wrong exporter/protocol/endpoint

If traces are generated but not sent, the collector may be using an incorrect exporter, protocol, or endpoint.

The correct endpoints are:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"

Suggested remedy
  1. Activate debug logs in the configuration file of the collector as follows:

    service:
      telemetry:
        logs:
          level: "debug"

Debug logs indicate the status code of the HTTP/HTTPS POST request.

If the POST request is not successful, check whether the collector is configured to use the correct exporter, protocol, and endpoint.

If the POST request is successful, there will be an additional log with status code 200; if it failed, there will be another log with the reason for the failure.

Possible cause - Collector failure

If the debug logs are emitted but the traces are still not arriving, the collector logs need to be investigated.

Suggested remedy
  1. On Linux and macOS, view the collector logs:

    journalctl | grep otelcol

    To see only errors:

    journalctl | grep otelcol | grep Error

  2. Otherwise, navigate to http://localhost:8888/metrics

This endpoint exposes the collector metrics, letting you see the events happening within the collector: spans received, spans sent, and any errors.

Possible cause - Exporter failure

Traces may not be generated if the exporter is not configured properly.

Suggested remedy

If you are unable to export traces to a destination, this may be caused by the following:

  • There is a network configuration issue
  • The exporter configuration is incorrect
  • The destination is unavailable

To investigate this issue:

  1. Make sure that the exporters and service: pipelines are configured correctly.
  2. Check the collector logs as well as zpages for potential issues.
  3. Check your network configuration, such as firewall, DNS, or proxy.

For example, these metrics can provide information about the exporter:

# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0

Possible cause - Receiver failure

Traces may not be generated if the receiver is not configured properly.

Suggested remedy

If you are unable to receive data, this may be caused by the following:

  • There is a network configuration issue
  • The receiver configuration is incorrect
  • The receiver is defined in the receivers section, but not enabled in any pipelines (see the sketch below)
  • The client configuration is incorrect
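
For example, a receiver only takes effect once it is referenced in a pipeline; defining it under receivers alone is not enough. A minimal sketch:

receivers:
  otlp:
    protocols:
      grpc:

service:
  pipelines:
    traces:
      receivers: [otlp]          # the otlp receiver is only active because it is listed here
      exporters: [logzio/traces] # assumes the logzio/traces exporter is defined under exporters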

These metrics can provide information about the receiver:

# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34

# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0