
Jaeger

Manual configuration

Logz.io recommends that you use OpenTelemetry to gather trace transaction data from your system. Because of its versatility, OpenTelemetry has been widely adopted as the industry standard: it supports many additional capabilities, including collecting aggregated trace data, and it is the production-ready solution for tracing going forward.

This integration includes:

  • Installing the OpenTelemetry collector with Logz.io exporter on your application host
  • Configuring the collector to receive traces from your Jaeger installation and send them to Logz.io

On deployment, your Jaeger instrumentation captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Set up your locally hosted Jaeger installation to send traces to Logz.io

Before you begin, you'll need:

  • An application instrumented with Jaeger
  • An active Logz.io account

Download and configure OpenTelemetry collector

Create a dedicated directory on your application host and download the OpenTelemetry collector relevant to your host's operating system.

note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
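
For example, on a Linux amd64 host, the download might look like the following. This is a sketch only: the release version and asset name below are placeholders, so check the project's releases page for the exact names for your platform.

mkdir -p /opt/otel-collector && cd /opt/otel-collector
# v0.70.0 and the asset name are hypothetical; pick the latest contrib release for
# your platform from https://github.com/open-telemetry/opentelemetry-collector-contrib/releases
curl -L -o otelcontribcol_linux_amd64 \
  https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/download/v0.70.0/otelcontribcol_linux_amd64
chmod +x otelcontribcol_linux_amd64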

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  jaeger:
    protocols:
      thrift_compact:
        endpoint: "0.0.0.0:6831"
      thrift_binary:
        endpoint: "0.0.0.0:6832"
      grpc:
        endpoint: "0.0.0.0:14250"
      thrift_http:
        endpoint: "0.0.0.0:14268"

exporters:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE_PREFIX>>

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE_PREFIX>> with the prefix of the applicable region code. For example, us for us-east-1.
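
For example, for an account hosted in the us region (us-east-1), the exporter section would read (token placeholder kept as-is):

exporters:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: us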

tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.
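
For instance, to also keep every trace from a particular service, a string_attribute policy could be appended to the policies list above; the policy name and service name here are hypothetical examples:

{
  name: policy-my-service,
  type: string_attribute,
  string_attribute: {key: service.name, values: [my-service]}
}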

The configurable parameters in the Logz.io default configuration are:

Parameter            Description                                                                   Default
threshold_ms         Threshold for the span latency; traces slower than this value are included.   1000
sampling_percentage  Percentage of traces to sample using the probabilistic policy.                10
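
For example, to flag traces slower than 500 ms and sample 20% of the remaining traces, adjust the two corresponding policies in the configuration above:

{
  name: policy-slow,
  type: latency,
  latency: {threshold_ms: 500}
},
{
  name: policy-random-ok,
  type: probabilistic,
  probabilistic: {sampling_percentage: 20}
}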

Start the collector

Run the following command:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
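
To keep the collector running after the shell session ends, one option is to start it in the background and capture its output. This sketch assumes a Linux host and the hypothetical binary name from the download example earlier:

nohup ./otelcontribcol_linux_amd64 --config ./config.yaml > otelcol.log 2>&1 &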

Run the application

Run the application to generate traces.

Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Troubleshooting

If traces are not being sent despite instrumentation, follow these steps:

Collector not installed

The OpenTelemetry collector may not be installed on your system.

Suggested remedy

Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts.

Collector path not configured

The collector may not have the correct endpoint configured for the receiver.

Suggested remedy

  1. Verify the configuration file lists the following endpoints:

    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
  2. Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's integrations hub to ship your data.
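
A quick way to confirm the collector is actually listening is to probe its TCP receiver ports from the application host; the ports below come from the jaeger receiver configuration above (the UDP thrift ports cannot be checked this way):

nc -z localhost 14250 && echo "Jaeger gRPC receiver reachable"
nc -z localhost 14268 && echo "Jaeger thrift_http receiver reachable"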

Traces not generated

The instrumentation code may be incorrect even if the collector and endpoints are properly configured.

Suggested remedy

  1. Check if the instrumentation can output traces to a console exporter.
  2. Use a webhook to check if the traces are reaching the output.
  3. Check the metrics endpoint (http://<<COLLECTOR-HOST>>:8888/metrics) to see spans received and sent. Replace <<COLLECTOR-HOST>> with your collector's address.
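
For example, assuming the collector runs locally with the default metrics port, the span counters can be pulled with curl:

curl -s http://localhost:8888/metrics | grep -E 'otelcol_(receiver_accepted|receiver_refused|exporter_sent)_spans'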

If issues persist, refer to Logz.io's integrations hub and re-instrument the application.

Wrong exporter/protocol/endpoint

Incorrect exporter, protocol, or endpoint configuration.

The correct endpoints are:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"

Suggested remedy

  1. Activate debug logs in the collector's configuration:

     service:
       telemetry:
         logs:
           level: "debug"

Debug logs indicate the status code of the HTTP/HTTPS POST request.

If the POST request is not successful, check whether the collector is configured to use the correct exporter, protocol, and/or endpoint.

A successful POST request logs status code 200; failure reasons are also logged.

Collector failure

The collector may fail to generate traces despite sending debug logs.

Suggested remedy

  1. On Linux systems running systemd, view the collector logs:

    journalctl | grep otelcol

    To only see errors:

    journalctl | grep otelcol | grep Error
  2. Otherwise, navigate to the following URL: http://localhost:8888/metrics

This is the collector's metrics endpoint; it records events happening within the collector, such as spans received, spans sent, and any errors.

Exporter failure

The exporter configuration may be incorrect, causing trace export issues.

Suggested remedy

If you are unable to export traces to a destination, this may be caused by the following:

  • There is a network configuration issue.
  • The exporter configuration is incorrect.
  • The destination is unavailable.

To investigate this issue:

  1. Make sure that the exporters and service: pipelines are configured correctly.
  2. Check the collector logs and zpages for potential issues (see the example after this list).
  3. Check your network configuration, such as firewall, DNS, or proxy.
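
Because the configuration above enables the zpages extension on port 55679, the standard zpages debug pages can be inspected in a browser or with curl, for example:

curl -s http://localhost:55679/debug/pipelinez
curl -s http://localhost:55679/debug/tracez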

Metrics like the following can provide insights:

# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0

Receiver failure

The receiver may not be configured correctly.

Suggested remedy

If you are unable to receive data, this may be caused by the following:

  • There is a network configuration issue.
  • The receiver configuration is incorrect.
  • The receiver is defined in the receivers section, but not enabled in any pipelines (see the snippet after this list).
  • The client configuration is incorrect.
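
As a reminder, a receiver is only active when it is referenced in a pipeline. In the configuration above, that wiring looks like this:

service:
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]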

Metrics for receivers can help diagnose issues:

# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34

# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0