Java Traces with Kafka using OpenTelemetry

Manual configuration

This integration includes:

  • Downloading the OpenTelemetry Java agent to your application host
  • Installing the OpenTelemetry collector with Logz.io exporter
  • Establishing communication between the agent and collector

On deployment, the Java agent automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Set up auto-instrumentation for your Java application using Docker and send traces to Logz.io

This integration enables you to auto-instrument your Java application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.

Before you begin, you'll need:

  • A Java application without instrumentation
  • An active Logz.io account
  • Port 4317 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.

Download Java agent

Download the latest version of the OpenTelemetry Java agent to the host of your Java application.

Pull the Docker image for the OpenTelemetry collector

docker pull otel/opentelemetry-collector-contrib:0.78.0

Create a configuration file

Create a file config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Tail Sampling

tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

Parameter            Description                                                                   Default
threshold_ms         Threshold for the span latency; traces slower than this value are included.  1000
sampling_percentage  Percentage of traces to sample using the probabilistic policy.               10

If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

  • Under the exporters list:

    logzio/traces:
      account_token: <<TRACING-SHIPPING-TOKEN>>
      region: <<LOGZIO_ACCOUNT_REGION_CODE>>
      headers:
        user-agent: logzio-opentelemetry-traces

  • Under the service list:

    extensions: [health_check, pprof, zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [tail_sampling, batch]
        exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Here is an example configuration file:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Run the container

Mount config.yaml as a volume in the docker run command and run the container as follows.

Linux
docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Windows
docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.78.0


Attach the agent to the runtime and run it

note

When running the OTel collector in a Docker container, run your application in a separate container on the same network as the collector. Using Docker Compose ensures that all containers, including the OTel collector, share the same network configuration automatically.

Run the following command from the directory of your Java application:

note

If you produce and consume Kafka topics/messages from different applications, the Java agent command must be used with both applications in order to provide a full trace.

java -javaagent:<path/to>/opentelemetry-javaagent.jar \
-Dotel.traces.exporter=otlp \
-Dotel.metrics.exporter=none \
-Dotel.resource.attributes=service.name=<YOUR-SERVICE-NAME> \
-Dotel.exporter.otlp.endpoint=http://localhost:4317 \
-Dotel.instrumentation.messaging.experimental.receive-telemetry.enabled=true \
-jar target/*.jar
  • Replace <path/to> with the path to the directory where you downloaded the agent.
  • Replace <YOUR-SERVICE-NAME> with the name of your tracing service defined earlier.

Kafka OpenTelemetry Java Applications

There are some pre-written Java applications that instrument Kafka clients with OpenTelemetry.

You can download the relevant applications from the kafka-opentelemetry GitHub repository and compile the JAR file using the mvn install command.

Kafka Consumer Example

Run the Kafka consumer application with the OpenTelemetry Java agent attached, as follows:

java -javaagent:<path/to>/opentelemetry-javaagent.jar \
-Dotel.traces.exporter=otlp \
-Dotel.metrics.exporter=none \
-Dotel.resource.attributes=service.name=<YOUR-CONSUMER-NAME> \
-Dotel.exporter.otlp.endpoint=http://localhost:4317 \
-Dotel.instrumentation.messaging.experimental.receive-telemetry.enabled=true \
-jar kafka-consumer-agent/target/kafka-consumer-agent-1.0-SNAPSHOT-jar-with-dependencies.jar
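
For reference, a consumer along these lines might look like the sketch below. This is a minimal, hypothetical example rather than the repository's exact code; the broker address, group ID, and topic name are assumptions. The agent instruments KafkaConsumer.poll automatically, and records sent by an agent-instrumented producer carry a traceparent header that links the consumer spans to the producer's trace.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "demo-group");              // assumed consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // assumed topic
            while (true) {
                // The agent wraps poll() and, with receive-telemetry enabled,
                // emits receive/process spans linked to the producer's trace.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}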

Kafka Producer Example

Run the Kafka producer application with the OpenTelemetry Java agent attached, as follows:

java -javaagent:<path/to>/opentelemetry-javaagent.jar \
-Dotel.traces.exporter=otlp \
-Dotel.metrics.exporter=none \
-Dotel.resource.attributes=service.name=<YOUR-PRODUCER-NAME> \
-Dotel.exporter.otlp.endpoint=http://localhost:4317 \
-Dotel.instrumentation.messaging.experimental.receive-telemetry.enabled=true \
-jar kafka-producer-agent/target/kafka-producer-agent-1.0-SNAPSHOT-jar-with-dependencies.jar
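
The producer side is analogous. Below is a minimal, hypothetical sketch (again, the broker address and topic name are assumptions): the agent wraps KafkaProducer.send, creates the producer span, and injects the trace context into the record headers, so the application code needs no OpenTelemetry imports.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The agent creates a producer span around send() and injects a
            // traceparent header into the outgoing record.
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello")); // assumed topic
        }
    }
}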

Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Controlling the number of spans

To limit the number of outgoing spans, you can use the sampling option in the Java agent.

The sampler configures whether spans will be recorded for any call to SpanBuilder.startSpan.

System property          Environment variable     Description
otel.traces.sampler      OTEL_TRACES_SAMPLER      The sampler to use for tracing. Defaults to parentbased_always_on.
otel.traces.sampler.arg  OTEL_TRACES_SAMPLER_ARG  An argument to the configured sampler, if supported; for example, a ratio.

Supported values for otel.traces.sampler are:

  • "always_on": AlwaysOnSampler
  • "always_off": AlwaysOffSampler
  • "traceidratio": TraceIdRatioBased. otel.traces.sampler.arg sets the ratio.
  • "parentbased_always_on": ParentBased(root=AlwaysOnSampler)
  • "parentbased_always_off": ParentBased(root=AlwaysOffSampler)
  • "parentbased_traceidratio": ParentBased(root=TraceIdRatioBased). otel.traces.sampler.arg sets the ratio.

Configuration via Helm

You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. Helm manages packages of preconfigured Kubernetes resources, called charts.

logzio-monitoring allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.

Deploy the Helm chart

Add logzio-helm repo as follows:

helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update

Run the Helm deployment command

helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<<TRACING-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<<LOGZIO-REGION>>" \
--set logzio-k8s-telemetry.secrets.env_id="<<CLUSTER-NAME>>" \
logzio-monitoring logzio-helm/logzio-monitoring

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Parameter                   Description
<<TRACING-SHIPPING-TOKEN>>  Your traces shipping token.
<<CLUSTER-NAME>>            The cluster's name, to easily identify the telemetry data for each environment.
<<LISTENER-HOST>>           Your account's listener host.
<<LOGZIO-REGION>>           The name of your Logz.io traces region, e.g. us, eu.

Define the logzio-monitoring DNS name

In most cases, the DNS name will be logzio-k8s-telemetry.<<namespace>>.svc.cluster.local, where <<namespace>> is the namespace where you deployed the Helm chart and svc.cluster.local is your cluster domain name.

If you are not sure what your cluster domain name is, you can run the following command to look it up:

kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.<<namespace>> | grep Name | sed "s/Name:\skubernetes.<<namespace>>//"'

This command deploys a small pod that extracts your cluster domain name from your Kubernetes environment. You can remove the pod after it has returned the cluster domain name.

Download Java agent

Download the latest version of the OpenTelemetry Java agent to the host of your Java application.

Attach the agent to your Java application

note

If you produce and consume Kafka topics/messages from different applications, the Java agent command must be used with both applications in order to provide a full trace.

Add the following command to your Java application Dockerfile or equivalent:

java -javaagent:<path/to>/opentelemetry-javaagent-all.jar \
-Dotel.traces.exporter=otlp \
-Dotel.metrics.exporter=none \
-Dotel.resource.attributes=service.name=<<YOUR-SERVICE-NAME>> \
-Dotel.exporter.otlp.endpoint=http://<<logzio-monitoring-service-dns>>:4317 \
-Dotel.instrumentation.messaging.experimental.receive-telemetry.enabled=true \
-jar target/*.jar
  • Replace <path/to> with the path to the directory where you downloaded the agent.
  • Replace <<YOUR-SERVICE-NAME>> with a name for your service; it will appear under this name in the Logz.io Jaeger UI.
  • Replace <<logzio-monitoring-service-dns>> with the OpenTelemetry collector service DNS name obtained previously (the service IP is also allowed here).

Check Logz.io for your traces

Give your traces some time to get from your system to ours, then open Logz.io.

Customizing Helm chart parameters

Configure customization options

You can use the following options to update the Helm chart parameters:

  • Specify parameters using the --set key=value[,key=value] argument to helm install.

  • Edit the values.yaml.

  • Override default values with your own my_values.yaml and apply it in the helm install command.

If required, you can add the following optional parameters as environment variables:

Parameter                    Description
secrets.SamplingLatency      Threshold for the span latency; all traces slower than this value will be included. Default: 500.
secrets.SamplingProbability  Sampling percentage for the probabilistic policy. Default: 10.
Example

You can run the logzio-monitoring chart with your custom configuration file that takes precedence over the values.yaml of the chart.

For example:

note

With this example configuration, the collector samples ALL traces that contain at least one error span.

logzio-k8s-telemetry:
  tracesConfig:
    processors:
      tail_sampling:
        policies:
          [
            {
              name: error-in-policy,
              type: status_code,
              status_code: {status_codes: [ERROR]}
            },
            {
              name: slow-traces-policy,
              type: latency,
              latency: {threshold_ms: 400}
            },
            {
              name: health-traces,
              type: and,
              and: {
                and_sub_policy:
                  [
                    {
                      name: ping-operation,
                      type: string_attribute,
                      string_attribute: { key: http.url, values: [ /health ] }
                    },
                    {
                      name: main-service,
                      type: string_attribute,
                      string_attribute: { key: service.name, values: [ main-service ] }
                    },
                    {
                      name: probability-policy-1,
                      type: probabilistic,
                      probabilistic: {sampling_percentage: 1}
                    }
                  ]
              }
            },
            {
              name: probability-policy,
              type: probabilistic,
              probabilistic: {sampling_percentage: 20}
            }
          ]

helm install -f <PATH-TO>/my_values.yaml -n monitoring \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set traces.enabled=true \
logzio-monitoring logzio-helm/logzio-monitoring

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Replace <PATH-TO> with the path to your custom values.yaml file.

Uninstalling the Chart

The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.

To uninstall the logzio-monitoring deployment, use the following command:

helm uninstall logzio-monitoring

Troubleshooting

If traces are not being sent despite instrumentation, follow these steps:

Collector not installed

The OpenTelemetry collector may not be installed on your system.

Suggested remedy

Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts.

Collector path not configured

The collector may not have the correct endpoint configured for the receiver.

Suggested remedy

  1. Verify the configuration file lists the following endpoints:

     receivers:
       otlp:
         protocols:
           grpc:
             endpoint: "0.0.0.0:4317"
           http:
             endpoint: "0.0.0.0:4318"

  2. Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's integrations hub to ship your data.

Traces not generated

The instrumentation code may be incorrect even if the collector and endpoints are properly configured.

Suggested remedy

  1. Check if the instrumentation can output traces to a console exporter.
  2. Use a webhook to check whether the traces reach the output.
  3. Check the metrics endpoint (http://<<COLLECTOR-HOST>>:8888/metrics) to see spans received and sent. Replace <<COLLECTOR-HOST>> with your collector's address.

If issues persist, refer to Logz.io's integrations hub and re-instrument the application.

Wrong exporter/protocol/endpoint

Incorrect exporter, protocol, or endpoint configuration.

The correct endpoints are:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"

Suggested remedy

  1. Activate debug logs in the collector's configuration:

     service:
       telemetry:
         logs:
           level: "debug"

Debug logs indicate the status code of the HTTP/HTTPS POST request.

If the POST request is not successful, check whether the collector is configured to use the correct exporter, protocol, and endpoint.

A successful POST request logs status code 200; failure reasons are also logged.

Collector failure

The collector may fail to generate traces despite sending debug logs.

Suggested remedy

  1. On Linux and macOS, view collector logs:

    journalctl | grep otelcol

    To only see errors:

    journalctl | grep otelcol | grep Error
  2. Otherwise, open http://localhost:8888/metrics in a browser.

This is the collector's metrics endpoint. It exposes the events that occur within the collector, such as spans received, spans sent, and errors.

Exporter failure

The exporter configuration may be incorrect, causing trace export issues.

Suggested remedy

If you are unable to export traces to a destination, this may be caused by the following:

  • There is a network configuration issue.
  • The exporter configuration is incorrect.
  • The destination is unavailable.

To investigate this issue:

  1. Make sure that the exporters and service: pipelines are configured correctly.
  2. Check the collector logs and zpages for potential issues.
  3. Check your network configuration, such as firewall, DNS, or proxy.

Metrics like the following can provide insights:

# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.

# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0

Receiver failure

The receiver may not be configured correctly.

Suggested remedy

If you are unable to receive data, this may be caused by the following:

  • There is a network configuration issue.
  • The receiver configuration is incorrect.
  • The receiver is defined in the receivers section, but not enabled in any pipelines.
  • The client configuration is incorrect.

Metrics for receivers can help diagnose issues:

# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.

# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34


# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.

# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0