OpenTelemetry
Logs
This project helps you configure the OpenTelemetry collector to send your logs to Logz.io.
Sending OpenTelemetry Logs to Logz.io
Download OpenTelemetry collector
If you already have OpenTelemetry, proceed to the next step.
Create a dedicated directory on your host and download the relevant OpenTelemetry collector for your host operating system.
Create a configuration file config.yaml.
Configure Receivers
Ensure your config.yaml file includes the necessary receivers to collect your logs.
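For example, if your logs are written to files on the host, the filelog receiver from the OpenTelemetry Collector Contrib distribution can collect them. This is a minimal sketch; the file path is a placeholder to adapt to your environment:

receivers:
  filelog:
    # Hypothetical path - point this at your own log files
    include: [ /var/log/myapp/*.log ]
    start_at: beginning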
Configure Exporters
Add the following to the exporters section of your config.yaml:
exporters:
  logzio/logs:
    account_token: <<LOGS-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
    headers:
      user-agent: logzio-opentelemetry-logs
Configure Service Pipeline
In the service section of your config.yaml, add:
service:
  pipelines:
    logs:
      receivers: [<<YOUR-LOG-RECEIVER>>]
      exporters: [logzio/logs]
- Replace <<YOUR-LOG-RECEIVER>> with the name of your log receiver.
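Putting it together, a minimal config.yaml for shipping file logs could look like the sketch below. It assumes the hypothetical filelog receiver shown above; replace the Logz.io placeholders as described:

receivers:
  filelog:
    include: [ /var/log/myapp/*.log ]   # hypothetical path
exporters:
  logzio/logs:
    account_token: <<LOGS-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
    headers:
      user-agent: logzio-opentelemetry-logs
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [logzio/logs]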
Start the Collector
Run:
<path/to>/otelcol-contrib --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector. If your configuration file has a different name than config.yaml, adjust the name in the command accordingly.
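For example, if the collector was downloaded to a hypothetical directory /opt/otelcol, the command would be:

/opt/otelcol/otelcol-contrib --config ./config.yaml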
View your logs
Allow some time for data ingestion, then open Logz.io to view your logs.
Metrics
This project helps you configure the OpenTelemetry collector to send your metrics to Logz.io.
Sending OpenTelemetry Metrics to Logz.io
Download OpenTelemetry collector
If you already have OpenTelemetry, proceed to the next step.
Create a dedicated directory on your host and download the relevant OpenTelemetry collector for your host operating system.
Create a configuration file config.yaml.
Configure receivers
Ensure your config.yaml file includes the necessary receivers for your source.
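For example, to collect basic host metrics you could use the hostmetrics receiver from the Contrib distribution. This is an illustrative sketch; the scrapers and collection interval should be adapted to your source:

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk: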
Configure exporters
Add the following to the exporters section of your config.yaml:
exporters:
  prometheusremotewrite:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>
      user-agent: logzio-opentelemetry-metrics
    target_info:
      enabled: false
Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:
- Replace <<LISTENER-HOST>> with the Logz.io Listener URL for your region, configured to use port 8052 for HTTP traffic, or port 8053 for HTTPS traffic.
- Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to. Look up your Metrics token.
Configure service pipeline
In the service section of your config.yaml, add:
service:
  pipelines:
    metrics:
      receivers: [<<YOUR-RECEIVER>>]
      exporters: [prometheusremotewrite]
- Replace <<YOUR-RECEIVER>> with the name of your receiver.
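For instance, with the hypothetical hostmetrics receiver from the earlier sketch, the pipeline would read:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [prometheusremotewrite]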
Start the collector
Run:
<path/to>/otelcol-contrib --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector. If your configuration file has a different name than config.yaml, adjust the name in the command accordingly.
View your metrics
Allow some time for data ingestion, then open the Logz.io Metrics tab.
Traces
This project helps you configure the OpenTelemetry collector to send your traces to Logz.io.
Manual configuration
On deployment, your OpenTelemetry instrumentation captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.
Before you begin, you'll need:
- An active Logz.io account
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Download and configure OpenTelemetry collector
Create a directory on your application's host and download the relevant OpenTelemetry collector for your host operating system.
Create a configuration file config.yaml with the following parameters:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.
Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |
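As an illustration of an additional policy, the sketch below adds a string_attribute policy that always samples traces from one service. The policy and service names are hypothetical; check the OpenTelemetry documentation for the full list of policy types and options:

processors:
  tail_sampling:
    policies:
      [
        {
          # Hypothetical policy: keep all traces from a specific service
          name: policy-checkout-service,
          type: string_attribute,
          string_attribute: {key: service.name, values: [checkout-service]}
        }
      ]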
If you already have an OpenTelemetry installation, add the following to the configuration file of your existing OpenTelemetry collector:
- Under the exporters list:
logzio/traces:
  account_token: <<TRACING-SHIPPING-TOKEN>>
  region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  headers:
    user-agent: logzio-opentelemetry-traces
- Under the service list:
extensions: [health_check, pprof, zpages]
pipelines:
  traces:
    receivers: [otlp]
    processors: [tail_sampling, batch]
    exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Example configuration file:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
Optional: Configure Span Metrics Collection
Span metrics collection is disabled by default. To enable it, add and modify the following sections in your config.yaml OpenTelemetry configuration.
Replace <<ENV-ID>> with the environment used for filtering in App360, <<LISTENER-HOST>> with the Logz.io metrics listener URL, and <<METRICS-SHIPPING-TOKEN>> with your Logz.io SPM metrics account token.
connectors:
  spanmetrics:
    aggregation_temporality: AGGREGATION_TEMPORALITY_CUMULATIVE
    dimensions:
      - name: rpc.grpc.status_code
      - name: http.method
      - name: http.status_code
      - name: cloud.provider
      - name: cloud.region
      - name: db.system
      - name: messaging.system
      - default: <<ENV-ID>>
        name: env_id
    dimensions_cache_size: 100000
    histogram:
      explicit:
        buckets:
          - 2ms
          - 8ms
          - 50ms
          - 100ms
          - 200ms
          - 500ms
          - 1s
          - 5s
          - 10s
    metrics_expiration: 5m
    resource_metrics_key_attributes:
      - service.name
      - telemetry.sdk.language
      - telemetry.sdk.name
  servicegraph:
    latency_histogram_buckets: [2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s]
    dimensions:
      - env_id
    store:
      ttl: 5s
      max_items: 100000
processors:
  # Rename metrics and labels to match Logz.io's requirements
  metricstransform/metrics-rename:
    transforms:
      - include: ^duration(.*)$$
        action: update
        match_type: regexp
        new_name: latency.$${1}
      - action: update
        include: calls
        new_name: calls_total
  metricstransform/labels-rename:
    transforms:
      - action: update
        include: ^latency
        match_type: regexp
        operations:
          - action: update_label
            label: span.name
            new_label: operation
      - action: update
        include: ^calls
        match_type: regexp
        operations:
          - action: update_label
            label: span.name
            new_label: operation
exporters:
  prometheusremotewrite/spm-logzio:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<METRICS-SHIPPING-TOKEN>> # Metrics account token for span metrics
      user-agent: "logzio-opentelemetry-apm"
    timeout: 30s
    add_metric_suffixes: false
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces, servicegraph, spanmetrics]
    metrics/spm-logzio:
      receivers: [spanmetrics, servicegraph]
      processors: [metricstransform/metrics-rename, metricstransform/labels-rename, batch]
      exporters: [prometheusremotewrite/spm-logzio]
Instrument the application
If your application isn't instrumented, begin by downloading the OpenTelemetry agent or library specific to your programming language. Logz.io supports popular open-source instrumentation libraries, including OpenTracing, Jaeger, OpenTelemetry, and Zipkin. Attach the agent, set up the necessary configuration options, and start your application. The agent will automatically instrument your application to capture telemetry data.
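For example, a service instrumented with an OpenTelemetry SDK or agent can usually be pointed at the collector through the standard OTLP environment variables. The service name and endpoint below are illustrative; adjust them to your setup:

export OTEL_SERVICE_NAME=my-service                         # hypothetical service name
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317    # collector's OTLP gRPC endpoint
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
# Start your instrumented application after setting these variables.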
Start the collector
Run:
<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
- Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
Then run the application to generate traces.
View your traces
Allow some time for data ingestion, then open your Tracing account.
Configure OpenTelemetry with a Containerized Collector
Before you begin, you'll need:
- An active Logz.io account
Instrument the application
If your application isn't instrumented, begin by downloading the OpenTelemetry agent or library specific to your programming language. Logz.io supports popular open-source instrumentation libraries, including OpenTracing, Jaeger, OpenTelemetry, and Zipkin. Attach the agent, set up the necessary configuration options, and start your application. The agent will automatically instrument your application to capture telemetry data.
Pull the Docker image for the OpenTelemetry collector
docker pull otel/opentelemetry-collector-contrib:0.111.0
Create a configuration file
Create a file config.yaml with the following content:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
  logging:
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Tail Sampling
tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.
Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |
If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:
- Under the exporters list:
logzio/traces:
  account_token: <<TRACING-SHIPPING-TOKEN>>
  region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  headers:
    user-agent: logzio-opentelemetry-traces
- Under the service list:
extensions: [health_check, pprof, zpages]
pipelines:
  traces:
    receivers: [otlp]
    processors: [tail_sampling, batch]
    exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Here is an example configuration file:
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Run the container
Mount the config.yaml file as a volume in the docker run command and run it as follows.
Linux
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.111.0
Replace <PATH-TO> with the path to the config.yaml file on your system.
Windows
docker run \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.111.0
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Run the application
When running the OTEL collector in a Docker container, your application should run in separate containers on the same host network. Ensure all containers share the same network. Using Docker Compose ensures that all containers, including the OTEL collector, share the same network configuration automatically.
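As a sketch of that setup, a docker-compose.yaml along the following lines runs the collector and your application on the same Compose network. The application service name, image, and config path are hypothetical:

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.111.0
    command: ["--config=/etc/otelcol-contrib/config.yaml"]
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
  my-app:                       # hypothetical application container
    image: my-app:latest        # hypothetical image
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
    depends_on:
      - otel-collector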
Run the application to generate traces.
View your traces
Allow some time for data ingestion, then open your Tracing account.
Troubleshooting
If traces are not being sent despite instrumentation, follow these steps:
Collector not installed
The OpenTelemetry collector may not be installed on your system.
Suggested remedy
Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts.
Collector path not configured
The collector may not have the correct endpoint configured for the receiver.
Suggested remedy
Verify the configuration file lists the following endpoints:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's integrations hub to ship your data.
Traces not generated
The instrumentation code may be incorrect even if the collector and endpoints are properly configured.
Suggested remedy
- Check if the instrumentation can output traces to a console exporter (see the example after this list).
- Use a webhook to check if the traces are going to the output.
- Check the metrics endpoint (http://<<COLLECTOR-HOST>>:8888/metrics) to see spans received and sent. Replace <<COLLECTOR-HOST>> with your collector's address.
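For instance, many OpenTelemetry SDKs let you switch the trace exporter to the console through a standard environment variable, which is a quick way to confirm that spans are being produced at all:

export OTEL_TRACES_EXPORTER=console   # print spans to stdout instead of exporting them
# Run your application and watch its output for span data.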
If issues persist, refer to Logz.io's integrations hub and re-instrument the application.
Wrong exporter/protocol/endpoint
Incorrect exporter, protocol, or endpoint configuration.
The correct endpoints are:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"
Suggested remedy
- Activate debug logs in the collector's configuration:
service:
  telemetry:
    logs:
      level: "debug"
Debug logs indicate the status code of the HTTP/HTTPS POST request. A successful POST request will log status code 200; failure reasons will also be logged.
If the POST request is not successful, check whether the collector is configured to use the correct exporter, protocol, and endpoint.
Collector failure
The collector may fail to generate traces despite sending debug logs.
Suggested remedy
On Linux and MacOS, view collector logs:
journalctl | grep otelcol
To only see errors:
journalctl | grep otelcol | grep Error
Otherwise, navigate to http://localhost:8888/metrics. This endpoint exposes the collector's internal metrics, which show events occurring within the collector, such as spans received, spans sent, and errors.
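For example, assuming curl is available on the host, you can query that endpoint and filter for span-related counters:

curl -s http://localhost:8888/metrics | grep -E 'otelcol_(receiver|exporter).*spans'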
Exporter failure
The exporter configuration may be incorrect, causing trace export issues.
Suggested remedy
If you are unable to export traces to a destination, this may be caused by the following:
- There is a network configuration issue.
- The exporter configuration is incorrect.
- The destination is unavailable.
To investigate this issue:
- Make sure that the exporters and service: pipelines are configured correctly.
- Check the collector logs and zpages for potential issues.
- Check your network configuration, such as firewall, DNS, or proxy.
Metrics like the following can provide insights:
# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0
Receiver failure
The receiver may not be configured correctly.
Suggested remedy
If you are unable to receive data, this may be caused by the following:
- There is a network configuration issue.
- The receiver configuration is incorrect.
- The receiver is defined in the receivers section, but not enabled in any pipelines.
- The client configuration is incorrect.
Metrics for receivers can help diagnose issues:
# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34
# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0