
Service Performance Monitoring via App360

App360 is a high-level monitoring dashboard within Logz.io that enables you to monitor your tracing services and operations. This integration allows you to configure Service Performance Monitoring with the OpenTelemetry collector and send spans and span metrics from your OpenTelemetry installation to Logz.io.

Log in to your Logz.io account and navigate to the current instructions page inside the Logz.io app. Install the pre-built dashboard to enhance the observability of your metrics.

To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.

Architecture overview

This integration is based on OpenTelemetry. It works as an add-on to existing OpenTelemetry installations. If you need to set up OpenTelemetry first, refer to our documentation on OpenTelemetry.

The integration includes:

  • Configuring the OpenTelemetry collector to receive spans generated by your application instrumentation and send the spans and span metrics to Logz.io

On deployment, your OpenTelemetry instrumentation captures spans from your application and forwards them to the collector, which exports the spans and span metrics data to your Logz.io account.

note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Set up your locally hosted OpenTelemetry installation to send spans and span metrics to Logz.io

Before you begin, you'll need:

  • An application instrumented with OpenTelemetry, or any other supported instrumentation based on OpenTracing, Zipkin, or Jaeger
  • Service Performance Monitoring dashboard activated
  • An active account with Logz.io
  • A Logz.io span metrics account

Download and configure OpenTelemetry collector

Create a dedicated directory on the host of your application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: :12345
  prometheus:
    config:
      global:
        external_labels:
          p8s_logzio_name: spm-otel
      scrape_configs:
        - job_name: 'spm'
          scrape_interval: 15s
          static_configs:
            - targets: [ "0.0.0.0:8889" ]

exporters:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
  prometheus:
    endpoint: "localhost:8889"
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  spanmetrics:
    metrics_exporter: prometheus
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
    # Additional list of dimensions on top of:
    # - service.name
    # - operation
    # - span.kind
    # - status.code
    dimensions:
      # If the span is missing http.method, the processor will insert
      # the http.method dimension with value 'GET'.
      # For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
      # - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.method
        default: GET
      # If a default is not provided, the http.status_code dimension will be omitted
      # if the span does not contain http.status_code.
      # For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
      # - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      # - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.status_code

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics,tail_sampling,batch]
      exporters: [logzio/traces]
    metrics/spanmetrics:
      # This receiver is just a dummy and never used.
      # Added to pass validation requiring at least one receiver in a pipeline.
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    metrics:
      receivers: [prometheus]
      exporters: [logging,prometheusremotewrite/spm]
  telemetry:
    logs:
      level: "debug"

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.

Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071.
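For reference, here is an illustrative sketch of the exporters section with the placeholders filled in; the token values below are made up, and the region and listener host assume an account hosted on AWS US East:

exporters:
  logzio/traces:
    account_token: aBcDeFgHiJkLmNoPqRsTuVwXyZ012345   # illustrative token
    region: us                                        # illustrative region code
  prometheusremotewrite/spm:
    endpoint: https://listener.logz.io:8053
    headers:
      Authorization: Bearer aBcDeFgHiJkLmNoPqRsTuVwXyZ543210   # illustrative SPM metrics token
  prometheus:
    endpoint: "localhost:8889"
  logging: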

The tail_sampling processor defines whether to sample a trace after all the spans in a request have been completed. By default, this configuration collects all traces that have a span that was completed with an error, all traces that are slower than 1000 ms, and 10% of the rest of the traces.

You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

Parameter | Description | Default
threshold_ms | Threshold for the span latency - all traces slower than the threshold value will be filtered in. | 1000
sampling_percentage | Sampling percentage for the probabilistic policy. | 10
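For example, to filter in traces slower than 500 ms and sample 20% of the remaining traces, you could adjust the two relevant policies as follows (a minimal sketch showing only the changed policies; keep the error policy and the rest of the configuration as shown above):

processors:
  tail_sampling:
    policies:
      [
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 500}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 20}
        }
      ]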

If you already have an OpenTelemetry installation, add the parameters described in the next steps to the configuration file of your existing OpenTelemetry collector.

Add Logz.io exporter to your OpenTelemetry collector

Add the following parameters to the configuration file of your OpenTelemetry collector:

  • Under the receivers list:
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: :12345
  prometheus:
    config:
      global:
        external_labels:
          p8s_logzio_name: spm-otel
      scrape_configs:
        - job_name: 'spm'
          scrape_interval: 15s
          static_configs:
            - targets: [ "0.0.0.0:8889" ]
  • Under the exporters list:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
  prometheus:
    endpoint: "localhost:8889"
  • Under the processors list:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  spanmetrics:
    metrics_exporter: prometheus
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
    # Additional list of dimensions on top of:
    # - service.name
    # - operation
    # - span.kind
    # - status.code
    dimensions:
      # If the span is missing http.method, the processor will insert
      # the http.method dimension with value 'GET'.
      # For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
      # - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.method
        default: GET
      # If a default is not provided, the http.status_code dimension will be omitted
      # if the span does not contain http.status_code.
      # For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
      # - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      # - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.status_code
  • Under the service: pipelines list:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics,tail_sampling,batch]
      exporters: [logzio/traces]
    metrics/spanmetrics:
      # This receiver is just a dummy and never used.
      # Added to pass validation requiring at least one receiver in a pipeline.
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    metrics:
      receivers: [prometheus]
      exporters: [logging,prometheusremotewrite/spm]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.

Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071.

An example configuration file looks as follows:


receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: :12345
  prometheus:
    config:
      global:
        external_labels:
          p8s_logzio_name: spm-otel
      scrape_configs:
        - job_name: 'spm'
          scrape_interval: 15s
          static_configs:
            - targets: [ "0.0.0.0:8889" ]

exporters:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
  prometheus:
    endpoint: "localhost:8889"
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  spanmetrics:
    metrics_exporter: prometheus
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
    # Additional list of dimensions on top of:
    # - service.name
    # - operation
    # - span.kind
    # - status.code
    dimensions:
      # If the span is missing http.method, the processor will insert
      # the http.method dimension with value 'GET'.
      # For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
      # - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.method
        default: GET
      # If a default is not provided, the http.status_code dimension will be omitted
      # if the span does not contain http.status_code.
      # For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
      # - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      # - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.status_code

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics,tail_sampling,batch]
      exporters: [logzio/traces]
    metrics/spanmetrics:
      # This receiver is just a dummy and never used.
      # Added to pass validation requiring at least one receiver in a pipeline.
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    metrics:
      receivers: [prometheus]
      exporters: [logging,prometheusremotewrite/spm]
  telemetry:
    logs:
      level: "debug"

Start the collector

Run the following command:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
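Once the collector is running, you can optionally verify it is up using the health_check extension enabled in the configuration above; with no endpoint set, the extension is assumed to listen on its standard default port 13133:

curl http://localhost:13133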

Run the application

Run the application to generate traces.
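If your application uses a standard OpenTelemetry SDK, one common way to point it at the local collector is through the standard OTLP environment variables; the service name below is illustrative:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_SERVICE_NAME="shippingservice"   # illustrative service name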

Check Logz.io for your metrics

Give your metrics some time to get from your system to ours, and then open Tracing. Navigate to the Monitor tab to view the span metrics.

Set up your OpenTelemetry installation using containerized collector to send spans and span metrics to Logz.io

Before you begin, you'll need:

  • An application instrumented with OpenTelemetry, or any other supported instrumentation based on OpenTracing, Zipkin, or Jaeger
  • Service Performance Monitoring dashboard activated
  • An active account with Logz.io
  • A Logz.io span metrics account
note

The span metrics account name should include your tracing account name. For example, if your tracing account name is "tracing", your metrics account could be named "tracing-metrics".

Pull the Docker image for the OpenTelemetry collector

note

If you are already running a Logz.io Docker image logzio/otel-collector-traces, the new image logzio/otel-collector-spm will replace it.

In the same Docker network as your application:

docker pull otel/opentelemetry-collector-contrib:0.73.0
note

This integration only works with a contrib image.

Create a configuration file

Create a file config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: :12345
  prometheus:
    config:
      global:
        external_labels:
          p8s_logzio_name: spm-otel
      scrape_configs:
        - job_name: 'spm'
          scrape_interval: 15s
          static_configs:
            - targets: [ "0.0.0.0:8889" ]

exporters:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
  prometheus:
    endpoint: "localhost:8889"
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  spanmetrics:
    metrics_exporter: prometheus
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
    # Additional list of dimensions on top of:
    # - service.name
    # - operation
    # - span.kind
    # - status.code
    dimensions:
      # If the span is missing http.method, the processor will insert
      # the http.method dimension with value 'GET'.
      # For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
      # - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.method
        default: GET
      # If a default is not provided, the http.status_code dimension will be omitted
      # if the span does not contain http.status_code.
      # For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
      # - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      # - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.status_code

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics,tail_sampling,batch]
      exporters: [logzio/traces]
    metrics/spanmetrics:
      # This receiver is just a dummy and never used.
      # Added to pass validation requiring at least one receiver in a pipeline.
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    metrics:
      receivers: [prometheus]
      exporters: [logging,prometheusremotewrite/spm]
  telemetry:
    logs:
      level: "debug"


Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.

Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071.

The tail_sampling processor defines whether to sample a trace after all the spans in a request have been completed. By default, this configuration collects all traces that have a span that was completed with an error, all traces that are slower than 1000 ms, and 10% of the rest of the traces.

You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

Parameter | Description | Default
threshold_ms | Threshold for the span latency - all traces slower than the threshold value will be filtered in. | 1000
sampling_percentage | Sampling percentage for the probabilistic policy. | 10
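You can also add policy types beyond the defaults. As an illustrative sketch, the snippet below appends a rate_limiting policy that caps the number of sampled spans per second; verify the policy type and its fields against the tail sampling processor documentation for your collector version before using it:

  tail_sampling:
    policies:
      [
        # ... the default policies shown above ...
        {
          name: policy-rate-limit,
          type: rate_limiting,
          rate_limiting: {spans_per_second: 100}
        }
      ]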

If you already have an OpenTelemetry installation, add the parameters described in the next steps to the configuration file of your existing OpenTelemetry collector.

Add Logz.io exporter to your OpenTelemetry collector

Add the following parameters to the configuration file of your OpenTelemetry collector:

  • Under the receivers list:
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: :12345
  prometheus:
    config:
      global:
        external_labels:
          p8s_logzio_name: spm-otel
      scrape_configs:
        - job_name: 'spm'
          scrape_interval: 15s
          static_configs:
            - targets: [ "0.0.0.0:8889" ]
  • Under the exporters list:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
  prometheus:
    endpoint: "localhost:8889"
  • Under the processors list:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  spanmetrics:
    metrics_exporter: prometheus
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
    # Additional list of dimensions on top of:
    # - service.name
    # - operation
    # - span.kind
    # - status.code
    dimensions:
      # If the span is missing http.method, the processor will insert
      # the http.method dimension with value 'GET'.
      # For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
      # - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.method
        default: GET
      # If a default is not provided, the http.status_code dimension will be omitted
      # if the span does not contain http.status_code.
      # For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
      # - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      # - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.status_code
  • Under the service: pipelines list:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics,tail_sampling,batch]
      exporters: [logzio/traces]
    metrics/spanmetrics:
      # This receiver is just a dummy and never used.
      # Added to pass validation requiring at least one receiver in a pipeline.
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    metrics:
      receivers: [prometheus]
      exporters: [logging,prometheusremotewrite/spm]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.

Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071.

An example configuration file looks as follows:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: :12345
  prometheus:
    config:
      global:
        external_labels:
          p8s_logzio_name: spm-otel
      scrape_configs:
        - job_name: 'spm'
          scrape_interval: 15s
          static_configs:
            - targets: [ "0.0.0.0:8889" ]

exporters:
  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<SPM-METRICS-SHIPPING-TOKEN>>
  prometheus:
    endpoint: "localhost:8889"
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  spanmetrics:
    metrics_exporter: prometheus
    latency_histogram_buckets: [2ms, 6ms, 10ms, 100ms, 250ms, 500ms, 1000ms, 10000ms, 100000ms, 1000000ms]
    # Additional list of dimensions on top of:
    # - service.name
    # - operation
    # - span.kind
    # - status.code
    dimensions:
      # If the span is missing http.method, the processor will insert
      # the http.method dimension with value 'GET'.
      # For example, in the following scenario, http.method is not present in a span and so will be added as a dimension to the metric with value "GET":
      # - promexample_calls{http_method="GET",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.method
        default: GET
      # If a default is not provided, the http.status_code dimension will be omitted
      # if the span does not contain http.status_code.
      # For example, consider a scenario with two spans, one span having http.status_code=200 and another missing http.status_code. Two metrics would result with this configuration, one with the http_status_code omitted and the other included:
      # - promexample_calls{http_status_code="200",operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      # - promexample_calls{operation="/Address",service_name="shippingservice",span_kind="SPAN_KIND_SERVER",status_code="STATUS_CODE_UNSET"} 1
      - name: http.status_code

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics,tail_sampling,batch]
      exporters: [logzio/traces]
    metrics/spanmetrics:
      # This receiver is just a dummy and never used.
      # Added to pass validation requiring at least one receiver in a pipeline.
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    metrics:
      receivers: [prometheus]
      exporters: [logging,prometheusremotewrite/spm]
  telemetry:
    logs:
      level: "debug"

Run the container

Mount config.yaml as a volume in the docker run command and run it as follows.

Linux

docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.73.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Windows

docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.73.0

Optional parameters

If required, you can add the following optional parameters as environment variables when running the container:

Parameter | Description
LATENCY_HISTOGRAM_BUCKETS | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s
SPAN_METRICS_DIMENSIONS | Each metric will have at least the following dimensions, which are common across all spans: service name, operation, span kind, status code. The input is a comma-separated list of dimensions to add on top of the default dimensions, for example: region,http.url. Each additional dimension is defined by a name from the span's collection of attributes or resource attributes. If the named attribute is missing in the span, this dimension will be omitted from the metric.
SPAN_METRICS_DIMENSIONS_CACHE_SIZE | The maximum number of items in metric_key_to_dimensions_cache. Default: 10000.
AGGREGATION_TEMPORALITY | Defines the aggregation temporality of the generated metrics. One of cumulative or delta. Default: cumulative.
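For example, assuming the image you run honors these variables, they can be passed with -e flags on the docker run command (the values below are illustrative):

docker run \
--network host \
-e LATENCY_HISTOGRAM_BUCKETS="2ms,8ms,50ms,100ms,200ms,500ms,1s,5s,10s" \
-e SPAN_METRICS_DIMENSIONS="region,http.url" \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.73.0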

Run the application

note

Normally, when you run the OTEL collector in a Docker container, your application will run in separate containers on the same host. In this case, you need to make sure that all your application containers share the same network as the OTEL collector container. One way to achieve this is to run all containers, including the OTEL collector, with a Docker Compose configuration. Docker Compose automatically makes sure that all containers in the same configuration share the same network.
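For example, a minimal docker-compose sketch might look as follows; the application image name is a placeholder, and it assumes your application sends OTLP data to the otel-collector service name:

version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.73.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
  my-app:
    # Placeholder for your instrumented application image
    image: my-instrumented-app:latest
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317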

Run the application to generate traces.

Check Logz.io for your metrics

Give your metrics some time to get from your system to ours, and then open Tracing. Navigate to the Monitor tab to view the span metrics.

Kubernetes

Overview

You can use a Helm chart to ship metrics and span metrics from your OpenTelemetry installation to Logz.io. Helm is a tool for managing packages of preconfigured Kubernetes resources, known as charts.

This Helm chart monitors the following metrics:

  • latency_bucket
  • latency_sum
  • latency_count
  • calls_total

Before you begin, you'll need:

  • An application instrumented with OpenTelemetry, or any other supported instrumentation based on OpenTracing, Zipkin, or Jaeger
  • Service Performance Monitoring dashboard activated
  • An active account with Logz.io
Deploy the Helm chart

Add logzio-helm repo as follows:

helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Default configuration (except AWS Fargate)
helm install  -n monitoring --create-namespace \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
--set logzio-k8s-telemetry.secrets.MetricsToken="<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.ListenerHost="https://<<LISTENER-HOST>>:8053" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<<ENV-ID>>" \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<<TRACING-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<<LOGZIO-REGION>>" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.env_id="<<ENV-ID>>" \
--set logzio-k8s-telemetry.secrets.SpmToken="<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.serviceGraph.enabled=true \
--set deployEvents.enabled=true \
--set logzio-k8s-events.secrets.logzioShippingToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-k8s-events.secrets.logzioListener="<<LISTENER-HOST>>" \
--set logzio-k8s-events.secrets.env_id="<<ENV-ID>>" \
logzio-monitoring logzio-helm/logzio-monitoring
Parameter | Description
<<LOG-SHIPPING-TOKEN>> | Your logs shipping token.
<<LISTENER-HOST>> | Your account's listener host.
<<PROMETHEUS-METRICS-SHIPPING-TOKEN>> | Your metrics shipping token.
<<P8S-LOGZIO-NAME>> | The name for the environment's metrics, to easily identify the metrics for each environment.
<<ENV-ID>> | The name for your environment's identifier, to easily identify the telemetry data for each environment.
<<TRACES-SHIPPING-TOKEN>> | Your traces shipping token.
<<SPM-SHIPPING-TOKEN>> | Your span metrics shipping token.
<<LOGZIO-REGION>> | The name of your Logz.io traces region, e.g., us, eu.
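After the installation completes, you can optionally confirm that the chart's pods are running before looking for data in Logz.io:

kubectl get pods -n monitoring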
AWS Fargate configuration

To ship logs from pods running on Fargate, set the fargateLogRouter.enabled value to true. Doing so will deploy a dedicated aws-observability namespace and a configmap for the Fargate log router. For more information on EKS Fargate logging, please refer to the official AWS documentation.

helm install -n monitoring \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
--set logzio-k8s-telemetry.secrets.MetricsToken="<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.ListenerHost="https://<<LISTENER-HOST>>:8053" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<<CLUSTER-NAME>>" \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="<<TRACING-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="<<LOGZIO_ACCOUNT_REGION_CODE>>" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.env_id="<<ENV-ID>>" \
--set logzio-k8s-telemetry.secrets.SpmToken="<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.serviceGraph.enabled=true \
--set deployEvents.enabled=true \
--set logzio-k8s-events.secrets.logzioShippingToken="<<LOG-SHIPPING-TOKEN>>" \
--set logzio-k8s-events.secrets.logzioListener="<<LISTENER-HOST>>" \
--set logzio-k8s-events.secrets.env_id="<<ENV-ID>>" \
logzio-monitoring logzio-helm/logzio-monitoring
Parameter | Description
<<LOG-SHIPPING-TOKEN>> | Your logs shipping token.
<<LISTENER-HOST>> | Your account's listener host.
<<PROMETHEUS-METRICS-SHIPPING-TOKEN>> | Your metrics shipping token.
<<P8S-LOGZIO-NAME>> | The name for the environment's metrics, to easily identify the metrics for each environment.
<<ENV-ID>> | The name for your environment's identifier, to easily identify the telemetry data for each environment.
<<TRACES-SHIPPING-TOKEN>> | Your traces shipping token.
<<SPM-SHIPPING-TOKEN>> | Your span metrics shipping token.
<<LOGZIO-REGION>> | The name of your Logz.io traces region, e.g., us, eu.
Check Logz.io for your traces

Give your traces some time to get from your system to ours, then open Logz.io.

Customizing Helm chart parameters

Configure customization options

You can use the following options to update the Helm chart parameters:

  • Specify parameters using the --set key=value[,key=value] argument to helm install.

  • Edit the values.yaml.

  • Override default values with your own my_values.yaml and apply it in the helm install command.

If required, you can override the following optional parameters:

Parameter | Description
config.processors.spanmetrics.latency_histogram_buckets | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s
config.processors.spanmetrics.dimensions | Each metric will have at least the following dimensions, which are common across all spans: service name, operation, span kind, status code. The input is a comma-separated list of dimensions to add on top of the default dimensions, for example: region,http.url. Each additional dimension is defined by a name from the span's collection of attributes or resource attributes. If the named attribute is missing in the span, this dimension will be omitted from the metric.
config.processors.spanmetrics.dimensions_cache_size | The maximum number of items in metric_key_to_dimensions_cache. Default: 10000.
config.processors.spanmetrics.aggregation_temporality | Defines the aggregation temporality of the generated metrics. One of cumulative or delta. Default: cumulative.
secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. Default: 500.
secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default: 10.
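As a hedged example, the sampling parameters above could be set with --set flags at install time; whether these paths need a sub-chart prefix depends on which chart you install (logzio-monitoring versus logzio-k8s-telemetry directly):

helm install \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set logzio.metrics_token=<<SPM-METRICS-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set spm.enabled=true \
--set secrets.SamplingLatency=500 \
--set secrets.SamplingProbability=20 \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry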
Example

You can run the chart with your custom configuration file that takes precedence over the values.yaml of the chart.

For example:

note

With this example configuration, the collector will sample ALL traces that contain at least one span with an error.

baseCollectorConfig:
  processors:
    tail_sampling:
      policies:
        [
          {
            name: error-in-policy,
            type: status_code,
            status_code: {status_codes: [ERROR]}
          },
          {
            name: slow-traces-policy,
            type: latency,
            latency: {threshold_ms: 400}
          },
          {
            name: health-traces,
            type: and,
            and: {
              and_sub_policy:
                [
                  {
                    name: ping-operation,
                    type: string_attribute,
                    string_attribute: { key: http.url, values: [ /health ] }
                  },
                  {
                    name: main-service,
                    type: string_attribute,
                    string_attribute: { key: service.name, values: [ main-service ] }
                  },
                  {
                    name: probability-policy-1,
                    type: probabilistic,
                    probabilistic: {sampling_percentage: 1}
                  }
                ]
            }
          },
          {
            name: probability-policy,
            type: probabilistic,
            probabilistic: {sampling_percentage: 20}
          }
        ]

helm install -f <PATH-TO>/my_values.yaml \
--set logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio.tracing_token=<<TRACING-SHIPPING-TOKEN>> \
--set logzio.metrics_token=<<SPM-METRICS-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set spm.enabled=true \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry

Replace <PATH-TO> with the path to your custom values.yaml file.

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Replace <<SPM-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account that is dedicated to your Service Performance Monitoring feature.

Uninstalling the Chart

The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.

To uninstall the logzio-k8s-telemetry deployment, use the following command:

helm uninstall logzio-k8s-telemetry