App360
App360 is a high-level monitoring dashboard within Logz.io that enables you to monitor your operations. This integration allows you to configure the OpenTelemetry collector to send data from your OpenTelemetry installation to Logz.io using App360.
Architecture overview
This integration is based on OpenTelemetry. It includes configuring the OpenTelemetry collector to receive data generated by your application instrumentation and send it to Logz.io using App360.
On deployment, your OpenTelemetry instrumentation captures data from your application and forwards it to the collector, which exports the spans and metrics data to your Logz.io account using App360.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Set up your locally hosted OpenTelemetry
Before you begin, you'll need:
- An application instrumented with OpenTelemetry, or with another supported instrumentation based on OpenTracing, Zipkin, or Jaeger
- An active Logz.io account
- A Logz.io metrics account
Configure OpenTelemetry collector
Get the collector
You can either download the OpenTelemetry collector to your local host or run the collector as a Docker container.
Download locally
Create a dedicated directory on the host of your application and download the OpenTelemetry collector that is relevant to the operating system of your host.
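For example, on a Linux amd64 host you could fetch and unpack the collector from the project's GitHub releases page. The directory and asset name below are illustrative; pick the build that matches your operating system and architecture:

mkdir otel-collector && cd otel-collector
# Asset name is illustrative - check the opentelemetry-collector-releases page for the file matching your OS/architecture
curl -LO https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.105.0/otelcol-contrib_0.105.0_linux_amd64.tar.gz
tar -xzf otelcol-contrib_0.105.0_linux_amd64.tar.gz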
Run as a Docker container
In the same Docker network as your application:
docker pull otel/opentelemetry-collector-contrib:0.105.0
This integration works only with the contrib image.
Create the config file
After setting up the collector, create a configuration file config.yaml with the following parameters:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

connectors:
  spanmetrics:
    aggregation_temporality: AGGREGATION_TEMPORALITY_CUMULATIVE
    dimensions:
      - name: rpc.grpc.status_code
      - name: http.method
      - name: http.status_code
      - name: cloud.provider
      - name: cloud.region
      - name: db.system
      - name: messaging.system
      - name: env_id
        default: DEV
    dimensions_cache_size: 100000
    histogram:
      explicit:
        buckets:
          - 2ms
          - 8ms
          - 50ms
          - 100ms
          - 200ms
          - 500ms
          - 1s
          - 5s
          - 10s
    metrics_expiration: 5m
    resource_metrics_key_attributes:
      - service.name
      - telemetry.sdk.language
      - telemetry.sdk.name

exporters:
  logzio/traces:
    account_token: <<TRACING_SHIPPING_TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  prometheusremotewrite/spm:
    endpoint: https://<<LISTENER>>:8053
    add_metric_suffixes: false
    headers:
      Authorization: Bearer <<SPM_SHIPPING_TOKEN>>

processors:
  batch:
  tail_sampling:
    policies:
      - name: policy-errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      - name: policy-slow
        type: latency
        latency: {threshold_ms: 1000}
      - name: policy-random-ok
        type: probabilistic
        probabilistic: {sampling_percentage: 10}
  metricstransform/metrics-rename:
    transforms:
      - include: ^duration.*$$
        match_type: regexp
        action: update
        new_name: latency
      - include: calls
        action: update
        new_name: calls_total
  metricstransform/labels-rename:
    transforms:
      - include: ^latency
        action: update
        match_type: regexp
        operations:
          - action: update_label
            label: span.name
            new_label: operation
      - include: ^calls
        action: update
        match_type: regexp
        operations:
          - action: update_label
            label: span.name
            new_label: operation

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces, spanmetrics]
    metrics/spanmetrics:
      receivers: [spanmetrics]
      processors: [metricstransform/metrics-rename, metricstransform/labels-rename]
      exporters: [prometheusremotewrite/spm]
  telemetry:
    logs:
      level: "debug"
Replace <<TRACING_SHIPPING_TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Replace <<SPM_SHIPPING_TOKEN>> with a token for the Metrics account dedicated to your Service Performance Monitoring feature.
Replace <<LISTENER>> with the listener host for your region. The required port depends on whether you use HTTP or HTTPS: HTTP = 8070, HTTPS = 8071.
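For example, assuming an account hosted in the US region (region code us, listener host listener.logz.io), the exporters section would look like the following after replacement, with your actual tokens in place of the remaining placeholders:

exporters:
  logzio/traces:
    account_token: <<TRACING_SHIPPING_TOKEN>>
    region: us
  prometheusremotewrite/spm:
    endpoint: https://listener.logz.io:8053
    add_metric_suffixes: false
    headers:
      Authorization: Bearer <<SPM_SHIPPING_TOKEN>>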
By default, this configuration collects all traces that contain a span that completed with an error, all traces slower than 1000 ms, and 10% of the remaining traces.
You can add more policy configurations to the processor. For details, refer to the OpenTelemetry documentation.
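For instance, a string_attribute policy could keep every trace whose env_id attribute equals PROD. The policy name, attribute, and value below are illustrative; see the tail_sampling processor documentation for all supported policy types:

  tail_sampling:
    policies:
      # ... existing policies ...
      - name: policy-env
        type: string_attribute
        string_attribute: {key: env_id, values: [PROD]}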
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default
---|---|---
threshold_ms | Threshold for span latency. All traces slower than this value are kept. | 1000
sampling_percentage | Sampling percentage for the probabilistic policy. | 10
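For example, to keep traces slower than 500 ms and sample 20% of the remaining traffic, adjust the two policies accordingly (the values here are illustrative):

      - name: policy-slow
        type: latency
        latency: {threshold_ms: 500}
      - name: policy-random-ok
        type: probabilistic
        probabilistic: {sampling_percentage: 20}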
If you already have an OpenTelemetry installation, add the parameters described in these steps to the configuration file of your existing OpenTelemetry collector.
Start the collector
Locally running collector
Run the following command:
<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
- Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
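To verify that the collector is running, you can query the health_check extension enabled in the configuration above. Assuming its default port 13133, a 200 response indicates the collector is healthy:

curl -i http://localhost:13133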
Containerized collector
Mount the config.yaml file as a volume in the docker run command and run it as follows.
Linux
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.105.0
Replace <PATH-TO> with the path to the config.yaml file on your system.
Windows
docker run \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.105.0
Optional parameters
If required, you can add the following optional parameters as environment variables when running the container:
Parameter | Description
---|---
LATENCY_HISTOGRAM_BUCKETS | Comma-separated list of durations defining the latency histogram buckets. Default: 2ms, 8ms, 50ms, 100ms, 200ms, 500ms, 1s, 5s, 10s.
SPAN_METRICS_DIMENSIONS | Each metric has at least the following dimensions, which are common to all spans: Service name, Operation, Span kind, Status code. The input is a comma-separated list of dimensions to add on top of the defaults, for example: region,http.url. Each additional dimension is defined by a name from the span's attributes or resource attributes. If the named attribute is missing from a span, the dimension is omitted from the metric.
SPAN_METRICS_DIMENSIONS_CACHE_SIZE | The maximum number of items in the metric_key_to_dimensions_cache. Default: 10000.
AGGREGATION_TEMPORALITY | Defines the aggregation temporality of the generated metrics: cumulative or delta. Default: cumulative.
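As a sketch, these parameters can be passed with the -e flag of docker run. The values below are illustrative, and they take effect only if your collector image or configuration reads them:

docker run \
  --network host \
  -e AGGREGATION_TEMPORALITY=delta \
  -e SPAN_METRICS_DIMENSIONS=region,http.url \
  -v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
  otel/opentelemetry-collector-contrib:0.105.0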
Run the application
When running the OTel collector in a Docker container, your application should run in separate containers on the same network. Ensure all containers share that network; using Docker Compose does this automatically, placing all containers, including the OTel collector, on the same network.
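A minimal docker-compose.yaml sketch might look like the following; the application service name, image, and the OTLP endpoint variable are illustrative and depend on how your application is instrumented:

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.105.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
  my-app:
    image: my-app:latest   # illustrative - replace with your instrumented application image
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318

Because Compose places both services on the same default network, the application can reach the collector at otel-collector:4317 (gRPC) or otel-collector:4318 (HTTP).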
Run the application to generate traces.
Check Logz.io for your data
Give your data some time to get from your system to ours, and then open App360.
Kubernetes
Note that running the Telemetry Collector on Kubernetes (k8s) is a separate use case, covered in detail in the dedicated Telemetry Collector for Kubernetes section.