
Python

Logs

The Logz.io Python Handler sends logs in bulk over HTTPS to Logz.io, grouping them based on size. If the main thread quits, the handler attempts to send any remaining logs before exiting. If unsuccessful, the logs are saved to the local file system for later retrieval.

Setup Logz.io Python Handler

Supported versions: Python 3.5 or newer.

Install dependency

Navigate to your project's folder and run:

pip install logzio-python-handler

For Trace context, install the OpenTelemetry logging instrumentation dependency by running:

pip install logzio-python-handler[opentelemetry-logging]

Configure Python Handler for a standard project

Replace the placeholders with your details. You must configure these parameters in this exact order; for example, you cannot set the debug flag without also configuring all of the preceding parameters.

| Parameter | Description | Required/Default |
|---|---|---|
| <<LOG-SHIPPING-TOKEN>> | Your Logz.io account log shipping token. | Required |
| <<LOG-TYPE>> | Log type, for searching in Logz.io. | python |
| <<TIMEOUT>> | Time to sleep between draining attempts, in seconds. | 3 |
| <<LISTENER-HOST>> | Logz.io listener host, as described here. | https://listener.logz.io:8071 |
| <<DEBUG-FLAG>> | Debug flag. If set to True, prints debug messages to stdout. | False |
| <<BACKUP-LOGS>> | If set to False, disables the local backup of logs in case of failure. | True |
| <<NETWORK-TIMEOUT>> | Network timeout, in seconds (int or float), for sending the logs to Logz.io. | 10 |
| <<RETRY-LIMIT>> | Number of retries. | 4 |
| <<RETRY-TIMEOUT>> | Retry timeout (retry_timeout), in seconds. | 2 |
[handlers]
keys=LogzioHandler

[handler_LogzioHandler]
class=logzio.handler.LogzioHandler
formatter=logzioFormat

args=('<<LOG-SHIPPING-TOKEN>>', '<<LOG-TYPE>>', <<TIMEOUT>>, 'https://<<LISTENER-HOST>>:8071', <<DEBUG-FLAG>>, <<BACKUP-LOGS>>, <<NETWORK-TIMEOUT>>, <<RETRY-LIMIT>>, <<RETRY-TIMEOUT>>)

[formatters]
keys=logzioFormat

[loggers]
keys=root

[logger_root]
handlers=LogzioHandler
level=INFO

[formatter_logzioFormat]
format={"additional_field": "value"}
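
To load this file-based configuration, point the standard library's logging.config.fileConfig at it. A minimal sketch (the filename below is just an example):

import logging
import logging.config

# Load the handler configuration saved above; 'logzio.conf' is an example path.
logging.config.fileConfig('logzio.conf', disable_existing_loggers=False)

logger = logging.getLogger(__name__)
logger.info('Shipping this log to Logz.io via the file-based configuration')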

Dictionary configuration:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'logzioFormat': {
            'format': '{"additional_field": "value"}',
            'validate': False
        }
    },
    'handlers': {
        'logzio': {
            'class': 'logzio.handler.LogzioHandler',
            'level': 'INFO',
            'formatter': 'logzioFormat',
            'token': '<<LOG-SHIPPING-TOKEN>>',
            'logzio_type': '<<LOG-TYPE>>',
            'logs_drain_timeout': 5,
            'url': 'https://<<LISTENER-HOST>>:8071',
            'retries_no': 4,
            'retry_timeout': 2,
        }
    },
    'loggers': {
        '': {
            'level': 'DEBUG',
            'handlers': ['logzio'],
            'propagate': True
        }
    }
}
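
To apply the dictionary, pass it to logging.config.dictConfig and log through any configured logger. A minimal sketch, assuming the LOGGING dictionary above:

import logging
import logging.config

logging.config.dictConfig(LOGGING)  # LOGGING is the dictionary shown above

logger = logging.getLogger(__name__)
logger.info('Shipped to Logz.io through the logzio handler')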

Django configuration

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'logzioFormat': {
            'format': '{"additional_field": "value"}',
            'validate': False
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'verbose'
        },
        'logzio': {
            'class': 'logzio.handler.LogzioHandler',
            'level': 'INFO',
            'formatter': 'logzioFormat',
            'token': '<<LOG-SHIPPING-TOKEN>>',
            'logzio_type': 'django',
            'logs_drain_timeout': 5,
            'url': 'https://<<LISTENER-HOST>>:8071',
            'debug': True,
            'network_timeout': 10,
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO')
        },
        '': {
            'handlers': ['console', 'logzio'],
            'level': 'INFO'
        }
    }
}
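
The LOGGING dictionary above belongs in your Django settings module; because it calls os.getenv, that module also needs import os. A minimal, hypothetical view showing how application code then logs through it:

# views.py - hypothetical example module
import logging

from django.http import JsonResponse

logger = logging.getLogger(__name__)

def health(request):
    # Routed to both the console and logzio handlers via the root ('') logger.
    logger.info('Health check hit', extra={'client': request.META.get('REMOTE_ADDR')})
    return JsonResponse({'status': 'ok'})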

Serverless platforms

When using a serverless function, import and add the LogzioFlusher annotation before your sender function. In the code example below, uncomment the import and the @LogzioFlusher(logger) annotation lines. Next, ensure the Logz.io handler is added to the root logger.

Be sure to replace superAwesomeLogzioLogger with the name of your logger.

'loggers': {
    'superAwesomeLogzioLogger': {
        'level': 'DEBUG',
        'handlers': ['logzio'],
        'propagate': True
    }
}

For example:

import logging
import logging.config
# from logzio.flusher import LogzioFlusher
# from logzio.handler import ExtraFieldsLogFilter

# Assumes the configuration is saved as a dictionary in a variable named
# 'LOGGING' - see the dictionary configuration sample above.
logging.config.dictConfig(LOGGING)
logger = logging.getLogger('superAwesomeLogzioLogger')

# @LogzioFlusher(logger)
def my_func():
    logger.info('Test log')
    logger.warning('Warning')

    try:
        1/0
    except:
        logger.exception("Supporting exceptions too!")

Dynamic extra fields

You can dynamically add extra fields to your logs without predefining them in the configuration. This allows each log to have unique extra fields.


logger.info("Test log")

extra_fields = {"foo":"bar","counter":1}
logger.addFilter(ExtraFieldsLogFilter(extra_fields))
logger.warning("Warning test log")

error_fields = {"err_msg":"Failed to run due to exception.","status_code":500}
logger.addFilter(ExtraFieldsLogFilter(error_fields))
logger.error("Error test log")

# To remove a filter from future logs, use logger.removeFilter:
logger.removeFilter(ExtraFieldsLogFilter(error_fields))
logger.debug("Debug test log")

To add dynamic metadata to a specific log rather than to the logger, use the "extra" parameter. All key-value pairs in the dictionary passed to "extra" will appear as new fields in Logz.io. Note that you cannot override default fields set by the Python logger (e.g., lineno, thread).

For example:

logger.info('Warning', extra={'extra_key':'extra_value'})

Trace context

You can correlate your logs with the trace context by installing the OpenTelemetry logging instrumentation dependency:

pip install logzio-python-handler[opentelemetry-logging]

Enable this feature by setting the add_context parameter to True in your handler configuration:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'logzioFormat': {
            'format': '{"additional_field": "value"}',
            'validate': False
        }
    },
    'handlers': {
        'logzio': {
            'class': 'logzio.handler.LogzioHandler',
            'level': 'INFO',
            'formatter': 'logzioFormat',
            'token': '<<LOG-SHIPPING-TOKEN>>',
            'logzio_type': '<<LOG-TYPE>>',
            'logs_drain_timeout': 5,
            'url': 'https://<<LISTENER-HOST>>:8071',
            'retries_no': 4,
            'retry_timeout': 2,
            'add_context': True
        }
    },
    'loggers': {
        '': {
            'level': 'DEBUG',
            'handlers': ['logzio'],
            'propagate': True
        }
    }
}
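
With add_context enabled, records logged inside an active OpenTelemetry span are correlated with that span's trace context. A minimal sketch, assuming the OpenTelemetry SDK is installed and a tracer provider is configured alongside the handler:

import logging
import logging.config

from opentelemetry import trace

logging.config.dictConfig(LOGGING)  # the dictionary shown above
logger = logging.getLogger(__name__)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span('checkout'):  # hypothetical span name
    # This record is shipped together with the surrounding span's trace context.
    logger.info('Processing order')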

Truncating logs

To create a Python logging filter that truncates log messages to a specific number of characters before processing, use the following code:

class TruncationLoggerFilter(logging.Filter):
    def __init__(self):
        super(TruncationLoggerFilter, self).__init__()

    def filter(self, record):
        record.msg = record.msg[:32700]
        return True

logger = logging.getLogger("logzio")
logger.addFilter(TruncationLoggerFilter())

The default limit is 32,700, but you can adjust this value as required.
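
If you need a different limit, or different limits for different loggers, a slightly more general version of the same filter (a sketch, not part of the handler package) can take the maximum length as a constructor argument:

import logging

class ConfigurableTruncationFilter(logging.Filter):
    """Truncate string log messages to a configurable number of characters."""

    def __init__(self, max_length=32700):
        super().__init__()
        self.max_length = max_length

    def filter(self, record):
        if isinstance(record.msg, str):
            record.msg = record.msg[:self.max_length]
        return True

logger = logging.getLogger("logzio")
logger.addFilter(ConfigurableTruncationFilter(max_length=10000))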

Metrics

Send custom metrics to Logz.io from your Python application. This example uses OpenTelemetry Python SDK and the OpenTelemetry remote write exporter.

Code configuration setup

1. Install the Snappy C library

DEB: sudo apt-get install libsnappy-dev

RPM: sudo yum install libsnappy-devel

OSX/Brew: brew install snappy

Windows: pip install python_snappy-0.5-cp36-cp36m-win_amd64.whl

2. Install the exporter and OpenTelemetry SDK

pip install opentelemetry-exporter-prometheus-remote-write 

3. Add instruments to your application

Replace the placeholders in the exporter section to match your specifics.

| Parameter | Description |
|---|---|
| <<LISTENER-HOST>> | The Logz.io listener host for your region (for example, listener.logz.io). The exporter endpoint uses port 8052 for HTTP traffic or port 8053 for HTTPS traffic, with the protocol included in the endpoint (for example, https://listener.logz.io:8053). |
| <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> | Your Logz.io Prometheus metrics account token. Replace it with a token for the metrics account you want to ship to. Look up your Metrics token. |
from time import sleep
from typing import Iterable

from opentelemetry.exporter.prometheus_remote_write import (
    PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.metrics import (
    CallbackOptions,
    Observation,
    get_meter_provider,
    set_meter_provider,
)
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# configure the Logz.io listener endpoint and Prometheus metrics account token
exporter = PrometheusRemoteWriteMetricsExporter(
    endpoint="https://<<LISTENER-HOST>>:8053",
    headers={
        "Authorization": "Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>",
    },
)

reader = PeriodicExportingMetricReader(exporter)
provider = MeterProvider(metric_readers=[reader])
set_meter_provider(provider)


def observable_counter_func(options: CallbackOptions) -> Iterable[Observation]:
    yield Observation(1, {})


def observable_up_down_counter_func(
    options: CallbackOptions,
) -> Iterable[Observation]:
    yield Observation(-10, {})


def observable_gauge_func(options: CallbackOptions) -> Iterable[Observation]:
    yield Observation(9, {})


meter = get_meter_provider().get_meter("getting-started", "0.1.2")

# Counter
counter = meter.create_counter("counter")
counter.add(1)

# Async Counter
observable_counter = meter.create_observable_counter(
    "observable_counter",
    [observable_counter_func],
)

# UpDownCounter
updown_counter = meter.create_up_down_counter("updown_counter")
updown_counter.add(1)
updown_counter.add(-5)

# Async UpDownCounter
observable_updown_counter = meter.create_observable_up_down_counter(
    "observable_updown_counter", [observable_up_down_counter_func]
)

# Histogram
histogram = meter.create_histogram("histogram")
histogram.record(99.9)

# Async Gauge
gauge = meter.create_observable_gauge("gauge", [observable_gauge_func])
sleep(6)

Types of metric instruments

See OpenTelemetry documentation for more details.

| Name | Behavior | Default aggregation |
|---|---|---|
| Counter | Metric value can only go up or be reset to 0, calculated per counter.add(value, labels) request. | Sum |
| UpDownCounter | Metric value can arbitrarily increment or decrement, calculated per updowncounter.add(value, labels) request. | Sum |
| ValueRecorder | Metric values captured by the valuerecorder.record(value) function, calculated per request. | TBD |
| SumObserver | Metric value can only go up or be reset to 0, calculated per push interval. | Sum |
| UpDownSumObserver | Metric value can arbitrarily increment or decrement, calculated per push interval. | Sum |
| ValueObserver | Metric values captured by the valueobserver.observe(value) function, calculated per push interval. | LastValue |
Counter

# create a counter instrument
counter = meter.create_counter(
    name="MyCounter",
    description="Description of MyCounter",
    unit="1",
    value_type=int
)
# add labels
labels = {
    "dimension": "value"
}
# provide the first data point
counter.add(25, labels)

UpDownCounter

# create an updowncounter instrument
requests_active = meter.create_updowncounter(
    name="requests_active",
    description="number of active requests",
    unit="1",
    value_type=int,
)
# add labels
labels = {
    "dimension": "value"
}
# provide the first data point
requests_active.add(-2, labels)

ValueRecorder

# create a valuerecorder instrument
requests_size = meter.create_valuerecorder(
    name="requests_size",
    description="size of requests",
    unit="1",
    value_type=int,
)
# add labels
labels = {
    "dimension": "value"
}
# provide the first data point
requests_size.record(85, labels)

SumObserver

import psutil

# Callback to gather RAM usage
def get_ram_usage_callback(observer):
    ram_percent = psutil.virtual_memory().percent
    # add labels
    labels = {
        "dimension": "value"
    }
    observer.observe(ram_percent, labels)

# create a sumobserver instrument
meter.register_sumobserver(
    callback=get_ram_usage_callback,
    name="ram_usage",
    description="ram usage",
    unit="1",
    value_type=float,
)

UpDownSumObserver

import psutil

# Callback to gather RAM usage
def get_ram_usage_callback(observer):
    ram_percent = psutil.virtual_memory().percent
    # add labels
    labels = {
        "dimension": "value"
    }
    observer.observe(ram_percent, labels)

# create an updownsumobserver instrument
meter.register_updownsumobserver(
    callback=get_ram_usage_callback,
    name="ram_usage",
    description="ram usage",
    unit="1",
    value_type=float,
)

ValueObserver

import psutil

def get_cpu_usage_callback(observer):
    for (number, percent) in enumerate(psutil.cpu_percent(percpu=True)):
        labels = {"cpu_number": str(number)}
        observer.observe(percent, labels)

# create a valueobserver instrument
meter.register_valueobserver(
    callback=get_cpu_usage_callback,
    name="cpu_percent",
    description="per-cpu usage",
    unit="1",
    value_type=float,
)

5. Check Logz.io for your metrics

Allow some time for your data to transfer. Then log in to your Logz.io Metrics account and open the Metrics dashboard.

Traces

Deploy this integration to enable automatic instrumentation of your Python application using OpenTelemetry.

Architecture overview

This integration includes:

  • Installing the OpenTelemetry Python instrumentation packages on your application host
  • Installing the OpenTelemetry collector with Logz.io exporter
  • Running your Python application in conjunction with the OpenTelemetry instrumentation

On deployment, the Python instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Local host Python application auto instrumentation

Requirements:

  • A Python application without instrumentation
  • An active Logz.io account
  • Port 4317 available on your host system
  • A name defined for your tracing service
note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Install OpenTelemetry components for Python

pip3 install opentelemetry-distro
pip3 install opentelemetry-instrumentation
opentelemetry-bootstrap --action=install
pip3 install opentelemetry-exporter-otlp

Set environment variables

After installation, configure the exporter with this command:

export OTEL_TRACES_EXPORTER=otlp
export OTEL_RESOURCE_ATTRIBUTES="service.name=<<YOUR-SERVICE-NAME>>"

Download and configure OpenTelemetry collector

Create a directory on the host of your Python application and download the relevant OpenTelemetry collector. Create a config.yaml file with the following parameters:

  • Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
  • Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info

tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

| Parameter | Description | Default |
|---|---|---|
| threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
| sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |

Start the collector

Run:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the collector's directory.
  • Replace <VERSION-NAME> with the version name, e.g. otelcontribcol_darwin_amd64.

Run OpenTelemetry with your Python application

Run this code from the directory of your Python application script:

opentelemetry-instrument python3 <YOUR-APPLICATION-SCRIPT>.py

Replace <YOUR-APPLICATION-SCRIPT> with your Python application script name.
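
If you want to confirm the pipeline before instrumenting your real service, a throwaway script that makes an outbound HTTP call works well, because opentelemetry-bootstrap installs instrumentation for the libraries it finds installed (this sketch assumes the requests package is present):

# test_traces.py - hypothetical script, run with:
#   opentelemetry-instrument python3 test_traces.py
import requests

# The auto-instrumentation wraps this call and exports a client span
# to the local collector listening on port 4317.
response = requests.get('https://example.com')
print(response.status_code)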

Viewing Traces in Logz.io

Give your traces time to process, after which they'll be available in your Tracing dashboard.

Docker Python application auto instrumentation

Auto-instrument your Python application and run a containerized OpenTelemetry collector to send traces to Logz.io. Ensure both application and collector containers share the same network.

Requirements:

  • A Python application without instrumentation
  • An active Logz.io account
  • Port 4317 available on your host system
  • A name defined for your tracing service

Install OpenTelemetry instrumentation components

pip3 install opentelemetry-distro
pip3 install opentelemetry-instrumentation
opentelemetry-bootstrap --action=install
pip3 install opentelemetry-exporter-otlp

Set environment variables

Configure the exporter by running:

export OTEL_TRACES_EXPORTER=otlp
export OTEL_RESOURCE_ATTRIBUTES="service.name=<<YOUR-SERVICE-NAME>>"

Replace <<YOUR-SERVICE-NAME>> with your tracing service name.

Pull OpenTelemetry collector docker image

docker pull otel/opentelemetry-collector-contrib:0.111.0

Create a configuration file

Create a config.yaml file with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

| Parameter | Description | Default |
|---|---|---|
| threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
| sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |

If you already have an OpenTelemetry installation, add these parameters to your existing collector's configuration file:

  • Under the exporters list:

    logzio/traces:
      account_token: <<TRACING-SHIPPING-TOKEN>>
      region: <<LOGZIO_ACCOUNT_REGION_CODE>>

  • Under the service list:

    extensions: [health_check, pprof, zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [tail_sampling, batch]
        exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

An example configuration file:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

| Parameter | Description | Default |
|---|---|---|
| threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
| sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |

Run the container

Mount config.yaml as a volume in the docker run command and run it as follows.

Linux
docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.111.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Windows
docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.111.0

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Run the OpenTelemetry instrumentation in conjunction with your Python application

note

When running the OTEL collector in a Docker container, your application should run in separate containers on the same host network. Ensure all containers share the same network. Using Docker Compose ensures that all containers, including the OTEL collector, share the same network configuration automatically.

Run this code from your Python application script directory:

opentelemetry-instrument python3 <<YOUR-APPLICATION-SCRIPT>>.py

Replace <<YOUR-APPLICATION-SCRIPT>> with your Python application script name.

Viewing Traces in Logz.io

Give your traces time to process, after which they'll be available in your Tracing dashboard.

Kubernetes Python application auto instrumentation

Use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. The Helm tool manages packages of preconfigured Kubernetes resources.

logzio-k8s-telemetry allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.

note

This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Logz.io Helm charts is logzio-helm.

caution

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Standard configuration

1. Deploy the Helm chart

Add logzio-helm repo as follows:

helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update

2. Run the Helm deployment code

helm install  \
--set logzio-k8s-telemetry.secrets.LogzioRegion=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set logzio-k8s-telemetry.secrets.TracesToken=<<TRACING-SHIPPING-TOKEN>> \
logzio-monitoring logzio-helm/logzio-monitoring -n monitoring

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with your Logz.io account region code. Available regions.

3. Define the logzio-k8s-telemetry service DNS

Typically, the service name will be logzio-k8s-telemetry.default.svc.cluster.local, where default is the namespace where you deployed the Helm chart and svc.cluster.local is your cluster domain name. If you're unsure what your cluster domain name is, run the following command to find it:

kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'

This command deploys a pod to extract your cluster domain name; you can remove the pod afterward.

4. Install general Python OpenTelemetry instrumentation components

pip3 install opentelemetry-distro
pip3 install opentelemetry-instrumentation
opentelemetry-bootstrap --action=install
pip3 install opentelemetry-exporter-otlp

5. Set environment variables

Configure the exporter by running the following command:

export OTEL_TRACES_EXPORTER=otlp
export OTEL_RESOURCE_ATTRIBUTES="service.name=<<YOUR-SERVICE-NAME>>"

Replace <<YOUR-SERVICE-NAME>> with your tracing service name.

6. Viewing Traces in Logz.io

Give your traces time to process, after which they'll be available in your Tracing dashboard.

Customizing Helm chart parameters

You can update Helm chart parameters in three ways:

  • Specify parameters using the --set key=value[,key=value] argument to helm install.

  • Edit the values.yaml.

  • Override default values with your own my_values.yaml and apply it in the helm install command.

Optional parameters can be added as environment variables:

| Parameter | Description | Default |
|---|---|---|
| secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. | 500 |
| secrets.SamplingProbability | Sampling percentage for the probabilistic policy. | 10 |
Example

You can run the logzio-k8s-telemetry chart with a custom configuration file that takes precedence over the values.yaml of the chart by running the following:

helm install -f <PATH-TO>/my_values.yaml \
--set logzio-k8s-telemetry.secrets.TracesToken=<<TRACING-SHIPPING-TOKEN>> \
--set logzio-k8s-telemetry.secrets.LogzioRegion=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set metricsOrTraces=true \
logzio-monitoring logzio-helm/logzio-monitoring

Replace <PATH-TO> with your custom values.yaml file path.

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

note

With this example configuration, the collector samples ALL traces that contain any span with an error.

baseCollectorConfig:
  processors:
    tail_sampling:
      policies:
        [
          {
            name: error-in-policy,
            type: status_code,
            status_code: {status_codes: [ERROR]}
          },
          {
            name: slow-traces-policy,
            type: latency,
            latency: {threshold_ms: 400}
          },
          {
            name: health-traces,
            type: and,
            and: {
              and_sub_policy:
                [
                  {
                    name: ping-operation,
                    type: string_attribute,
                    string_attribute: { key: http.url, values: [ /health ] }
                  },
                  {
                    name: main-service,
                    type: string_attribute,
                    string_attribute: { key: service.name, values: [ main-service ] }
                  },
                  {
                    name: probability-policy-1,
                    type: probabilistic,
                    probabilistic: {sampling_percentage: 1}
                  }
                ]
            }
          },
          {
            name: probability-policy,
            type: probabilistic,
            probabilistic: {sampling_percentage: 20}
          }
        ]

Uninstalling the Chart

To uninstall the logzio-monitoring deployment, run:

helm uninstall logzio-monitoring

Troubleshooting