
.NET

Logs​

Before you begin, you'll need:

  • log4net 2.0.8 or higher
  • .NET Core SDK version 2.0 or higher
  • .NET Framework version 4.6.1 or higher

Add the dependency to your project​

If you're on Windows, open your project in Visual Studio and run this command in the Package Manager Console to install the dependency.

Install-Package Logzio.DotNet.Log4net

If you're on a Mac or Linux machine, you can install the package using Visual Studio. Select Project > Add NuGet Packages..., and then search for Logzio.DotNet.Log4net.
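On any platform, you can also install the package with the .NET CLI from your project's directory:

```shell
dotnet add package Logzio.DotNet.Log4net
```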

Configure the appender​

You can configure the appender in a configuration file or directly in the code. Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See log4net documentation πŸ”— to learn more about configuration options.

For a complete list of options, see the configuration parameters below the code blocks.πŸ‘‡

Option 1: In a configuration file​
<log4net>
  <appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
    <!-- Required fields -->
    <!-- Your Logz.io log shipping token -->
    <token><<LOG-SHIPPING-TOKEN>></token>

    <!-- Optional fields (with their default values) -->
    <!-- The type field is added to each log message, making it easier
         to distinguish between different types of logs. -->
    <type>log4net</type>
    <!-- The URL of the Logz.io listener -->
    <listenerUrl>https://<<LISTENER-HOST>>:8071</listenerUrl>
    <!-- Optional proxy server address:
         proxyAddress = "http://your.proxy.com:port" -->
    <!-- The maximum number of log lines to send in each bulk -->
    <bufferSize>100</bufferSize>
    <!-- The maximum time to wait for more log lines, in hh:mm:ss.fff format -->
    <bufferTimeout>00:00:05</bufferTimeout>
    <!-- If the connection to the Logz.io API fails, how many times to retry -->
    <retriesMaxAttempts>3</retriesMaxAttempts>
    <!-- Time to wait between retries, in hh:mm:ss.fff format -->
    <retriesInterval>00:00:02</retriesInterval>
    <!-- Set the appender to compress the message before sending it -->
    <gzip>true</gzip>
    <!-- Uncomment this to send logs in JSON format -->
    <!--<parseJsonMessage>true</parseJsonMessage>-->
    <!-- Enable the appender's internal debug logger (sent to the console output and trace log) -->
    <debug>false</debug>
    <!-- If you have custom field keys that start with a capital letter and want to keep
         the capital letter in Logz.io, set this field to true. The default is false
         (the first letter is lowercased). -->
    <jsonKeysCamelCase>false</jsonKeysCamelCase>
    <!-- Add trace context (traceId and spanId) to each log. The default is false -->
    <addTraceContext>false</addTraceContext>
    <!-- Use the same static HTTP/S client for sending logs. The default is false -->
    <useStaticHttpClient>false</useStaticHttpClient>
  </appender>

  <root>
    <level value="INFO" />
    <appender-ref ref="LogzioAppender" />
  </root>
</log4net>

Add a reference to the configuration file in your code, as shown in the code sample below.

Code sample​
using System.IO;
using System.Reflection;
using log4net;
using log4net.Config;

namespace dotnet_log4net
{
    class Program
    {
        static void Main(string[] args)
        {
            var logger = LogManager.GetLogger(typeof(Program));
            var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());

            // Replace "App.config" with the config file that holds your log4net configuration
            XmlConfigurator.Configure(logRepository, new FileInfo("App.config"));

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");

            LogManager.Shutdown();
        }
    }
}
Option 2: In the code​
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
// Uncomment and edit this line to route logs through a proxy:
// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
// Uncomment this line to send logs in JSON format:
// logzioAppender.ParseJsonMessage(true);
// Uncomment these lines to enable gzip compression:
// logzioAppender.AddGzip(true);
// logzioAppender.ActivateOptions();
// Other optional settings (shown with their default values):
// logzioAppender.JsonKeysCamelCase(false);
// logzioAppender.AddTraceContext(false);
// logzioAppender.UseStaticHttpClient(false);
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;
Code sample​
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;

namespace dotnet_log4net
{
    class Program
    {
        static void Main(string[] args)
        {
            var hierarchy = (Hierarchy)LogManager.GetRepository();
            var logger = LogManager.GetLogger(typeof(Program));
            var logzioAppender = new LogzioAppender();

            logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
            logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
            // Uncomment and edit this line to route logs through a proxy:
            // logzioAppender.AddProxyAddress("http://your.proxy.com:port");
            // Uncomment this line to send logs in JSON format:
            // logzioAppender.ParseJsonMessage(true);
            // Uncomment these lines to enable gzip compression:
            // logzioAppender.AddGzip(true);
            // logzioAppender.ActivateOptions();
            // Other optional settings (shown with their default values):
            // logzioAppender.JsonKeysCamelCase(false);
            // logzioAppender.AddTraceContext(false);
            // logzioAppender.UseStaticHttpClient(false);
            logzioAppender.ActivateOptions();

            hierarchy.Root.AddAppender(logzioAppender);
            hierarchy.Configured = true;
            hierarchy.Root.Level = Level.All;

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");

            LogManager.Shutdown();
        }
    }
}
Parameters​
| Parameter | Description | Default/Required |
|---|---|---|
| token | Your Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
| listenerUrl | Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | https://listener.logz.io:8071 |
| type | The log type, shipped as the type field. Used by Logz.io for consistent parsing. Can't contain spaces. | log4net |
| bufferSize | Maximum number of messages the logger accumulates before sending them all as a bulk. | 100 |
| bufferTimeout | Maximum time to wait for more log lines, as hh:mm:ss.fff. | 00:00:05 |
| retriesMaxAttempts | Maximum number of attempts to connect to Logz.io. | 3 |
| retriesInterval | Time to wait between retries, as hh:mm:ss.fff. | 00:00:02 |
| gzip | To compress the data before shipping, set to true. | false |
| debug | To print debug messages to the console and trace log, set to true. | false |
| parseJsonMessage | To parse your message as JSON, add this field and set it to true. | false |
| proxyAddress | Proxy address to route your logs through. | None |
| jsonKeysCamelCase | If you have custom field keys that start with a capital letter and want to see them with a capital letter in Logz.io, set to true. | false |
| addTraceContext | To add trace context to each log, set to true. | false |
| useStaticHttpClient | To use the same static HTTP/S client for sending logs, set to true. | false |
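The buffering parameters above interact as follows: log lines accumulate until bufferSize is reached (or bufferTimeout elapses), are sent as one bulk, and a failed bulk is retried up to retriesMaxAttempts times with retriesInterval between attempts. A rough sketch of that behavior (an illustrative model, not the Logzio.DotNet.Log4net implementation):

```python
class BulkBuffer:
    """Illustrative model of bufferSize + retriesMaxAttempts behavior."""

    def __init__(self, buffer_size=100, retries_max_attempts=3):
        self.buffer_size = buffer_size
        self.retries_max_attempts = retries_max_attempts
        self.pending = []      # lines waiting to be sent
        self.sent_bulks = []   # bulks that were successfully "shipped"

    def append(self, line):
        self.pending.append(line)
        # The real appender also flushes when bufferTimeout elapses.
        if len(self.pending) >= self.buffer_size:
            self.flush()

    def flush(self, send=None):
        """Send all pending lines as one bulk, retrying on failure."""
        if not self.pending:
            return True
        send = send or (lambda bulk: True)  # stand-in for the HTTP POST
        for _attempt in range(self.retries_max_attempts):
            if send(self.pending):
                self.sent_bulks.append(list(self.pending))
                self.pending.clear()
                return True
            # The real appender waits retriesInterval between attempts.
        return False

buf = BulkBuffer(buffer_size=3)
for i in range(7):
    buf.append(f"log {i}")
# Seven lines with bufferSize=3: two full bulks shipped, one line still pending
```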
Custom fields​

You can add static keys and values to be added to all log messages. These custom fields must be children of <appender>, as shown here.

<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
  <customField>
    <key>Environment</key>
    <value>Production</value>
  </customField>
  <customField>
    <key>Location</key>
    <value>New Jersey B1</value>
  </customField>
</appender>
Extending the appender​

To change or add fields to your logs, inherit the appender and override the ExtendValues method.

public class MyAppLogzioAppender : LogzioAppender
{
    protected override void ExtendValues(LoggingEvent loggingEvent, Dictionary<string, object> values)
    {
        values["logger"] = "MyPrefix." + values["logger"];
        values["myAppClientId"] = new ClientIdProvider().Get();
    }
}

Change your configuration to use your new appender name. For the example above, you'd use MyAppLogzioAppender.
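For example, if MyAppLogzioAppender is defined in an assembly named MyApp (the assembly name here is an assumption for illustration), the type attribute in the configuration would change to:

```xml
<appender name="LogzioAppender" type="MyApp.MyAppLogzioAppender, MyApp">
  <!-- same token, listenerUrl, and other settings as before -->
</appender>
```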

Add trace context​
note

The Trace Context feature does not support .NET Standard 1.3.

If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. Your logs will then include the trace data: the span ID and trace ID. To enable this feature, set <addTraceContext>true</addTraceContext> in your configuration file or call logzioAppender.AddTraceContext(true); in your code. For example:

using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;

namespace dotnet_log4net
{
    class Program
    {
        static void Main(string[] args)
        {
            var hierarchy = (Hierarchy)LogManager.GetRepository();
            var logger = LogManager.GetLogger(typeof(Program));
            var logzioAppender = new LogzioAppender();

            logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
            logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
            // Uncomment and edit this line to route logs through a proxy:
            // logzioAppender.AddProxyAddress("http://your.proxy.com:port");
            // Uncomment this line to send logs in JSON format:
            // logzioAppender.ParseJsonMessage(true);
            // Uncomment these lines to enable gzip compression:
            // logzioAppender.AddGzip(true);
            // logzioAppender.ActivateOptions();
            // logzioAppender.JsonKeysCamelCase(false);
            logzioAppender.AddTraceContext(true);
            logzioAppender.ActivateOptions();

            hierarchy.Root.AddAppender(logzioAppender);
            hierarchy.Configured = true;
            hierarchy.Root.Level = Level.All;

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");

            LogManager.Shutdown();
        }
    }
}
Serverless platforms​

If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the UseStaticHttpClient flag set to true.

Azure serverless function code sample​

Startup.cs

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using log4net;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;

[assembly: FunctionsStartup(typeof(LogzioLog4NetSampleApplication.Startup))]

namespace LogzioLog4NetSampleApplication
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var hierarchy = (Hierarchy)LogManager.GetRepository();
            var logzioAppender = new LogzioAppender();
            logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
            logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
            logzioAppender.UseStaticHttpClient(true);
            logzioAppender.ActivateOptions();
            hierarchy.Root.AddAppender(logzioAppender);
            hierarchy.Configured = true;
        }
    }
}

FunctionApp.cs

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using log4net;
using MicrosoftLogger = Microsoft.Extensions.Logging.ILogger;

namespace LogzioLog4NetSampleApplication
{
    public class TimerTriggerCSharpLog4Net
    {
        private static readonly ILog logger = LogManager.GetLogger(typeof(TimerTriggerCSharpLog4Net));

        [FunctionName("TimerTriggerCSharpLog4Net")]
        public void Run([TimerTrigger("*/30 * * * * *")] TimerInfo myTimer, MicrosoftLogger msLog)
        {
            msLog.LogInformation($"Log4Net C# Timer trigger function executed at: {DateTime.Now}");

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");
            // Flush the appender so logs are sent before the invocation ends
            LogManager.Flush(5000);

            msLog.LogInformation($"Log4Net C# Timer trigger function finished at: {DateTime.Now}");
        }
    }
}

Metrics​

Helm is a tool for managing packages of pre-configured Kubernetes resources using Charts. This integration allows you to collect and ship diagnostic metrics of your .NET application in Kubernetes to Logz.io, using dotnet-monitor and OpenTelemetry. logzio-dotnet-monitor runs as a sidecar in the same pod as the .NET application.

Sending metrics from nodes with taints​

If you want to ship metrics from any of the nodes that have a taint, make sure that the taint key values are listed in your daemonset/deployment configuration as follows:

tolerations:
  - key:
    operator:
    value:
    effect:

To determine if a node uses taints as well as to display the taint keys, run:

kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"

note

You need to use a Helm client version v3.9.0 or above.

Standard configuration​

Select the namespace​

This integration will be deployed in the namespace you set in values.yaml. The default namespace for this integration is logzio-dotnet-monitor.

To select a different namespace, run:

kubectl create namespace <<NAMESPACE>>
  • Replace <<NAMESPACE>> with the name of your namespace.
Add logzio-helm repo​
helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Run the Helm deployment code​
helm install -n <<NAMESPACE>> \
  --set secrets.logzioURL='<<LISTENER-HOST>>:8053' \
  --set secrets.logzioToken='<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>' \
  --set-file dotnetAppContainers='<<DOTNET_APP_CONTAINERS_FILE>>' \
  logzio-dotnet-monitor logzio-helm/logzio-dotnet-monitor
  • Replace <<NAMESPACE>> with the namespace you selected for this integration. The default namespace is logzio-dotnet-monitor.
  • Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe.
  • Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with the token of the metrics account you want to ship to.
  • Replace <<DOTNET_APP_CONTAINERS_FILE>> with your .NET application containers file. Make sure your main .NET application container has the following volumeMount:
volumeMounts:
  - mountPath: /tmp
    name: diagnostics
Check Logz.io for your metrics​

Give your metrics some time to get from your system to ours, then open Logz.io. You can find your metrics in Logz.io by searching for {job="dotnet-monitor-collector"}.

Customizing Helm chart parameters​

Configure customization options​

You can use the following options to update the Helm chart parameters:

  • Specify parameters using the --set key=value[,key=value] or --set-file key=value[,key=value] arguments to helm install

  • Edit the values.yaml

  • Override default values with your own my_values.yaml and apply it in the helm install command.

Customization parameters​
| Parameter | Description | Default |
|---|---|---|
| nameOverride | Overrides the chart name for resources. | "" |
| fullnameOverride | Overrides the full name of the resources. | "" |
| apiVersions.deployment | Deployment API version. | apps/v1 |
| apiVersions.configmap | ConfigMap API version. | v1 |
| apiVersions.secret | Secret API version. | v1 |
| namespace | Chart's namespace. | logzio-dotnet-monitor |
| replicaCount | The number of replicated pods the deployment creates. | 1 |
| labels | Pod's labels. | {} |
| annotations | Pod's annotations. | {} |
| customSpecs | Custom spec fields to add to the deployment. | {} |
| dotnetAppContainers | List of your .NET application containers to add to the pod. | [] |
| logzioDotnetMonitor.name | The name of the sidecar container that collects and ships diagnostic metrics of your .NET application to Logz.io. | logzio-dotnet-monitor |
| logzioDotnetMonitor.image.name | Name of the image that runs in the logzioDotnetMonitor.name container. | logzio/logzio-dotnet-monitor |
| logzioDotnetMonitor.image.tag | The tag of the image that runs in the logzioDotnetMonitor.name container. | latest |
| logzioDotnetMonitor.ports | List of ports the logzioDotnetMonitor.name container exposes. | 52325 |
| tolerations | List of tolerations to apply to the pod. | [] |
| customVolumes | List of custom volumes to add to the deployment. | [] |
| customResources | Custom resources to add to the Helm chart deployment (separate each resource with ---). | {} |
| secrets.logzioURL | Secret with your Logz.io listener URL. | https://listener.logz.io:8053 |
| secrets.logzioToken | Secret with your Logz.io metrics shipping token. | "" |
| configMap.dotnetMonitor | The dotnet-monitor configuration. | See values.yaml |
| configMap.opentelemetry | The opentelemetry configuration. | See values.yaml |
  • To get additional information about dotnet-monitor configuration, click here.
  • To see well-known providers and their counters, click here.

Uninstalling the Chart​

The uninstall command removes all the Kubernetes components associated with the chart and deletes the release.

To uninstall the dotnet-monitor-collector deployment, use the following command:

helm uninstall dotnet-monitor-collector

For troubleshooting this solution, see our .NET with helm troubleshooting guide.

Traces​

Deploy this integration to enable automatic instrumentation of your ASP.NET Core application using OpenTelemetry.

Architecture overview​

This integration includes:

  • Installing the OpenTelemetry ASP.NET Core instrumentation packages on your application host
  • Installing the OpenTelemetry collector with Logz.io exporter
  • Running your ASP.NET Core application in conjunction with the OpenTelemetry instrumentation

On deployment, the ASP.NET Core instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Setup auto-instrumentation for your locally hosted ASP.NET Core application and send traces to Logz.io​

Before you begin, you'll need:

  • An ASP.NET Core application without instrumentation
  • An active account with Logz.io
  • Port 4317 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.
note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Download instrumentation packages​

Run the following command from the application directory:

dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting
Enable instrumentation in the code​

Add the following configuration to the beginning of the Startup.cs file:

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

Add the following configuration to the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

    services.AddOpenTelemetryTracing((builder) => builder
        .AddAspNetCoreInstrumentation()
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://localhost:4317");
        })
    );
}

Download and configure OpenTelemetry collector​

Create a dedicated directory on the host of your ASP.NET Core application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Start the collector​

Run the following command:

<path/to>/otelcol-contrib --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.

Run the application​

Run the application to generate traces.

Check Logz.io for your traces​

Give your traces some time to get from your system to ours, and then open Tracing.

Setup auto-instrumentation for your ASP.NET Core application using Docker and send traces to Logz.io​

This integration enables you to auto-instrument your ASP.NET Core application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.

Before you begin, you'll need:

  • An ASP.NET Core application without instrumentation
  • An active account with Logz.io
  • Port 4317 available on your host system
  • A name defined for your tracing service
Download instrumentation packages​

Run the following command from the application directory:

dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting
Enable instrumentation in the code​

Add the following configuration to the beginning of the Startup.cs file:

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

Add the following configuration to the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

    services.AddOpenTelemetryTracing((builder) => builder
        .AddAspNetCoreInstrumentation()
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://localhost:4317");
        })
    );
}

Pull the Docker image for the OpenTelemetry collector​

docker pull otel/opentelemetry-collector-contrib:0.78.0

Create a configuration file​

Create a file config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Tail Sampling​

The tail_sampling processor decides whether to sample a trace after all of the spans in a request have completed. By default, this configuration collects all traces that have a span that completed with an error, all traces that are slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.

The configurable parameters in the Logz.io default configuration are:

| Parameter | Description | Default |
|---|---|---|
| threshold_ms | Threshold for span latency - all traces slower than this value are sampled. | 1000 |
| sampling_percentage | Sampling percentage for the probabilistic policy. | 10 |
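The combined decision can be sketched as follows (an illustrative model of the three default policies, not the collector's actual code): a trace is kept if any one policy matches.

```python
import random

def keep_trace(has_error, latency_ms, threshold_ms=1000,
               sampling_percentage=10, rng=random.random):
    """Illustrative model of the default tail-sampling decision."""
    if has_error:                              # policy-errors (status_code)
        return True
    if latency_ms > threshold_ms:              # policy-slow (latency)
        return True
    return rng() * 100 < sampling_percentage   # policy-random-ok (probabilistic)
```

So an errored trace is always kept, a 2-second trace is kept by the latency policy, and a fast, error-free trace is kept roughly 10% of the time.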

If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

  • Under the exporters list:

  logzio/traces:
    account_token: <<TRACING-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>>

  • Under the service list:

  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

An example configuration file looks as follows:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Run the container​

Mount config.yaml as a volume in the docker run command, as follows.

Linux​
docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Windows​
docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the config.yaml file on your system.

Run the application​

note

Normally, when you run the OTEL collector in a Docker container, your application runs in separate containers on the same host. In this case, make sure that all your application containers share the same network as the OTEL collector container. One way to achieve this is to run all containers, including the OTEL collector, with a Docker Compose configuration; Docker Compose automatically places all containers defined in the same configuration on a shared network.
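A minimal Docker Compose sketch of that setup (service names here are illustrative, not prescribed by this integration):

```yaml
version: "3.8"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.78.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
  my-app:
    build: .
    depends_on:
      - otel-collector
    # The code sample above hardcodes the exporter endpoint as
    # http://localhost:4317; in this setup it would need to point at
    # the collector's service name instead: http://otel-collector:4317
```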

Run the application to generate traces.

Check Logz.io for your traces​

Give your traces some time to get from your system to ours, and then open Tracing.