
.NET

Logs

Before you begin, you'll need:

  • log4net 2.0.8+.
  • .NET Core SDK version 2.0+.
  • .NET Framework version 4.6.1+.

Add the dependency

On Windows, navigate to your project folder, and run the following command:

Install-Package Logzio.DotNet.Log4net

On Mac or Linux, open Visual Studio, navigate to Project > Add NuGet Packages..., then search for and install Logzio.DotNet.Log4net.

Configure the appender in a configuration file

Use the sample configuration and edit it according to your needs. View log4net documentation for additional options.

<log4net>
  <appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
    <token><<LOG-SHIPPING-TOKEN>></token>
    <type>log4net</type>
    <listenerUrl>https://<<LISTENER-HOST>>:8071</listenerUrl>
    <bufferSize>100</bufferSize>
    <bufferTimeout>00:00:05</bufferTimeout>
    <retriesMaxAttempts>3</retriesMaxAttempts>
    <retriesInterval>00:00:02</retriesInterval>
    <gzip>true</gzip>
    <debug>false</debug>
    <jsonKeysCamelCase>false</jsonKeysCamelCase>
    <addTraceContext>false</addTraceContext>
    <useStaticHttpClient>false</useStaticHttpClient>
  </appender>

  <root>
    <level value="INFO" />
    <appender-ref ref="LogzioAppender" />
  </root>
</log4net>

To enable JSON format logging, add the following to the appender configuration:

<parseJsonMessage>true</parseJsonMessage>
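
This setting goes inside the <appender> element, alongside the other appender options shown above. For example:

<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
  <!-- ...token, listenerUrl, and other settings... -->
  <parseJsonMessage>true</parseJsonMessage>
</appender>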

Next, reference the configuration file in your code, as shown in the example below.

Run the code:

using System.IO;
using log4net;
using log4net.Config;
using System.Reflection;

namespace dotnet_log4net
{
    class Program
    {
        static void Main(string[] args)
        {
            var logger = LogManager.GetLogger(typeof(Program));
            var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());

            // Replace "App.config" with the config file that holds your log4net configuration
            XmlConfigurator.Configure(logRepository, new FileInfo("App.config"));

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");

            LogManager.Shutdown();
        }
    }
}

Configure the appender in the code

Use the sample configuration and edit it according to your needs. View log4net documentation for additional options.

using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;

// ...

var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;

Customize your code by adding the following:

Why? | What?
Enable proxy routing | logzioAppender.AddProxyAddress("http://your.proxy.com:port");
Enable sending logs in JSON format | logzioAppender.ParseJsonMessage(true);
Enable gzip compression | logzioAppender.AddGzip(true);
Apply the appender options | logzioAppender.ActivateOptions();
Control whether JSON keys are camelCase | logzioAppender.JsonKeysCamelCase(false);
Add trace context to each log | logzioAppender.AddTraceContext(false);
Use the same static HTTP/s client for sending logs | logzioAppender.UseStaticHttpClient(false);
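
For reference, a code-based setup that applies several of these options together might look like the following sketch. It only combines the calls from the table above; the proxy address is a placeholder you would replace with your own.

var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
logzioAppender.AddProxyAddress("http://your.proxy.com:port"); // placeholder proxy address
logzioAppender.ParseJsonMessage(true);                        // ship logs as JSON
logzioAppender.AddGzip(true);                                 // compress before shipping
logzioAppender.JsonKeysCamelCase(false);
logzioAppender.AddTraceContext(false);
logzioAppender.UseStaticHttpClient(false);
logzioAppender.ActivateOptions();                             // apply the settings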

Parameters

Parameter | Description | Default/Required
token | Your Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required
listenerUrl | Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | https://listener.logz.io:8071
type | The log type, shipped as the type field. Used by Logz.io for consistent parsing. Can't contain spaces. | log4net
bufferSize | Maximum number of messages the logger will accumulate before sending them all as a bulk. | 100
bufferTimeout | Maximum time to wait for more log lines, as hh:mm:ss.fff. | 00:00:05
retriesMaxAttempts | Maximum number of attempts to connect to Logz.io. | 3
retriesInterval | Time to wait between retries, as hh:mm:ss.fff. | 00:00:02
gzip | To compress the data before shipping, set to true. Otherwise, false. | false
debug | To print debug messages to the console and trace log, set to true. Otherwise, false. | false
parseJsonMessage | To parse your message as JSON, add this field and set it to true. | false
proxyAddress | Proxy address to route your logs through. | None
jsonKeysCamelCase | If you have custom field keys that start with a capital letter and want to see the fields with a capital letter in Logz.io, set this field to true. | false
addTraceContext | To add trace context to each log, set this field to true. | false
useStaticHttpClient | To use the same static HTTP/s client for sending logs, set this field to true. | false

Custom fields

Add static keys and values to all log messages by including these custom fields under <appender>, as shown:

<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
  <customField>
    <key>Environment</key>
    <value>Production</value>
  </customField>
  <customField>
    <key>Location</key>
    <value>New Jersey B1</value>
  </customField>
</appender>

Extending the appender

To change or add fields to your logs, inherit the appender and override the ExtendValues method.

public class MyAppLogzioAppender : LogzioAppender
{
    protected override void ExtendValues(LoggingEvent loggingEvent, Dictionary<string, object> values)
    {
        values["logger"] = "MyPrefix." + values["logger"];
        values["myAppClientId"] = new ClientIdProvider().Get();
    }
}

Update your configuration to use the new appender name, such as MyAppLogzioAppender.
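
For example, assuming the custom appender class above is compiled into an assembly named MyApp (a hypothetical name), the appender declaration in the configuration file would change to:

<appender name="LogzioAppender" type="MyApp.MyAppLogzioAppender, MyApp">
  <token><<LOG-SHIPPING-TOKEN>></token>
  <listenerUrl>https://<<LISTENER-HOST>>:8071</listenerUrl>
</appender>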

Add trace context

note

The Trace Context feature does not support .NET Standard 1.3.

To correlate logs with trace context in OpenTelemetry, set <addTraceContext>true</addTraceContext> in your configuration file or use logzioAppender.AddTraceContext(true); in your code. This adds span id and trace id to your logs. For example:

using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;

namespace dotnet_log4net
{
    class Program
    {
        static void Main(string[] args)
        {
            var hierarchy = (Hierarchy)LogManager.GetRepository();
            var logger = LogManager.GetLogger(typeof(Program));
            var logzioAppender = new LogzioAppender();

            logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
            logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
            logzioAppender.AddTraceContext(true);
            logzioAppender.ActivateOptions();

            hierarchy.Root.AddAppender(logzioAppender);
            hierarchy.Configured = true;
            hierarchy.Root.Level = Level.All;

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");

            LogManager.Shutdown();
        }
    }
}

Serverless platforms

For serverless functions, call the appender's flush method at the end of the run to ensure logs are sent before execution finishes. Create a static appender in Startup.cs and set UseStaticHttpClient to true so the same appender and HTTP client are reused across invocations.

For example:

Startup.cs

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using log4net;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;

[assembly: FunctionsStartup(typeof(LogzioLog4NetSampleApplication.Startup))]

namespace LogzioLog4NetSampleApplication
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var hierarchy = (Hierarchy)LogManager.GetRepository();
            var logzioAppender = new LogzioAppender();
            logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
            logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
            logzioAppender.ActivateOptions();
            logzioAppender.UseStaticHttpClient(true);
            hierarchy.Root.AddAppender(logzioAppender);
            hierarchy.Configured = true;
        }
    }
}

FunctionApp.cs

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using log4net;
using MicrosoftLogger = Microsoft.Extensions.Logging.ILogger;

namespace LogzioLog4NetSampleApplication
{
    public class TimerTriggerCSharpLog4Net
    {
        private static readonly ILog logger = LogManager.GetLogger(typeof(TimerTriggerCSharpLog4Net));

        [FunctionName("TimerTriggerCSharpLog4Net")]
        public void Run([TimerTrigger("*/30 * * * * *")]TimerInfo myTimer, MicrosoftLogger msLog)
        {
            msLog.LogInformation($"Log4Net C# Timer trigger function executed at: {DateTime.Now}");

            logger.Info("Now I don't blame him 'cause he run and hid");
            logger.Info("But the meanest thing he ever did");
            logger.Info("Before he left was he went and named me Sue");
            LogManager.Flush(5000);

            msLog.LogInformation($"Log4Net C# Timer trigger function finished at: {DateTime.Now}");
        }
    }
}

Metrics

Helm manages packages of preconfigured Kubernetes resources using Charts. This integration allows you to collect and ship diagnostic metrics of your .NET application in Kubernetes to Logz.io, using dotnet-monitor and OpenTelemetry. logzio-dotnet-monitor runs as a sidecar in the same pod as the .NET application.

Sending metrics from nodes with taints

To ship metrics from nodes with taints, ensure the taint key values are included in your DaemonSet/Deployment configuration as follows:

tolerations:
  - key:
    operator:
    value:
    effect:
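
For example, for a node tainted with the (hypothetical) key dedicated, value dotnet, and effect NoSchedule, the toleration would be:

tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "dotnet"
    effect: "NoSchedule"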

To determine if a node uses taints as well as to display the taint keys, run:

kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
note

You need Helm client version v3.9.0 or above.

Standard configuration

1. Select the namespace

This integration deploys to the namespace specified in values.yaml. The default is logzio-dotnet-monitor.

To use a different namespace, run:

kubectl create namespace <<NAMESPACE>>
  • Replace <<NAMESPACE>> with the name of your namespace.

2. Add logzio-helm repo

helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update

3. Run the Helm deployment code

helm install -n <<NAMESPACE>> \
--set secrets.logzioURL='<<LISTENER-HOST>>:8053' \
--set secrets.logzioToken='<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>' \
--set-file dotnetAppContainers='<<DOTNET_APP_CONTAINERS_FILE>>' \
logzio-dotnet-monitor logzio-helm/logzio-dotnet-monitor
  • Replace <<NAMESPACE>> with the namespace you selected for this integration. The default is logzio-dotnet-monitor.
  • Replace <<LISTENER-HOST>> with the host for your region.
  • Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with the token of the Metrics account you want to ship to.
  • Replace <<DOTNET_APP_CONTAINERS_FILE>> with your .NET application containers file. Make sure your main .NET application container has the following volumeMount:
volumeMounts:
  - mountPath: /tmp
    name: diagnostics
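
A minimal containers file might look like the sketch below; the container name, image, and port are hypothetical placeholders for your own application:

- name: my-dotnet-app                  # hypothetical container name
  image: myregistry/my-dotnet-app:1.0  # hypothetical image
  ports:
    - containerPort: 80
  volumeMounts:
    - mountPath: /tmp
      name: diagnostics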

4. Check Logz.io for your metrics

Allow some time for data ingestion, then open Logz.io and search for your metrics using {job="dotnet-monitor-collector"}.

Install the pre-built dashboard to enhance the observability of your metrics.

To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.

Customizing Helm chart parameters

  • Configure customization options

    Update the Helm chart parameters using the following options:

    • Specify parameters using the --set key=value[,key=value] argument to helm install or --set-file key=value[,key=value]

    • Edit the values.yaml

    • Override default values with your own my_values.yaml and apply it in the helm install command (see the example after the parameters table below).

  • Customization parameters

Parameter | Description | Default
nameOverride | Overrides the Chart name for resources. | ""
fullnameOverride | Overrides the full name of the resources. | ""
apiVersions.deployment | Deployment API version. | apps/v1
apiVersions.configmap | Configmap API version. | v1
apiVersions.secret | Secret API version. | v1
namespace | Chart's namespace. | logzio-dotnet-monitor
replicaCount | The number of replicated pods the deployment creates. | 1
labels | Pod's labels. | {}
annotations | Pod's annotations. | {}
customSpecs | Custom spec fields to add to the deployment. | {}
dotnetAppContainers | List of your .NET application containers to add to the pod. | []
logzioDotnetMonitor.name | The name of the container that collects and ships diagnostic metrics of your .NET application to Logz.io (sidecar). | logzio-dotnet-monitor
logzioDotnetMonitor.image.name | Name of the image that runs in the logzioDotnetMonitor.name container. | logzio/logzio-dotnet-monitor
logzioDotnetMonitor.image.tag | Tag of the image that runs in the logzioDotnetMonitor.name container. | latest
logzioDotnetMonitor.ports | List of ports the logzioDotnetMonitor.name container exposes. | 52325
tolerations | List of tolerations to apply to the pod. | []
customVolumes | List of custom volumes to add to the deployment. | []
customResources | Custom resources to add to the Helm chart deployment (separate each resource with ---). | {}
secrets.logzioURL | Secret with your Logz.io listener URL. | https://listener.logz.io:8053
secrets.logzioToken | Secret with your Logz.io metrics shipping token. | ""
configMap.dotnetMonitor | The dotnet-monitor configuration. | See values.yaml
configMap.opentelemetry | The opentelemetry configuration. | See values.yaml
  • For additional information about the dotnet-monitor configuration, see the dotnet-monitor documentation.
  • For the list of well-known providers and their counters, see the dotnet-monitor documentation.
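
To override default values with your own my_values.yaml, as mentioned above, pass the file with the -f flag. A minimal sketch, reusing the install command from step 3:

helm install -n <<NAMESPACE>> \
  -f my_values.yaml \
  --set secrets.logzioToken='<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>' \
  --set-file dotnetAppContainers='<<DOTNET_APP_CONTAINERS_FILE>>' \
  logzio-dotnet-monitor logzio-helm/logzio-dotnet-monitor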

Uninstalling the Chart

To remove all Kubernetes components associated with the chart and delete the release, use the uninstall command.

To uninstall the logzio-dotnet-monitor deployment, run:

helm uninstall -n <<NAMESPACE>> logzio-dotnet-monitor

Encounter an issue? See our .NET with Helm troubleshooting guide.

Traces

Deploy this integration to enable automatic instrumentation of your ASP.NET Core application using OpenTelemetry.

Architecture overview

This integration includes:

  • Installing the OpenTelemetry ASP.NET Core instrumentation packages on your application host
  • Installing the OpenTelemetry collector with Logz.io exporter
  • Running your ASP.NET Core application in conjunction with the OpenTelemetry instrumentation

On deployment, the ASP.NET Core instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Set up auto-instrumentation for your locally hosted ASP.NET Core application and send traces to Logz.io

Before you begin, you'll need:

  • An ASP.NET Core application without instrumentation
  • An active Logz.io account
  • Port 4317 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.
note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Download instrumentation packages

Run the following command from the application directory:

dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting

Enable instrumentation in the code

Add the following configuration to the beginning of the Startup.cs file:

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

Add the following configuration to the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

    services.AddOpenTelemetryTracing((builder) => builder
        .AddAspNetCoreInstrumentation()
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://localhost:4317");
        })
    );
}
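
If your application uses the .NET 6+ minimal hosting model and has no Startup class, the same registration can go in Program.cs instead. A minimal sketch, assuming an OpenTelemetry.Extensions.Hosting version that still provides AddOpenTelemetryTracing (newer releases replace it with AddOpenTelemetry().WithTracing(...)):

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// Same tracing setup as in ConfigureServices above
builder.Services.AddOpenTelemetryTracing(tracing => tracing
    .AddAspNetCoreInstrumentation()
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
    .AddOtlpExporter(options =>
    {
        options.Endpoint = new Uri("http://localhost:4317");
    }));

var app = builder.Build();
app.MapGet("/", () => "Hello from my-app"); // hypothetical endpoint that produces spans
app.Run();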

Download and configure OpenTelemetry collector

Create a dedicated directory on the host of your ASP.NET Core application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Start the collector

Run the following command:

<path/to>/otelcol-contrib --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.

Run the application

Run the application to generate traces.
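
For example, assuming the application is an ASP.NET Core web app listening on a local port (5000 here is only a placeholder), running it and issuing a request will produce spans that the collector forwards to Logz.io:

dotnet run
curl http://localhost:5000/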

Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Set up auto-instrumentation for your ASP.NET Core application using Docker and send traces to Logz.io

This integration enables you to auto-instrument your ASP.NET Core application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.

Before you begin, you'll need:

  • An ASP.NET Core application without instrumentation
  • An active Logz.io account
  • Port 4317 available on your host system
  • A name defined for your tracing service

Download instrumentation packages

Run the following command from the application directory:

dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting

Enable instrumentation in the code

Add the following configuration to the beginning of the Startup.cs file:

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

Add the following configuration to the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

    services.AddOpenTelemetryTracing((builder) => builder
        .AddAspNetCoreInstrumentation()
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://localhost:4317");
        })
    );
}

Pull the Docker image for the OpenTelemetry collector

docker pull otel/opentelemetry-collector-contrib:0.78.0

Create a configuration file

Create a file config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Tail Sampling

tail_sampling defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.

Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.
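
For example, a string_attribute policy (one of the policy types the tail_sampling processor supports) could be appended to the policies list to always keep traces that carry a given attribute; the attribute key and value below are hypothetical:

{
  name: policy-env,
  type: string_attribute,
  string_attribute: {key: deployment.environment, values: [staging]}
}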

The configurable parameters in the Logz.io default configuration are:

Parameter | Description | Default
threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000
sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10

If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:

  • Under the exporters list:

    logzio/traces:
      account_token: <<TRACING-SHIPPING-TOKEN>>
      region: <<LOGZIO_ACCOUNT_REGION_CODE>>
      headers:
        user-agent: logzio-opentelemetry-traces

  • Under the service list:

    extensions: [health_check, pprof, zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [tail_sampling, batch]
        exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Here is an example configuration file:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Run the container

Mount config.yaml as a volume in the docker run command and run the container as follows.

Linux
docker run  \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0

Replace <PATH-TO> with the path to the directory that contains config.yaml on your system.

Windows
docker run  \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.78.0


Run the application

note

When running the OTEL collector in a Docker container, your application should run in a separate container on the same network. Docker Compose handles this automatically by placing all containers, including the OTEL collector, on the same network.
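
For example, a docker-compose.yml along the following lines keeps both containers on one network; the application service and image names are hypothetical, and the exporter endpoint in your code should then point at http://otel-collector:4317 instead of localhost:

version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.78.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"
  my-aspnetcore-app:                # hypothetical service name
    image: my-aspnetcore-app:latest # hypothetical image built from your application
    depends_on:
      - otel-collector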

Run the application to generate traces.

Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.