.NET
Logs
- log4net
- NLog
- LoggerFactory
- Serilog
Before you begin, you'll need:
- log4net 2.0.8 or higher
- .NET Core SDK version 2.0 or higher
- .NET Framework version 4.6.1 or higher
Add the dependency to your project
If you're on Windows, navigate to your project's folder in the command line, and run this command to install the dependency.
Install-Package Logzio.DotNet.Log4net
If you're on a Mac or Linux machine, you can install the package using Visual Studio. Select Project > Add NuGet Packages..., and then search for Logzio.DotNet.Log4net.
Configure the appender
You can configure the appender in a configuration file or directly in the code. Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See log4net documentation 🔗 to learn more about configuration options.
For a complete list of options, see the configuration parameters below the code blocks.👇
Option 1: In a configuration file
<log4net>
<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
<!--
Required fields
-->
<!-- Your Logz.io log shipping token -->
<token><<LOG-SHIPPING-TOKEN>></token>
<!--
Optional fields (with their default values)
-->
<!-- The type field will be added to each log message, making it
easier for you to differ between different types of logs. -->
<type>log4net</type>
<!-- The URL of the Logz.io listener -->
<listenerUrl>https://<<LISTENER-HOST>>:8071</listenerUrl>
<!--Optional proxy server address:
proxyAddress = "http://your.proxy.com:port" -->
<!-- The maximum number of log lines to send in each bulk -->
<bufferSize>100</bufferSize>
<!-- The maximum time to wait for more log lines, in a hh:mm:ss.fff format -->
<bufferTimeout>00:00:05</bufferTimeout>
<!-- If connection to Logz.io API fails, how many times to retry -->
<retriesMaxAttempts>3</retriesMaxAttempts>
<!-- Time to wait between retries, in a hh:mm:ss.fff format -->
<retriesInterval>00:00:02</retriesInterval>
<!-- Set the appender to compress the message before sending it -->
<gzip>true</gzip>
<!-- Uncomment this to enable sending logs in Json format -->
<!--<parseJsonMessage>true</parseJsonMessage>-->
<!-- Enable the appender's internal debug logger (sent to the console output and trace log) -->
<debug>false</debug>
<!-- If you have custom field keys that start with a capital letter and want to keep the
capital letter in Logz.io, set this field to true. The default is false
(the first letter will be lowercased). -->
<jsonKeysCamelCase>false</jsonKeysCamelCase>
<!-- Add trace context (traceId and spanId) to each log. The default is false -->
<addTraceContext>false</addTraceContext>
<!-- Use the same static HTTP/s client for sending logs. The default is false -->
<useStaticHttpClient>false</useStaticHttpClient>
</appender>
<root>
<level value="INFO" />
<appender-ref ref="LogzioAppender" />
</root>
</log4net>
Add a reference to the configuration file in your code, as shown in the example here.
Code sample
using System.IO;
using log4net;
using log4net.Config;
using System.Reflection;
namespace dotnet_log4net
{
class Program
{
static void Main(string[] args)
{
var logger = LogManager.GetLogger(typeof(Program));
var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
// Replace "App.config" with the config file that holds your log4net configuration
XmlConfigurator.Configure(logRepository, new FileInfo("App.config"));
logger.Info("Now I don't blame him 'cause he run and hid");
logger.Info("But the meanest thing he ever did");
logger.Info("Before he left was he went and named me Sue");
LogManager.Shutdown();
}
}
}
Option 2: In the code
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("<<LISTENER-HOST>>");
// <-- Uncomment and edit this line to enable proxy routing: -->
// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
// <-- Uncomment this to enable sending logs in Json format -->
// logzioAppender.ParseJsonMessage(true);
// <-- Uncomment these lines to enable gzip compression -->
// logzioAppender.AddGzip(true);
// logzioAppender.ActivateOptions();
// logzioAppender.JsonKeysCamelCase(false);
// logzioAppender.AddTraceContext(false);
// logzioAppender.UseStaticHttpClient(false);
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;
Code sample
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;
namespace dotnet_log4net
{
class Program
{
static void Main(string[] args)
{
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logger = LogManager.GetLogger(typeof(Program));
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
// <-- Uncomment and edit this line to enable proxy routing: -->
// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
// <-- Uncomment this to enable sending logs in Json format -->
// logzioAppender.ParseJsonMessage(true);
// <-- Uncomment these lines to enable gzip compression -->
// logzioAppender.AddGzip(true);
// logzioAppender.ActivateOptions();
// logzioAppender.JsonKeysCamelCase(false)
// logzioAppender.AddTraceContext(false);
// logzioAppender.UseStaticHttpClient(false);
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Configured = true;
hierarchy.Root.Level = Level.All;
logger.Info("Now I don't blame him 'cause he run and hid");
logger.Info("But the meanest thing he ever did");
logger.Info("Before he left was he went and named me Sue");
LogManager.Shutdown();
}
}
}
Parameters
Parameter | Description | Default/Required |
---|---|---|
token | Your Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
listenerUrl | Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | https://listener.logz.io:8071 |
type | The log type, shipped as type field. Used by Logz.io for consistent parsing. Can't contain spaces. | log4net |
bufferSize | Maximum number of messages the logger will accumulate before sending them all as a bulk. | 100 |
bufferTimeout | Maximum time to wait for more log lines, as hh:mm:ss.fff. | 00:00:05 |
retriesMaxAttempts | Maximum number of attempts to connect to Logz.io. | 3 |
retriesInterval | Time to wait between retries, as hh:mm:ss.fff. | 00:00:02 |
gzip | To compress the data before shipping, true . Otherwise, false . | false |
debug | To print debug messages to the console and trace log, true . Otherwise, false . | false |
parseJsonMessage | To parse your message as JSON format, add this field and set it to true . | false |
proxyAddress | Proxy address to route your logs through. | None |
jsonKeysCamelCase | If you have custom field keys that start with a capital letter and want to keep the capital letter in Logz.io, set this field to true. | false |
addTraceContext | If you want to add trace context to each log, set this field to true. | false |
useStaticHttpClient | If you want to use the same static HTTP/s client for sending logs, set this field to true. | false |
Custom fields
You can add static keys and values to be added to all log messages.
These custom fields must be children of <appender>, as shown here.
<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
<customField>
<key>Environment</key>
<value>Production</value>
</customField>
<customField>
<key>Location</key>
<value>New Jersey B1</value>
</customField>
</appender>
Extending the appender
To change or add fields to your logs, inherit the appender and override the ExtendValues method.
public class MyAppLogzioAppender : LogzioAppender
{
protected override void ExtendValues(LoggingEvent loggingEvent, Dictionary<string, object> values)
{
values["logger"] = "MyPrefix." + values["logger"];
values["myAppClientId"] = new ClientIdProvider().Get();
}
}
Change your configuration to use your new appender name. For the example above, you'd use MyAppLogzioAppender.
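If you configure the appender in code (Option 2 above), you can register your custom appender in place of the stock one. The following is a minimal sketch, assuming the MyAppLogzioAppender class shown above; if you use the XML configuration file instead, point the appender's type attribute at your class and assembly.
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
// Minimal sketch: register the custom appender from the example above
var hierarchy = (Hierarchy)LogManager.GetRepository();
var myAppender = new MyAppLogzioAppender();
myAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
myAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
myAppender.ActivateOptions();
hierarchy.Root.AddAppender(myAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;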
Add trace context
The Trace Context feature does not support .NET Standard 1.3.
If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. This adds trace data (span id and trace id) to each log. To enable this feature, set <addTraceContext>true</addTraceContext> in your configuration file or logzioAppender.AddTraceContext(true); in your code. For example:
using log4net;
using log4net.Core;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;
namespace dotnet_log4net
{
class Program
{
static void Main(string[] args)
{
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logger = LogManager.GetLogger(typeof(Program));
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
// <-- Uncomment and edit this line to enable proxy routing: -->
// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
// <-- Uncomment this to enable sending logs in Json format -->
// logzioAppender.ParseJsonMessage(true);
// <-- Uncomment these lines to enable gzip compression -->
// logzioAppender.AddGzip(true);
// logzioAppender.ActivateOptions();
// logzioAppender.JsonKeysCamelCase(false)
logzioAppender.AddTraceContext(true);
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Configured = true;
hierarchy.Root.Level = Level.All;
logger.Info("Now I don't blame him 'cause he run and hid");
logger.Info("But the meanest thing he ever did");
logger.Info("Before he left was he went and named me Sue");
LogManager.Shutdown();
}
}
}
Serverless platforms
If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the UseStaticHttpClient flag set to true.
Azure serverless function code sample
Startup.cs
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using log4net;
using log4net.Repository.Hierarchy;
using Logzio.DotNet.Log4net;
[assembly: FunctionsStartup(typeof(LogzioLog4NetSampleApplication.Startup))]
namespace LogzioLog4NetSampleApplication
{
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("https://<<LISTENER-HOST>>:8071");
logzioAppender.ActivateOptions();
logzioAppender.UseStaticHttpClient(true);
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Configured = true;
}
}
}
FunctionApp.cs
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using log4net;
using MicrosoftLogger = Microsoft.Extensions.Logging.ILogger;
namespace LogzioLog4NetSampleApplication
{
public class TimerTriggerCSharpLog4Net
{
private static readonly ILog logger = LogManager.GetLogger(typeof(TimerTriggerCSharpLog4Net));
[FunctionName("TimerTriggerCSharpLog4Net")]
public void Run([TimerTrigger("*/30 * * * * *")]TimerInfo myTimer, MicrosoftLogger msLog)
{
msLog.LogInformation($"Log4Net C# Timer trigger function executed at: {DateTime.Now}");
logger.Info("Now I don't blame him 'cause he run and hid");
logger.Info("But the meanest thing he ever did");
logger.Info("Before he left was he went and named me Sue");
LogManager.Flush(5000);
msLog.LogInformation($"Log4Net C# Timer trigger function finished at: {DateTime.Now}");
}
}
}
Before you begin, you'll need:
- NLog 4.5.0 or higher
- .NET Core SDK version 2.0 or higher
- .NET Framework version 4.6.1 or higher
Add the dependency to your project
If you're on Windows, navigate to your project's folder in the command line, and run this command to install the dependency.
Install-Package Logzio.DotNet.NLog
If you’re on a Mac or Linux machine, you can install the package using Visual Studio. Select Project > Add NuGet Packages..., and then search for Logzio.DotNet.NLog.
Configure the appender
You can configure the appender in a configuration file or directly in the code. Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See NLog documentation 🔗 to learn more about configuration options.
For a complete list of options, see the configuration parameters below the code blocks.👇
Option 1: In a configuration file
<nlog>
<extensions>
<add assembly="Logzio.DotNet.NLog"/>
</extensions>
<targets>
<!-- parameters are shown here with their default values.
Other than the token, all of the fields are optional and can be safely omitted.
-->
<target name="logzio" type="Logzio"
token="<<LOG-SHIPPING-TOKEN>>"
logzioType="nlog"
listenerUrl="<<LISTENER-HOST>>:8071"
<!--Optional proxy server address:
proxyAddress = "http://your.proxy.com:port" -->
bufferSize="100"
bufferTimeout="00:00:05"
retriesMaxAttempts="3"
retriesInterval="00:00:02"
includeEventProperties="true"
useGzip="false"
debug="false"
jsonKeysCamelCase="false"
addTraceContext="false"
<!-- parseJsonMessage="true"-->
<!-- useStaticHttpClient="true"-->
>
<contextproperty name="host" layout="${machinename}" />
<contextproperty name="threadid" layout="${threadid}" />
</target>
</targets>
<rules>
<logger name="*" minlevel="Info" writeTo="logzio" />
</rules>
</nlog>
Option 2: In the code
var config = new LoggingConfiguration();
// Replace these parameters with your configuration
var logzioTarget = new LogzioTarget {
Name = "Logzio",
Token = "<<LOG-SHIPPING-TOKEN>>",
LogzioType = "nlog",
ListenerUrl = "<<LISTENER-HOST>>:8071",
BufferSize = 100,
BufferTimeout = TimeSpan.Parse("00:00:05"),
RetriesMaxAttempts = 3,
RetriesInterval = TimeSpan.Parse("00:00:02"),
Debug = false,
JsonKeysCamelCase = false,
AddTraceContext = false,
// ParseJsonMessage = true,
// ProxyAddress = "http://your.proxy.com:port",
// UseStaticHttpClient = false,
};
config.AddRule(LogLevel.Debug, LogLevel.Fatal, logzioTarget);
LogManager.Configuration = config;
Parameters
Parameter | Description | Default/Required |
---|---|---|
token | Your Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
listenerUrl | Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | https://listener.logz.io:8071 |
type | The log type, shipped as type field. Used by Logz.io for consistent parsing. Can't contain spaces. | nlog |
bufferSize | Maximum number of messages the logger will accumulate before sending them all as a bulk. | 100 |
bufferTimeout | Maximum time to wait for more log lines, as hh:mm:ss.fff. | 00:00:05 |
retriesMaxAttempts | Maximum number of attempts to connect to Logz.io. | 3 |
retriesInterval | Time to wait between retries, as hh:mm:ss.fff. | 00:00:02 |
debug | To print debug messages to the console and trace log, true . Otherwise, false . | false |
parseJsonMessage | To parse your message as JSON format, add this field and set it to true . | false |
proxyAddress | Proxy address to route your logs through. | None |
jsonKeysCamelCase | If you have custom field keys that start with a capital letter and want to keep the capital letter in Logz.io, set this field to true. | false |
addTraceContext | If you want to add trace context to each log, set this field to true. | false |
useStaticHttpClient | If you want to use the same static HTTP/s client for sending logs, set this field to true. | false |
Code sample
using System;
using System.IO;
using Logzio.DotNet.NLog;
using NLog;
using NLog.Config;
using NLog.Fluent;
namespace LogzioNLogSampleApplication
{
public class Program
{
static void Main(string[] args)
{
var logger = LogManager.GetCurrentClassLogger();
logger.Info()
.Message("If you'll be my bodyguard")
.Property("iCanBe", "your long lost pal")
.Property("iCanCallYou", "Betty, and Betty when you call me")
.Property("youCanCallMe", "Al")
.Write();
LogManager.Shutdown();
}
}
}
Include context properties
You can configure the target to include your own custom values when forwarding logs to Logz.io. For example:
<nlog>
<variable name="site" value="New Zealand" />
<variable name="rings" value="one" />
<target name="logzio" type="Logzio" token="<<LOG-SHIPPING-TOKEN>>">
<contextproperty name="site" layout="${site}" />
<contextproperty name="rings" layout="${rings}" />
</target>
</nlog>
Extending the appender
To change or add fields to your logs, inherit the target and override the ExtendValues method.
[Target("MyAppLogzio")]
public class MyAppLogzioTarget : LogzioTarget
{
protected override void ExtendValues(LogEventInfo logEvent, Dictionary<string, object> values)
{
values["logger"] = "MyPrefix." + values["logger"];
values["myAppClientId"] = new ClientIdProvider().Get();
}
}
Change your configuration to use your new target. For the example above, you'd use MyAppLogzio.
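If you configure NLog in code, you can register the custom target the same way as the stock one. The following is a minimal sketch, assuming the MyAppLogzioTarget class shown above; if you configure NLog in XML instead, register your assembly under <extensions> so NLog can resolve the MyAppLogzio target name declared by the [Target] attribute.
using NLog;
using NLog.Config;
// Minimal sketch: register the custom target from the example above
var config = new LoggingConfiguration();
var myTarget = new MyAppLogzioTarget
{
Name = "MyAppLogzio",
Token = "<<LOG-SHIPPING-TOKEN>>",
ListenerUrl = "https://<<LISTENER-HOST>>:8071"
};
config.AddRule(LogLevel.Debug, LogLevel.Fatal, myTarget);
LogManager.Configuration = config;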
Json Layout
When using 'JsonLayout', set the attribute name to something other than 'message'. For example:
<layout type="JsonLayout" includeAllProperties="true">
<attribute name="msg" layout="${message}" encode="false"/>
</layout>
Add trace context
The Trace Context feature does not support .NET Standard 1.3.
If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. This adds trace data (span id and trace id) to each log. To enable this feature, set addTraceContext="true" in your configuration file or AddTraceContext = true in your code. For example:
var config = new LoggingConfiguration();
// Replace these parameters with your configuration
var logzioTarget = new LogzioTarget {
Name = "Logzio",
Token = "<<LOG-SHIPPING-TOKEN>>",
LogzioType = "nlog",
ListenerUrl = "<<LISTENER-HOST>>:8071",
BufferSize = 100,
BufferTimeout = TimeSpan.Parse("00:00:05"),
RetriesMaxAttempts = 3,
RetriesInterval = TimeSpan.Parse("00:00:02"),
Debug = false,
JsonKeysCamelCase = false,
AddTraceContext = true,
// ParseJsonMessage = true,
// ProxyAddress = "http://your.proxy.com:port"
};
config.AddRule(LogLevel.Debug, LogLevel.Fatal, logzioTarget);
LogManager.Configuration = config;
Serverless platforms
If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the UseStaticHttpClient flag set to true.
Azure serverless function code sample
Startup.cs
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Logzio.DotNet.NLog;
using NLog;
using NLog.Config;
using System;
[assembly: FunctionsStartup(typeof(LogzioNLogSampleApplication.Startup))]
namespace LogzioNLogSampleApplication
{
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
var config = new LoggingConfiguration();
// Replace these parameters with your configuration
var logzioTarget = new LogzioTarget
{
Name = "Logzio",
Token = "<<LOG-SHIPPING-TOKEN>>",
LogzioType = "nlog",
ListenerUrl = "https://<<LISTENER-HOST>>:8071",
BufferSize = 100,
BufferTimeout = TimeSpan.Parse("00:00:05"),
RetriesMaxAttempts = 3,
RetriesInterval = TimeSpan.Parse("00:00:02"),
Debug = false,
JsonKeysCamelCase = false,
AddTraceContext = false,
UseStaticHttpClient = true,
// ParseJsonMessage = true,
// ProxyAddress = "http://your.proxy.com:port",
};
config.AddRule(NLog.LogLevel.Debug, NLog.LogLevel.Fatal, logzioTarget);
LogManager.Configuration = config;
}
}
}
FunctionApp.cs
using System;
using Microsoft.Azure.WebJobs;
using NLog;
using Microsoft.Extensions.Logging;
using MicrosoftLogger = Microsoft.Extensions.Logging.ILogger;
namespace LogzioNLogSampleApplication
{
public class TimerTriggerCSharpNLog
{
private static readonly Logger nLog = LogManager.GetCurrentClassLogger();
[FunctionName("TimerTriggerCSharpNLog")]
public void Run([TimerTrigger("*/30 * * * * *")]TimerInfo myTimer, MicrosoftLogger msLog)
{
msLog.LogInformation($"NLogzio C# Timer trigger function executed at: {DateTime.Now}");
nLog.WithProperty("iCanBe", "your long lost pal")
.WithProperty("iCanCallYou", "Betty, and Betty when you call me")
.WithProperty("youCanCallMe", "Al")
.Info("If you'll be my bodyguard");
// Call Flush method before function trigger finishes
LogManager.Flush(5000);
}
}
}
Before you begin, you'll need:
- log4net 2.0.8 or higher
- .NET Core SDK version 2.0 or higher
- .NET Framework version 4.6.1 or higher
Add the dependency to your project
If you're on Windows, navigate to your project's folder in the command line, and run these commands to install the dependencies.
Install-Package Logzio.DotNet.Log4net
Install-Package Microsoft.Extensions.Logging.Log4Net.AspNetCore
If you're on a Mac or Linux machine, you can install the packages using Visual Studio. Select Project > Add NuGet Packages..., and then search for Logzio.DotNet.Log4net and Microsoft.Extensions.Logging.Log4Net.AspNetCore.
Configure the appender
You can configure the appender in a configuration file or directly in the code. Use the samples in the code blocks below as a starting point, and replace them with a configuration that matches your needs. See log4net documentation 🔗 to learn more about configuration options.
For a complete list of options, see the configuration parameters below the code blocks.👇
Option 1: In a configuration file
<log4net>
<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
<!--
Required fields
-->
<!-- Your Logz.io log shipping token -->
<token><<LOG-SHIPPING-TOKEN>></token>
<!--
Optional fields (with their default values)
-->
<!-- The type field will be added to each log message, making it
easier for you to differ between different types of logs. -->
<type>log4net</type>
<!-- The URL of the Logz.io listener -->
<listenerUrl>https://<<LISTENER-HOST>>:8071</listenerUrl>
<!--Optional proxy server address:
proxyAddress = "http://your.proxy.com:port" -->
<!-- The maximum number of log lines to send in each bulk -->
<bufferSize>100</bufferSize>
<!-- The maximum time to wait for more log lines, in a hh:mm:ss.fff format -->
<bufferTimeout>00:00:05</bufferTimeout>
<!-- If connection to Logz.io API fails, how many times to retry -->
<retriesMaxAttempts>3</retriesMaxAttempts>
<!-- Time to wait between retries, in a hh:mm:ss.fff format -->
<retriesInterval>00:00:02</retriesInterval>
<!-- Set the appender to compress the message before sending it -->
<gzip>true</gzip>
<!-- Enable the appender's internal debug logger (sent to the console output and trace log) -->
<debug>false</debug>
<!-- Set to true if you want json keys in Logz.io to be in camel case. The default is false. -->
<jsonKeysCamelCase>false</jsonKeysCamelCase>
<!-- Add trace context (traceId and spanId) to each log. The default is false -->
<addTraceContext>false</addTraceContext>
<!-- Use the same static HTTP/s client for sending logs. The default is false -->
<useStaticHttpClient>false</useStaticHttpClient>
</appender>
<root>
<level value="INFO" />
<appender-ref ref="LogzioAppender" />
</root>
</log4net>
Option 2: In the code
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("<<LISTENER-HOST>>");
// Uncomment and edit this line to enable proxy routing:
// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
// Uncomment these lines to enable gzip compression
// logzioAppender.AddGzip(true);
// logzioAppender.ActivateOptions();
// logzioAppender.JsonKeysCamelCase(false);
// logzioAppender.AddTraceContext(false);
// logzioAppender.UseStaticHttpClient(false);
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;
Parameters
Parameter | Description | Default/Required |
---|---|---|
token | Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
listenerUrl | Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | https://listener.logz.io:8071 |
type | The log type, shipped as type field. Used by Logz.io for consistent parsing. Can't contain spaces. | log4net |
bufferSize | Maximum number of messages the logger will accumulate before sending them all in bulk. | 100 |
bufferTimeout | Maximum time to wait for more log lines, as hh:mm:ss.fff. | 00:00:05 |
retriesMaxAttempts | Maximum number of attempts to connect to Logz.io. | 3 |
retriesInterval | Time to wait between retries, as hh:mm:ss.fff. | 00:00:02 |
gzip | To compress the data before shipping, true . Otherwise, false . | false |
debug | To print debug messages to the console and trace log, true . Otherwise, false . | false |
parseJsonMessage | To parse your message as JSON format, add this field and set it to true . | false |
proxyAddress | Proxy address to route your logs through. | None |
jsonKeysCamelCase | If you have custom field keys that start with a capital letter and want to keep the capital letter in Logz.io, set this field to true. | false |
addTraceContext | If you want to add trace context to each log, set this field to true. | false |
useStaticHttpClient | If you want to use the same static HTTP/s client for sending logs, set this field to true. | false |
Code sample
ASP.NET Core
Update the Configure method in your Startup.cs file to add Log4Net to the logger factory, as shown in the code below.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILoggerFactory loggerFactory)
{
...
loggerFactory.AddLog4Net();
...
}
In the Controller, add a data member and a constructor, as shown in the code below.
private readonly ILoggerFactory _loggerFactory;
public ExampleController(ILoggerFactory loggerFactory, ...)
{
_loggerFactory = loggerFactory;
...
}
In the Controller methods:
[Route("<PUT_HERE_YOUR_ROUTE>")]
public ActionResult ExampleMethod()
{
var logger = _loggerFactory.CreateLogger<ExampleController>();
var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
// Replace "App.config" with the config file that holds your log4net configuration
XmlConfigurator.Configure(logRepository, new FileInfo("log4net.config"));
logger.LogInformation("Hello");
logger.LogInformation("Is it me you looking for?");
LogManager.Shutdown();
return Ok();
}
.NET Core Desktop Application
using System.IO;
using System.Reflection;
using log4net;
using log4net.Config;
using Microsoft.Extensions.Logging;
namespace LoggerFactoryAppender
{
class Program
{
static void Main(string[] args)
{
ILoggerFactory loggerFactory = new LoggerFactory();
loggerFactory.AddLog4Net();
var logger = loggerFactory.CreateLogger<Program>();
var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
// Replace "App.config" with the config file that holds your log4net configuration
XmlConfigurator.Configure(logRepository, new FileInfo("log4net.config"));
logger.LogInformation("Hello");
logger.LogInformation("Is it me you looking for?");
LogManager.Shutdown();
}
}
}
Custom fields
You can add static keys and values to all log messages.
These custom fields must be children of <appender>, as shown in the code below.
<appender name="LogzioAppender" type="Logzio.DotNet.Log4net.LogzioAppender, Logzio.DotNet.Log4net">
<customField>
<key>Environment</key>
<value>Production</value>
</customField>
<customField>
<key>Location</key>
<value>New Jersey B1</value>
</customField>
</appender>
Extending the appender
To change or add fields to your logs, inherit the appender and override the ExtendValues method.
public class MyAppLogzioAppender : LogzioAppender
{
protected override void ExtendValues(LoggingEvent loggingEvent, Dictionary<string, object> values)
{
values["logger"] = "MyPrefix." + values["logger"];
values["myAppClientId"] = new ClientIdProvider().Get();
}
}
Change your configuration to use your new appender name. For the example above, you'd use MyAppLogzioAppender.
Add trace context
The Trace Context feature does not support .NET Standard 1.3.
If you’re sending traces with OpenTelemetry instrumentation (auto or manual), you can correlate your logs with the trace context. This adds trace data (span id and trace id) to each log. To enable this feature, set <addTraceContext>true</addTraceContext> in your configuration file or logzioAppender.AddTraceContext(true); in your code. For example:
var hierarchy = (Hierarchy)LogManager.GetRepository();
var logzioAppender = new LogzioAppender();
logzioAppender.AddToken("<<LOG-SHIPPING-TOKEN>>");
logzioAppender.AddListenerUrl("<<LISTENER-HOST>>");
// Uncomment and edit this line to enable proxy routing:
// logzioAppender.AddProxyAddress("http://your.proxy.com:port");
// Uncomment these lines to enable gzip compression
// logzioAppender.AddGzip(true);
// logzioAppender.ActivateOptions();
// logzioAppender.JsonKeysCamelCase(false);
logzioAppender.AddTraceContext(true);
logzioAppender.ActivateOptions();
hierarchy.Root.AddAppender(logzioAppender);
hierarchy.Root.Level = Level.All;
hierarchy.Configured = true;
Serverless platforms
If you’re using a serverless function, you’ll need to call the appender's flush method at the end of the function run to make sure the logs are sent before the function finishes its execution. You’ll also need to create a static appender in the Startup.cs file so each invocation will use the same appender. The appender should have the UseStaticHttpClient flag set to true.
Azure serverless function code sample
Startup.cs
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using log4net;
using log4net.Config;
using System.IO;
using System.Reflection;
using System;
[assembly: FunctionsStartup(typeof(LogzioLoggerFactorySampleApplication.Startup))]
namespace LogzioLoggerFactorySampleApplication
{
public class Startup : FunctionsStartup
{
public static ILoggerFactory LoggerFactory { get; set; }
public override void Configure(IFunctionsHostBuilder builder)
{
var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
string functionAppDirectory = Environment.GetEnvironmentVariable("AzureWebJobsScriptRoot");
// Configure log4net here
XmlConfigurator.Configure(logRepository, new FileInfo(Path.Combine(functionAppDirectory, "log4net.config")));
var loggerFactory = new LoggerFactory();
loggerFactory.AddLog4Net(Path.Combine(functionAppDirectory, "log4net.config")); // Use the log4net.config in the function app root directory
LoggerFactory = loggerFactory;
}
}
}
FunctionApp.cs
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using log4net;
using System;
namespace LogzioLoggerFactorySampleApplication
{
public class TimerTriggerCSharpLoggerFactory
{
private readonly ILogger _logger = Startup.LoggerFactory.CreateLogger<TimerTriggerCSharpLoggerFactory>();
[FunctionName("TimerTriggerCSharpLoggerFactory")]
public void Run([TimerTrigger("*/30 * * * * *")] TimerInfo myTimer)
{
_logger.LogInformation($"LoggerFactory C# Timer trigger function executed at: {DateTime.Now}");
_logger.LogInformation("Hello");
_logger.LogInformation("Is it me you looking for?");
LogManager.Flush(5000);
_logger.LogInformation($"LoggerFactory C# Timer trigger function finished at: {DateTime.Now}");
}
}
}
This integration is based on the Serilog.Sinks.Logz.Io repository. Refer to that repository for further usage and settings information.
Before you begin, you'll need:
- .NET Core SDK version 2.0 or higher
- .NET Framework version 4.6.1 or higher
Install the Logz.io Serilog sink
Install Serilog.Sinks.Logz.Io using NuGet or by running the following command in the Package Manager Console:
PM> Install-Package Serilog.Sinks.Logz.Io
Configure the sink
There are 2 ways to use Serilog:
- Using a configuration file
- In the code
Using a configuration file
Create an appsettings.json file and copy the following configuration:
{
"Serilog": {
"MinimumLevel": "Warning",
"WriteTo": [
{
"Name": "LogzIoDurableHttp",
"Args": {
"requestUri": "https://<<LISTENER-HOST>>:8071/?type=<<TYPE>>&token=<<LOG-SHIPPING-TOKEN>>"
}
}
]
}
}
Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071.
Replace <<TYPE>> with the type that you want to assign to your logs. You will use this value to identify these logs in Logz.io.
Add the following code to use the configuration and create logs:
- Using Serilog.Settings.Configuration and Microsoft.Extensions.Configuration.Json packages
using System.IO;
using System.Threading;
using Microsoft.Extensions.Configuration;
using Serilog;
namespace Example
{
class Program
{
static void Main(string[] args)
{
var configuration = new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("appsettings.json")
.Build();
var logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.CreateLogger();
logger.Information("Hello. Is it me you looking for?");
Thread.Sleep(5000); // gives the log enough time to be sent to Logz.io
}
}
}
In the code
using System.Threading;
using Serilog;
using Serilog.Sinks.Logz.Io;
namespace Example
{
class Program
{
static void Main(string[] args)
{
ILogger logger = new LoggerConfiguration()
.WriteTo.LogzIoDurableHttp(
"https://<<LISTENER-HOST>>:8071/?type=<<TYPE>>&token=<<LOG-SHIPPING-TOKEN>>",
logzioTextFormatterOptions: new LogzioTextFormatterOptions
{
BoostProperties = true,
LowercaseLevel = true,
IncludeMessageTemplate = true,
FieldNaming = LogzIoTextFormatterFieldNaming.CamelCase,
EventSizeLimitBytes = 261120,
})
.MinimumLevel.Verbose()
.CreateLogger();
logger.Information("Hello. Is it me you looking for?");
Thread.Sleep(5000); // gives the log enough time to be sent to Logz.io
}
}
}
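Because Serilog is a structured logger, any properties you attach to an event are shipped to Logz.io as searchable fields. The following is a brief sketch using the same sink configuration as above; the enricher and the property names (Environment, OrderId, ElapsedMs) are illustrative examples, not something the sink requires.
using System.Threading;
using Serilog;
var logger = new LoggerConfiguration()
.WriteTo.LogzIoDurableHttp("https://<<LISTENER-HOST>>:8071/?type=<<TYPE>>&token=<<LOG-SHIPPING-TOKEN>>")
.Enrich.WithProperty("Environment", "Production") // added to every event
.MinimumLevel.Information()
.CreateLogger();
// Message template properties become individual fields in Logz.io
logger.Information("Processed order {OrderId} in {ElapsedMs} ms", 1234, 56);
Thread.Sleep(5000); // gives the log enough time to be sent to Logz.io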
Serverless platforms
If you’re using a serverless function, you’ll need to create a static logger in the Startup.cs file so each invocation will use the same logger. In the Serilog integration, use the WriteTo.LogzIo() method instead of WriteTo.LogzIoDurableHttp(), as it uses in-memory buffering, which is best practice for serverless functions.
Azure serverless function code sample
Startup.cs
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Serilog;
using Serilog.Sinks.Logz.Io;
[assembly: FunctionsStartup(typeof(LogzioSerilogSampleApplication.Startup))]
namespace LogzioSerilogSampleApplication
{
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
var logzioLogger = new LoggerConfiguration()
.WriteTo.LogzIo("<<LOG-SHIPPING-TOKEN>>", "serilog", dataCenterSubDomain: "listener", useHttps: true)
.CreateLogger();
// Assign the logger to the static Log class
Log.Logger = logzioLogger;
}
}
}
FunctionApp.cs
using System;
using Microsoft.Azure.WebJobs;
using Serilog;
using Microsoft.Extensions.Logging;
using MicrosoftLogger = Microsoft.Extensions.Logging.ILogger;
using Serilogger = Serilog.ILogger;
using System.Threading;
namespace LogzioSerilogSampleApplication
{
public class TimerTriggerCSharpSeriLogzio
{
private static readonly Serilogger logzioLogger = Log.Logger;
[FunctionName("TimerTriggerCSharpSeriLogzio")]
public void Run([TimerTrigger("*/30 * * * * *")]TimerInfo myTimer, MicrosoftLogger msLog)
{
msLog.LogInformation($"Serilog C# Timer trigger function executed at: {DateTime.Now}");
logzioLogger.Information("Hello. Is it me you're looking for?");
msLog.LogInformation($"Serilog C# Timer trigger function finished at: {DateTime.Now}");
}
}
}
Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071.
Replace <<TYPE>> with the type that you want to assign to your logs. You will use this value to identify these logs in Logz.io.
Metrics
- Kubernetes
- SDK
Helm is a tool for managing packages of pre-configured Kubernetes resources using Charts. This integration allows you to collect and ship diagnostic metrics of your .NET application in Kubernetes to Logz.io, using dotnet-monitor and OpenTelemetry. logzio-dotnet-monitor runs as a sidecar in the same pod as the .NET application.
Sending metrics from nodes with taints
If you want to ship metrics from any of the nodes that have a taint, make sure that the taint key values are listed in your daemonset/deployment configuration as follows:
tolerations:
- key:
operator:
value:
effect:
To determine if a node uses taints as well as to display the taint keys, run:
kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
:::note
You need to use a Helm client with version v3.9.0 or above.
:::
Standard configuration
Select the namespace
This integration will be deployed in the namespace you set in values.yaml. The default namespace for this integration is logzio-dotnet-monitor.
To select a different namespace, run:
kubectl create namespace <<NAMESPACE>>
- Replace <<NAMESPACE>> with the name of your namespace.
Add the logzio-helm repo
helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Run the Helm deployment code
helm install -n <<NAMESPACE>> \
--set secrets.logzioURL='<<LISTENER-HOST>>:8053' \
--set secrets.logzioToken='<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>' \
--set-file dotnetAppContainers='<<DOTNET_APP_CONTAINERS_FILE>>' \
logzio-dotnet-monitor logzio-helm/logzio-dotnet-monitor
- Replace <<NAMESPACE>> with the namespace you selected for this integration. The default value is default.
- Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. The command above uses port 8053, the Logz.io listener port for metrics.
- Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with the token of the Metrics account you want to ship to.
- Replace <<DOTNET_APP_CONTAINERS_FILE>> with your .NET application containers file. Make sure your main .NET application container has the following volumeMount:
volumeMounts:
- mountPath: /tmp
name: diagnostics
Check Logz.io for your metrics
Give your metrics some time to get from your system to ours, then open Logz.io. You can search for your metrics in Logz.io by searching {job="dotnet-monitor-collector"}
Customizing Helm chart parameters
Configure customization options
You can use the following options to update the Helm chart parameters:
- Specify parameters using the --set key=value[,key=value] or --set-file key=value[,key=value] arguments to helm install.
- Edit the values.yaml file.
- Override the default values with your own my_values.yaml and apply it in the helm install command.
Customization parameters
Parameter | Description | Default |
---|---|---|
nameOverride | Overrides the Chart name for resources. | "" |
fullnameOverride | Overrides the full name of the resources. | "" |
apiVersions.deployment | Deployment API version. | apps/v1 |
apiVersions.configmap | Configmap API version. | v1 |
apiVersions.secret | Secret API version. | v1 |
namespace | Chart's namespace. | logzio-dotnet-monitor |
replicaCount | The number of replicated pods, the deployment creates. | 1 |
labels | Pod's labels. | {} |
annotations | Pod's annotations. | {} |
customSpecs | Custom spec fields to add to the deployment. | {} |
dotnetAppContainers | List of your .NET application containers to add to the pod. | [] |
logzioDotnetMonitor.name | The name of the container that collects and ships diagnostic metrics of your .NET application to Logz.io (sidecar) | logzio-dotnet-monitor |
logzioDotnetMonitor.image.name | Name of the image that is going to run in logzioDotnetMonitor.name container | logzio/logzio-dotnet-monitor |
logzioDotnetMonitor.image.tag | The tag of the image that is going to run in logzioDotnetMonitor.name container | latest |
logzioDotnetMonitor.ports | List of ports the logzioDotnetMonitor.name container exposes | 52325 |
tolerations | List of tolerations to applied to the pod. | [] |
customVolumes | List of custom volumes to add to deployment. | [] |
customResources | Custom resources to add to helm chart deployment (make sure to separate each resource with --- ). | {} |
secrets.logzioURL | Secret with your Logz.io listener url. | https://listener.logz.io:8053 |
secrets.logzioToken | Secret with your Logz.io metrics shipping token. | "" |
configMap.dotnetMonitor | The dotnet-monitor configuration. | See values.yaml. |
configMap.opentelemetry | The opentelemetry configuration. | See values.yaml. |
- To get additional information about dotnet-monitor configuration, click here.
- To see well-known providers and their counters, click here.
Uninstalling the Chart
The uninstall command removes all the Kubernetes components associated with the chart and deletes the release.
To uninstall the dotnet-monitor-collector
deployment, use the following command:
helm uninstall dotnet-monitor-collector
For troubleshooting this solution, see our .NET with helm troubleshooting guide.
You can send custom metrics from your .NET Core application using Logzio.App.Metrics. Logzio.App.Metrics is an open-source and cross-platform .NET library used to record metrics within an application and forward the data to Logz.io.
These instructions show you how to:
- Create a basic custom metrics export configuration with a hardcoded Logz.io exporter
- Create a basic custom metrics export configuration with a Logz.io exporter defined by a configuration file
- Add advanced settings to the basic custom metrics export configuration
Send custom metrics to Logz.io with a hardcoded Logz.io exporter
Before you begin, you'll need:
- An application in .NET Core 3.1 or higher
- An active Logz.io account
Install the App.Metrics.Logzio package
Install the App.Metrics.Logzio package from the Package Manager Console:
Install-Package Logzio.App.Metrics
If you prefer to install the library manually, download the latest version from the NuGet Gallery.
Create MetricsBuilder
To create MetricsBuilder, copy and paste the following code into the function of the code that you need to export metrics from:
var metrics = new MetricsBuilder()
.Report.ToLogzioHttp("<<LISTENER-HOST>>:<<PORT>>", "<<METRICS-SHIPPING-TOKEN>>")
.Build();
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. For HTTPS communication, use port 8053. For HTTP communication, use port 8052.
Replace <<METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to.
Look up your Metrics token.
Create Scheduler
To create the Scheduler, copy and paste the following code into the same function of the code as the MetricsBuilder:
var scheduler = new AppMetricsTaskScheduler(
TimeSpan.FromSeconds(15),
async () => { await Task.WhenAll(metrics.ReportRunner.RunAllAsync()); });
scheduler.Start();
Add required metrics to your code
You can send the following metrics from your code: Apdex, Counter, Gauge, Histogram, Meter, and Timer (each is described below).
You must have at least one of these metrics in your code to use Logzio.App.Metrics. For example, to add a counter metric to your code, copy and paste the following code block into the same function of the code as the MetricsBuilder and Scheduler.
var counter = new CounterOptions {Name = "my_counter", Tags = new MetricTags("test", "my_test")};
metrics.Measure.Counter.Increment(counter);
In the example above, the metric has a name ("my_counter"), a tag key ("test"), and a tag value ("my_test"). These parameters are used to query data from this metric in your Logz.io dashboard.
Apdex
Apdex (Application Performance Index) allows you to monitor end-user satisfaction. For more information on this metric, refer to App Metrics documentation.
Counter
Counters are one of the most basic supported metrics types: They enable you to track how many times something has happened. For more information on this metric, refer to App Metrics documentation.
Gauge
A Gauge is an action that returns an instantaneous measurement for a value that arbitrarily increases and decreases (for example, CPU usage). For more information on this metric, refer to App Metrics documentation.
Histogram
Histograms measure the statistical distribution of a set of values. For more information on this metric, refer to App Metrics documentation.
Meter
A Meter measures the rate at which an event occurs, along with the total count of the occurrences. For more information on this metric, refer to App Metrics documentation.
Timer
A Timer is a combination of a histogram and a meter, which enables you to measure the duration of a type of event, the rate of its occurrence, and provide duration statistics. For more information on this metric, refer to App Metrics documentation.
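The other metric types are recorded through the same metrics.Measure API used for the counter above. The following is a minimal sketch (the metric names and measured values are illustrative only); it assumes the relevant App.Metrics namespaces (App.Metrics, App.Metrics.Apdex, App.Metrics.Gauge, App.Metrics.Histogram, App.Metrics.Meter, App.Metrics.Timer) are imported.
// Gauge: report an instantaneous value
var gauge = new GaugeOptions { Name = "my_gauge", Tags = new MetricTags("test", "my_test") };
metrics.Measure.Gauge.SetValue(gauge, () => Environment.WorkingSet);
// Histogram: record a value in a distribution
var histogram = new HistogramOptions { Name = "my_histogram" };
metrics.Measure.Histogram.Update(histogram, 42);
// Meter: mark the occurrence of an event
var meter = new MeterOptions { Name = "my_meter" };
metrics.Measure.Meter.Mark(meter);
// Timer: measure the duration and rate of a block of code
var timer = new TimerOptions { Name = "my_timer" };
using (metrics.Measure.Timer.Time(timer))
{
// work to be timed
}
// Apdex: score an operation against a response-time threshold
var apdex = new ApdexOptions { Name = "my_apdex" };
using (metrics.Measure.Apdex.Track(apdex))
{
// work to be scored
}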
Run your application
Run your application to start sending metrics to Logz.io.
Check Logz.io for your events
Give your events some time to get from your system to ours, and then open the Metrics dashboard.
Filter the metrics by labels
Once the metrics are in Logz.io, you can query the required metrics using labels. Each metric has the following labels:
App Metrics parameter name | Description | Logz.io parameter name |
---|---|---|
Name | The name of the metric. Required for each metric. | Metric name if not stated otherwise |
MeasurementUnit | The unit you use to measure. By default it is None . | unit |
Context | The context the metric belongs to. By default it is Application . | context |
Tags | Pairs of key and value of the metric. It is not required to have tags for a metric. | Tags keys |
Some of the metrics have custom labels, as described below.
Meter
App Metrics label name | Logz.io label name |
---|---|
RateUnit | rate_unit |
App Metrics parameter name | Logz.io parameter name |
---|---|
Count | [[your_meter_name]]_count |
One Min Rate | [[your_meter_name]]_one_min_rate |
Five Min Rate | [[your_meter_name]]_five_min_rate |
Fifteen Min Rate | [[your_meter_name]]_fifteen_min_rate |
Mean Rate | [[your_meter_name]]_mean_rate |
Replace [[your_meter_name]] with the name that you assigned to the meter metric.
Histogram
App Metrics label name | Logz.io label name |
---|---|
Last User Value | last_user_value |
Max User Value | max_user_value |
Min User Value | min_user_value |
App Metrics parameter name | Logz.io parameter name |
---|---|
Count | [[your_histogram_name]]_count |
Sum | [[your_histogram_name]]_sum |
Last Value | [[your_histogram_name]]_lastValue |
Max | [[your_histogram_name]]_max |
Mean | [[your_histogram_name]]_mean |
Median | [[your_histogram_name]]_median |
Min | [[your_histogram_name]]_min |
Percentile 75 | [[your_histogram_name]]_percentile75 |
Percentile 95 | [[your_histogram_name]]_percentile95 |
Percentile 98 | [[your_histogram_name]]_percentile98 |
Percentile 99 | [[your_histogram_name]]_percentile99 |
Percentile 999 | [[your_histogram_name]]_percentile999 |
Sample Size | [[your_histogram_name]]_sample_size |
Std Dev | [[your_histogram_name]]_std_dev |
Replace [[your_histogram_name]] with the name that you assigned to the histogram metric.
Timer
App Metrics label name | Logz.io label name |
---|---|
Duration Unit | duration_unit |
Rate Unit | rate_unit |
App Metrics parameter name | Logz.io parameter name |
---|---|
Count | [[your_timer_name]]_count |
Histogram Active Session | [[your_timer_name]]_histogram_active_session |
Histogram Sum | [[your_timer_name]]_histogram_sum |
Histogram Last Value | [[your_timer_name]]_histogram_lastValue |
Histogram Max | [[your_timer_name]]_histogram_max |
Histogram Median | [[your_timer_name]]_histogram_median |
Histogram Percentile 75 | [[your_timer_name]]_histogram_percentile75 |
Histogram Percentile 95 | [[your_timer_name]]_histogram_percentile95 |
Histogram Percentile 98 | [[your_timer_name]]_histogram_percentile98 |
Histogram Percentile 99 | [[your_timer_name]]_histogram_percentile99 |
Histogram Percentile 999 | [[your_timer_name]]_histogram_percentile999 |
Histogram Sample Size | [[your_timer_name]]_histogram_sample_size |
Histogram Std Dev | [[your_timer_name]]_histogram_std_dev |
Rate One Min Rate | [[your_timer_name]]_rate_one_min_rate |
Rate Five Min Rate | [[your_timer_name]]_rate_five_min_rate |
Rate Fifteen Min Rate | [[your_timer_name]]_rate_fifteen_min_rate |
Rate Mean Rate | [[your_timer_name]]_rate_mean_rate |
Replace [[your_timer_name]] with the name that you assigned to the timer metric.
Apdex
App Metrics parameter name | Logz.io parameter name |
---|---|
Sample Size | [[your_apdex_name]]_sample_size |
Score | [[your_apdex_name]]_score |
Frustrating | [[your_apdex_name]]_frustrating |
Satisfied | [[your_apdex_name]]_satisfied |
Tolerating | [[your_apdex_name]]_tolerating |
Replace [[your_apdex_name]] with the name that you assigned to the apdex metric.
For troubleshooting this solution, see our .NET core troubleshooting guide.
Send custom metrics to Logz.io with a Logz.io exporter defined by a config file
Before you begin, you'll need:
- An application in .NET Core 3.1 or higher
- An active Logz.io account
Install the App.Metrics.Logzio package
Install the App.Metrics.Logzio package from the Package Manager Console:
Install-Package Logzio.App.Metrics
If you prefer to install the library manually, download the latest version from NuGet Gallery.
Create MetricsBuilder
To create MetricsBuilder, copy and paste the following code into the function of the code that you need to export metrics from:
var metrics = new MetricsBuilder()
.Report.ToLogzioHttp("<<path_to_the_config_file>>")
.Build();
Add the following code to the configuration file:
<?xml version="1.0" encoding="utf-8"?>
<Configuration>
<LogzioConnection>
<Endpoint> <<LISTENER-HOST>> </Endpoint>
<Token> <<METRICS-SHIPPING-TOKEN>> </Token>
</LogzioConnection>
</Configuration>
Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe. For HTTPS communication, use port 8053. For HTTP communication, use port 8052.
Replace <<METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to.
Look up your Metrics token.
Create Scheduler
To create a Scheduler, copy and paste the following code into the same function of the code as the MetricsBuilder:
var scheduler = new AppMetricsTaskScheduler(
TimeSpan.FromSeconds(15),
async () => { await Task.WhenAll(metrics.ReportRunner.RunAllAsync()); });
scheduler.Start();
Add the required metrics to your code
You can send the following metrics from your code: Apdex, Counter, Gauge, Histogram, Meter, and Timer (each is described below).
You must have at least one of these metrics in your code to use Logzio.App.Metrics. For example, to add a counter metric to your code, copy and paste the following code block into the same function of the code as the MetricsBuilder and Scheduler:
var counter = new CounterOptions {Name = "my_counter", Tags = new MetricTags("test", "my_test")};
metrics.Measure.Counter.Increment(counter);
In the example above, the metric has a name ("my_counter"), a tag key ("test") and a tag value ("my_test"). These parameters are used to query data from this metric in your Logz.io dashboard.
Apdex
Apdex (Application Performance Index) allows you to monitor end-user satisfaction. For more information on this metric, refer to App Metrics documentation.
Counter
Counters are one of the most basic supported metrics types: They enable you to track how many times something has happened. For more information on this metric, refer to App Metrics documentation.
Gauge
A Gauge is an action that returns an instantaneous measurement for a value that arbitrarily increases and decreases (for example, CPU usage). For more information on this metric, refer to App Metrics documentation.
Histogram
Histograms measure the statistical distribution of a set of values. For more information on this metric, refer to App Metrics documentation.
Meter
A Meter measures the rate at which an event occurs, along with the total count of the occurrences. For more information on this metric, refer to App Metrics documentation.
Timer
A Timer is a combination of a histogram and a meter, which enables you to measure the duration of a type of event, the rate of its occurrence, and provide duration statistics. For more information on this metric, refer to App Metrics documentation.
Run your application
Run your application to start sending metrics to Logz.io.
Check Logz.io for your events
Give your events some time to get from your system to ours, and then open the Metrics dashboard.
Filter the metrics by labels
Once the metrics are in Logz.io, you can query the required metrics using labels. Each metric has the following labels:
App Metrics parameter name | Description | Logz.io parameter name |
---|---|---|
Name | The name of the metric. Required for each metric. | Metric name if not stated otherwise |
MeasurementUnit | The unit you use to measure. By default it is None . | unit |
Context | The context the metric belongs to. By default it is Application . | context |
Tags | Pairs of key and value of the metric. It is not required to have tags for a metric. | Tags keys |
Some of the metrics have custom labels as described below.
Meter
App Metrics label name | Logz.io label name |
---|---|
RateUnit | rate_unit |
App Metrics parameter name | Logz.io parameter name |
---|---|
Count | [[your_meter_name]]_count |
One Min Rate | [[your_meter_name]]_one_min_rate |
Five Min Rate | [[your_meter_name]]_five_min_rate |
Fifteen Min Rate | [[your_meter_name]]_fifteen_min_rate |
Mean Rate | [[your_meter_name]]_mean_rate |
Replace [[your_meter_name]] with the name that you assigned to the meter metric.
Histogram
App Metrics label name | Logz.io label name |
---|---|
Last User Value | last_user_value |
Max User Value | max_user_value |
Min User Value | min_user_value |
App Metrics parameter name | Logz.io parameter name |
---|---|
Count | [[your_histogram_name]]_count |
Sum | [[your_histogram_name]]_sum |
Last Value | [[your_histogram_name]]_lastValue |
Max | [[your_histogram_name]]_max |
Mean | [[your_histogram_name]]_mean |
Median | [[your_histogram_name]]_median |
Min | [[your_histogram_name]]_min |
Percentile 75 | [[your_histogram_name]]_percentile75 |
Percentile 95 | [[your_histogram_name]]_percentile95 |
Percentile 98 | [[your_histogram_name]]_percentile98 |
Percentile 99 | [[your_histogram_name]]_percentile99 |
Percentile 999 | [[your_histogram_name]]_percentile999 |
Sample Size | [[your_histogram_name]]_sample_size |
Std Dev | [[your_histogram_name]]_std_dev |
Replace [[your_histogram_name]] with the name that you assigned to the histogram metric.
Timer
App Metrics label name | Logz.io label name |
---|---|
Duration Unit | duration_unit |
Rate Unit | rate_unit |
App Metrics parameter name | Logz.io parameter name |
---|---|
Count | [[your_timer_name]]_count |
Histogram Active Session | [[your_timer_name]]_histogram_active_session |
Histogram Sum | [[your_timer_name]]_histogram_sum |
Histogram Last Value | [[your_timer_name]]_histogram_lastValue |
Histogram Max | [[your_timer_name]]_histogram_max |
Histogram Median | [[your_timer_name]]_histogram_median |
Histogram Percentile 75 | [[your_timer_name]]_histogram_percentile75 |
Histogram Percentile 95 | [[your_timer_name]]_histogram_percentile95 |
Histogram Percentile 98 | [[your_timer_name]]_histogram_percentile98 |
Histogram Percentile 99 | [[your_timer_name]]_histogram_percentile99 |
Histogram Percentile 999 | [[your_timer_name]]_histogram_percentile999 |
Histogram Sample Size | [[your_timer_name]]_histogram_sample_size |
Histogram Std Dev | [[your_timer_name]]_histogram_std_dev |
Rate One Min Rate | [[your_timer_name]]_rate_one_min_rate |
Rate Five Min Rate | [[your_timer_name]]_rate_five_min_rate |
Rate Fifteen Min Rate | [[your_timer_name]]_rate_fifteen_min_rate |
Rate Mean Rate | [[your_timer_name]]_rate_mean_rate |
Replace [[your_timer_name]] with the name that you assigned to the timer metric.
Apdex
App Metrics parameter name | Logz.io parameter name |
---|---|
Sample Size | [[your_apdex_name]]_sample_size |
Score | [[your_apdex_name]]_score |
Frustrating | [[your_apdex_name]]_frustrating |
Satisfied | [[your_apdex_name]]_satisfied |
Tolerating | [[your_apdex_name]]_tolerating |
Replace [[your_apdex_name]] with the name that you assigned to the apdex metric.
For troubleshooting this solution, see our .NET Core troubleshooting guide.
Export using ToLogzioHttp exporter
You can configure MetricsBuilder to use ToLogzioHttp exporter, which allows you to export metrics via HTTP using additional export settings. To enable this exporter, add the following code block to define the MetricsBuilder:
var metrics = new MetricsBuilder()
.Report.ToLogzioHttp(options =>
{
options.Logzio.EndpointUri = new Uri("https://<<LISTENER-HOST>>:<<PORT>>");
options.Logzio.Token = "<<METRICS-SHIPPING-TOKEN>>";
options.FlushInterval = TimeSpan.FromSeconds(15);
options.Filter = new MetricsFilter().WhereType(MetricType.Counter);
options.HttpPolicy.BackoffPeriod = TimeSpan.FromSeconds(30);
options.HttpPolicy.FailuresBeforeBackoff = 5;
options.HttpPolicy.Timeout = TimeSpan.FromSeconds(10);
})
.Build();
- Replace <<LISTENER-HOST>> with the host for your region. For example, listener.logz.io if your account is hosted on AWS US East, or listener-nl.logz.io if hosted on Azure West Europe.
- Replace <<PORT>> with the listener port for the protocol you use: 8053 for HTTPS or 8052 for HTTP.
- Replace <<METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to. Look up your Metrics token.
- FlushInterval is the delay, in seconds, between metric reports.
- Filter is used to filter which metrics this reporter sends.
- HttpPolicy.BackoffPeriod is the TimeSpan to back off when metrics fail to report to the metrics ingress endpoint.
- HttpPolicy.FailuresBeforeBackoff is the number of failures allowed before backing off.
- HttpPolicy.Timeout is the HTTP timeout duration when attempting to report metrics to the metrics ingress endpoint.
.NET Core runtime metrics
The runtime metrics are additional parameters that will be sent from your code. These parameters include:
- Garbage collection frequencies and timings by generation/type, pause timings and GC CPU consumption ratio.
- Heap size by generation.
- Bytes allocated by small/large object heap.
- JIT compilations and JIT CPU consumption ratio.
- Thread pool size, scheduling delays and reasons for growing/shrinking.
- Lock contention.
To enable collection of these metrics with default settings, add the following code block after the MetricsBuilder:
// metrics is the MetricsBuilder
IDisposable collector = DotNetRuntimeStatsBuilder.Default(metrics).StartCollecting();
To enable collection of these metrics with custom settings, add the following code block after the MetricsBuilder:
IDisposable collector = DotNetRuntimeStatsBuilder
.Customize()
.WithContentionStats()
.WithJitStats()
.WithThreadPoolSchedulingStats()
.WithThreadPoolStats()
.WithGcStats()
.StartCollecting(metrics); // metrics is the MetricsBuilder
Data collected from these metrics is found in Logz.io under the Context labels process and dotnet.
Get current snapshot
The current snapshot creates a preview of the metrics in Logz.io format. To enable this option, add the following code block to the MetricsBuilder:
var metrics = new MetricsBuilder()
.OutputMetrics.AsLogzioCompressedProtobuf()
.Build();
var snapshot = metrics.Snapshot.Get();
using (var stream = new MemoryStream())
{
await metrics.DefaultOutputMetricsFormatter.WriteAsync(stream, snapshot);
// Your code here...
}
For troubleshooting this solution, see our .NET Core troubleshooting guide.
Traces
- ASP.NET Core
- ASP.NET or .NET
- Troubleshooting
Deploy this integration to enable automatic instrumentation of your ASP.NET Core application using OpenTelemetry.
Architecture overview
This integration includes:
- Installing the OpenTelemetry ASP.NET Core instrumentation packages on your application host
- Installing the OpenTelemetry collector with Logz.io exporter
- Running your ASP.NET Core application in conjunction with the OpenTelemetry instrumentation
On deployment, the ASP.NET Core instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.
Set up auto-instrumentation for your locally hosted ASP.NET Core application and send traces to Logz.io
Before you begin, you'll need:
- An ASP.NET Core application without instrumentation
- An active account with Logz.io
- Port 4317 available on your host system
- A name defined for your tracing service. You will need it to identify the traces in Logz.io.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Download instrumentation packages
Run the following commands from the application directory:
dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting
Enable instrumentation in the code
Add the following configuration to the beginning of the Startup.cs file:
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
Add the following configuration to the Startup class:
public void ConfigureServices(IServiceCollection services)
{
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
services.AddOpenTelemetryTracing((builder) => builder
.AddAspNetCoreInstrumentation()
.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("http://localhost:4317");
})
);
}
Download and configure OpenTelemetry collector
Create a dedicated directory on the host of your ASP.NET Core application and download the OpenTelemetry collector that is relevant to the operating system of your host.
After downloading the collector, create a configuration file config.yaml
with the following parameters:
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
exporters:
logzio/traces:
account_token: "<<TRACING-SHIPPING-TOKEN>>"
region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
logging:
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [tail_sampling, batch]
exporters: [logging, logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Start the collector
Run the following command:
<path/to>/otelcol-contrib --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
Run the application
Run the application to generate traces.
Check Logz.io for your traces
Give your traces some time to get from your system to ours, and then open Tracing.
Set up auto-instrumentation for your ASP.NET Core application using Docker and send traces to Logz.io
This integration enables you to auto-instrument your ASP.NET Core application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.
Before you begin, you'll need:
- An ASP.NET Core application without instrumentation
- An active account with Logz.io
- Port 4317 available on your host system
- A name defined for your tracing service
Download instrumentation packages
Run the following commands from the application directory:
dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
dotnet add package OpenTelemetry.Instrumentation.AspNetCore
dotnet add package OpenTelemetry.Extensions.Hosting
Enable instrumentation in the code
Add the following configuration to the beginning of the Startup.cs file:
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
Add the following configuration to the Startup class:
public void ConfigureServices(IServiceCollection services)
{
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
services.AddOpenTelemetryTracing((builder) => builder
.AddAspNetCoreInstrumentation()
.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-app"))
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("http://localhost:4317");
})
);
}
Pull the Docker image for the OpenTelemetry collector
docker pull otel/opentelemetry-collector-contrib:0.78.0
Create a configuration file
Create a file config.yaml
with the following content:
receivers:
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
exporters:
logzio/traces:
account_token: "<<TRACING-SHIPPING-TOKEN>>"
region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
logging:
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [tail_sampling, batch]
exporters: [logging, logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Tail Sampling
The tail_sampling processor defines the decision to sample a trace after all the spans in a request have completed. By default, this configuration collects all traces that have a span that completed with an error, all traces that are slower than 1000 ms, and 10% of the remaining traces.
You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.
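For example, to always keep traces from a specific service you could append a string_attribute policy to the policies list above (my-app is a placeholder for your service name):
{
  name: policy-my-service,
  type: string_attribute,
  string_attribute: {key: service.name, values: [my-app]}
}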
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for the span latency - all traces slower than the threshold value will be included. | 1000 |
sampling_percentage | Sampling percentage for the probabilistic policy. | 10 |
If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:
- Under the exporters list, add:
logzio/traces:
account_token: <<TRACING-SHIPPING-TOKEN>>
region: <<LOGZIO_ACCOUNT_REGION_CODE>>
- Under the service list, add:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [tail_sampling, batch]
exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
An example configuration file looks as follows:
receivers:
otlp:
protocols:
grpc:
http:
exporters:
logzio/traces:
account_token: "<<TRACING-SHIPPING-TOKEN>>"
region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
extensions:
pprof:
endpoint: :1777
zpages:
endpoint: :55679
health_check:
service:
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp]
processors: [tail_sampling, batch]
exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
The tail_sampling processor defines the decision to sample a trace after all the spans in a request have completed. By default, this configuration collects all traces that have a span that completed with an error, all traces that are slower than 1000 ms, and 10% of the remaining traces.
You can add more policy configurations to the processor. For more on this, refer to OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for the span latency - all traces slower than the threshold value will be included. | 1000 |
sampling_percentage | Sampling percentage for the probabilistic policy. | 10 |
Run the container
Mount config.yaml as a volume in the docker run command and run it as follows.
Linux
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.78.0
Replace <PATH-TO> with the path to the config.yaml file on your system.
Windows
docker run \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.78.0
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.
Run the application
Normally, when you run the OTEL collector in a Docker container, your application runs in separate containers on the same host. In this case, you need to make sure that all your application containers share the same network as the OTEL collector container. One way to achieve this is to run all containers, including the OTEL collector, with a Docker Compose configuration, since Docker Compose places all services it starts on a shared default network; see the sketch below.
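A minimal Docker Compose sketch under these assumptions: the service names and the my-aspnet-app image are placeholders, config.yaml is the collector configuration created above, and your application's OTLP exporter should target http://otel-collector:4317 instead of localhost when running this way.
version: "3.8"
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.78.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
  my-aspnet-app:
    image: my-aspnet-app:latest   # placeholder for your application image
    depends_on:
      - otel-collector
    environment:
      # only effective if your code reads the endpoint from configuration rather than hardcoding it
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317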
Run the application to generate traces.
Check Logz.io for your traces
Give your traces some time to get from your system to ours, and then open Tracing.
OpenTelemetry instrumentation
For troubleshooting the OpenTelemetry instrumentation, see our OpenTelemetry troubleshooting guide.