


If your code is running inside Kubernetes, the best practice is to use our Kubernetes integrations.


This shipper uses goleveldb and goqueue to implement a persistent queue, so it backs up your logs to the local file system before sending them. Logs are queued in the buffer and sending is 100% non-blocking. A background goroutine ships the logs every 5 seconds.

Set up the Golang API client

Before you begin, you'll need: Go 1.x or higher

Add the dependency to your project

Navigate to your project's folder in the command line, and run this command to install the dependency.

go get -u

Configure the client

Use the sample in the code block below as a starting point, and replace the sample with a configuration that matches your needs.

For a complete list of options, see the configuration parameters below the code block.👇

package main

import (
	"time"

	"github.com/logzio/logzio-go"
)

func main() {
	// Replace these parameters with your configuration
	l, err := logzio.New(
		"<<LOG-SHIPPING-TOKEN>>",
		logzio.SetUrl("https://<<LISTENER-HOST>>:8071"),
		logzio.SetDrainDuration(time.Second*5),
	)
	if err != nil {
		panic(err)
	}

	// Because you're configuring directly in the code,
	// you can paste the code sample here to send a test log.
	// The code sample is below the parameters list. 👇
}


  • <<LOG-SHIPPING-TOKEN>>: Your log shipping token directs the data securely to your Log Management account. The default token is auto-populated in the examples when you're logged into the app as an Admin. Manage your tokens. Required.
  • SetUrl: Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region; the host differs depending on where your account is hosted, for example AWS US East or Azure West Europe. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. Required.
  • SetDebug: Debug flag. Default: false.
  • SetDrainDuration: Time to wait between log draining attempts. Default: 5 * time.Second.
  • SetTempDirectory: Filepath where the logs are buffered.
  • SetCheckDiskSpace: To enable SetDrainDiskThreshold, set to true. Otherwise, false. Default: true.
  • SetDrainDiskThreshold: Maximum file system usage, in percent. Used only if SetCheckDiskSpace is set to true. If the file system storage exceeds this threshold, buffering stops and new logs are dropped. Buffering resumes if used space drops below the threshold. Default: 70.0.

Code sample

msg := fmt.Sprintf("{\"%s\": \"%d\"}", "message", time.Now().UnixNano())
err = l.Send([]byte(msg))
if err != nil {
	panic(err)
}

l.Stop() // Drains the log buffer


Install the SDK

Run the following command:

go get

Configure the exporter

Add the exporter definition to your application code:

import (
	"time"

	metricsExporter ""
	controller ""
	semconv ""
	// ...
)

config := metricsExporter.Config{
	LogzioMetricsListener: "<<LISTENER-HOST>>",
	LogzioMetricsToken:    "<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>",
	RemoteTimeout:         30 * time.Second,
	PushInterval:          5 * time.Second,
}

Replace the placeholders in the code to match your specifics.

  • <<LISTENER-HOST>>: The full listener URL for your region, configured to use port 8052 for HTTP traffic or port 8053 for HTTPS traffic. For more details, see the regions page in the docs. Required.
  • <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>: The Prometheus Metrics account token. Find it under Settings > Manage accounts. Required.
  • RemoteTimeout: The timeout for requests to the remote write metrics listener endpoint. Required. Default: 30 (seconds).
  • PushInterval: The time interval for sending the metrics. Default: 5 (seconds).
  • Quantiles: The quantiles of the histograms. Optional. Default: [0.5, 0.9, 0.95, 0.99].
  • HistogramBoundaries: The histogram boundaries. Optional.

Add the exporter setup

Add the exporter setup definition to your application code:

// Use the `config` instance from last step.

cont, err := metricsExporter.InstallNewPipeline(
	config,
	controller.WithCollectPeriod(<<COLLECT_PERIOD>>*time.Second),
	controller.WithResource(
		resource.NewWithAttributes(
			semconv.SchemaURL,
			attribute.<<TYPE>>("<<LABEL_KEY>>", "<<LABEL_VALUE>>"),
		),
	),
)
if err != nil {
	return err
}

Replace the placeholders in the code to match your specifics.

  • <<COLLECT_PERIOD>>: The collect period time in seconds.
  • <<TYPE>>: The available label value types according to the <<LABEL_VALUE>>.
  • <<LABEL_KEY>>: The label key.
  • <<LABEL_VALUE>>: The label value.

Set up the Metric Instruments Creator

Create a Meter to create metric instruments:

// Use `cont` instance from last step.

ctx := context.Background()
defer func() {
	handleErr(cont.Stop(ctx))
}()

meter := cont.Meter("<<INSTRUMENTATION_NAME>>")

Replace <<INSTRUMENTATION_NAME>> with your instrumentation name.

Additionally, add the error handler:

func handleErr(err error) {
	if err != nil {
		panic(fmt.Errorf("encountered error: %v", err))
	}
}

Add metric instruments

Add a required metric instrument to your code. Below are the available metric instruments and their code definitions.

The exporter uses the simple selector's NewWithHistogramDistribution(). This means that the instruments are mapped to aggregations as shown in the table below.

  • Counter: A synchronous instrument which supports non-negative increments. Aggregation: Sum.
  • Asynchronous Counter: An asynchronous instrument which reports monotonically increasing value(s) when the instrument is being observed. Aggregation: Sum.
  • Histogram: A synchronous instrument which can be used to report arbitrary values that are likely to be statistically meaningful. It is intended for statistics such as histograms, summaries, and percentiles. Aggregation: Histogram.
  • Asynchronous Gauge: An asynchronous instrument which reports non-additive value(s) when the instrument is being observed. Aggregation: LastValue.
  • UpDownCounter: A synchronous instrument which supports increments and decrements. Aggregation: Sum.
  • Asynchronous UpDownCounter: An asynchronous instrument which reports additive value(s) when the instrument is being observed. Aggregation: Sum.


Counter

// Use `ctx` and `meter` from last steps.

// Create counter instruments
intCounter := metric.Must(meter).NewInt64Counter(
	"int_counter",
	metric.WithDescription("int_counter description"),
)
floatCounter := metric.Must(meter).NewFloat64Counter(
	"float_counter",
	metric.WithDescription("float_counter description"),
)

// Record values to the metric instruments and add labels
intCounter.Add(ctx, int64(10), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
floatCounter.Add(ctx, float64(2.5), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))

Asynchronous Counter

// Use `meter` from last steps.

// Create callbacks for your CounterObserver instruments
intCounterObserverCallback := func(_ context.Context, result metric.Int64ObserverResult) {
	result.Observe(10, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}
floatCounterObserverCallback := func(_ context.Context, result metric.Float64ObserverResult) {
	result.Observe(2.5, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}

// Create CounterObserver instruments
_ = metric.Must(meter).NewInt64CounterObserver(
	"int_counter_observer",
	intCounterObserverCallback,
	metric.WithDescription("int_counter_observer description"),
)
_ = metric.Must(meter).NewFloat64CounterObserver(
	"float_counter_observer",
	floatCounterObserverCallback,
	metric.WithDescription("float_counter_observer description"),
)


Histogram

// Use `ctx` and `meter` from last steps.

// Create Histogram instruments
intHistogram := metric.Must(meter).NewInt64Histogram(
	"int_histogram",
	metric.WithDescription("int_histogram description"),
)
floatHistogram := metric.Must(meter).NewFloat64Histogram(
	"float_histogram",
	metric.WithDescription("float_histogram description"),
)

// Record values to the metric instruments and add labels
intHistogram.Record(ctx, int64(10), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
floatHistogram.Record(ctx, float64(2.5), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))

Asynchronous Gauge

// Use `meter` from last steps.

// Create callbacks for your GaugeObserver instruments
intGaugeObserverCallback := func(_ context.Context, result metric.Int64ObserverResult) {
	result.Observe(10, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}
floatGaugeObserverCallback := func(_ context.Context, result metric.Float64ObserverResult) {
	result.Observe(2.5, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}

// Create GaugeObserver instruments
_ = metric.Must(meter).NewInt64GaugeObserver(
	"int_gauge_observer",
	intGaugeObserverCallback,
	metric.WithDescription("int_gauge_observer description"),
)
_ = metric.Must(meter).NewFloat64GaugeObserver(
	"float_gauge_observer",
	floatGaugeObserverCallback,
	metric.WithDescription("float_gauge_observer description"),
)


UpDownCounter

// Use `ctx` and `meter` from last steps.

// Create UpDownCounter instruments
intUpDownCounter := metric.Must(meter).NewInt64UpDownCounter(
	"int_up_down_counter",
	metric.WithDescription("int_up_down_counter description"),
)
floatUpDownCounter := metric.Must(meter).NewFloat64UpDownCounter(
	"float_up_down_counter",
	metric.WithDescription("float_up_down_counter description"),
)

// Record values to the metric instruments and add labels
intUpDownCounter.Add(ctx, int64(-10), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
floatUpDownCounter.Add(ctx, float64(2.5), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))

Asynchronous UpDownCounter

// Use `meter` from last steps.

// Create callbacks for your UpDownCounterObserver instruments
intUpDownCounterObserverCallback := func(_ context.Context, result metric.Int64ObserverResult) {
	result.Observe(-10, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}
floatUpDownCounterObserverCallback := func(_ context.Context, result metric.Float64ObserverResult) {
	result.Observe(2.5, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}

// Create UpDownCounterObserver instruments
_ = metric.Must(meter).NewInt64UpDownCounterObserver(
	"int_up_down_counter_observer",
	intUpDownCounterObserverCallback,
	metric.WithDescription("int_up_down_counter_observer description"),
)
_ = metric.Must(meter).NewFloat64UpDownCounterObserver(
	"float_up_down_counter_observer",
	floatUpDownCounterObserverCallback,
	metric.WithDescription("float_up_down_counter_observer description"),
)

Check for your metrics

Give your data some time to get from your system to ours, then log in to your Metrics account, and open the Metrics tab.


Deploy this integration to enable instrumentation of your Go application using OpenTelemetry.

Manual configuration

This integration includes:

  • Installing the OpenTelemetry Go instrumentation packages on your application host
  • Installing the OpenTelemetry collector with exporter
  • Running your Go application in conjunction with the OpenTelemetry instrumentation

On deployment, the Go instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your account.

Set up instrumentation for your locally hosted Go application and send traces to Logz.io.

Before you begin, you'll need:

  • A Go application without instrumentation
  • An active Logz.io account
  • Port 4318 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Download the general instrumentation packages

These packages are required to enable instrumentation for your code regardless of the type of application that you need to instrument.

To download these packages, run the following command from the application directory:

go get -u
go get -u
go get -u
go get -u
go get -u
go get -u
go get -u
go get -u
go get -u
go get -u
go get -u

We recommend sending OTLP traces using HTTP. This is why we import the otlptracehttp package.

Download the application specific instrumentation packages

Depending on the type of your application, you need to download instrumentation packages specific to it. For example, if your application is an HTTP server, you will need the corresponding HTTP instrumentation package. The full list of all available packages can be found in the OpenTelemetry contrib directory.

The example below is given for an HTTP server application:

go get -u
Add the instrumentation to the import function

Add all the packages downloaded in the previous steps to the import function of your application.

The example below is given for an HTTP server application:

import (
	// ...

	sdktrace ""
	semconv ""
)
Add the initProvider function

Add the initProvider function to the application code as follows:

func initProvider() func() {
	ctx := context.Background()

	res, err := resource.New(ctx,
		resource.WithAttributes(
			// The name you defined for your tracing service
			semconv.ServiceNameKey.String("<<YOUR-SERVICE-NAME>>"),
		),
	)
	handleErr(err, "failed to create resource")

	// Port 4318 is the collector's OTLP/HTTP receiver (see prerequisites)
	traceExporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("localhost:4318"),
		otlptracehttp.WithInsecure(),
	)
	handleErr(err, "failed to create trace exporter")

	bsp := sdktrace.NewBatchSpanProcessor(traceExporter)
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithResource(res),
		sdktrace.WithSpanProcessor(bsp),
	)
	otel.SetTracerProvider(tracerProvider)

	return func() {
		handleErr(tracerProvider.Shutdown(ctx), "failed to shutdown TracerProvider")
	}
}
Instrument the code in the main function

In the main function of your application, add the following code:

    shutdown := initProvider()
defer shutdown()

After this, you need to declare the instrumentation according to your application. The example below is given for an HTTP server application. The HTTP handler instructs the tracer to create spans on each request.

uk := attribute.Key("username")

helloHandler := func(w http.ResponseWriter, req *http.Request) {
	ctx := req.Context()
	span := trace.SpanFromContext(ctx)
	bag := baggage.FromContext(ctx)
	span.AddEvent("handling this...", trace.WithAttributes(uk.String(bag.Member("username").Value())))

	_, _ = io.WriteString(w, "Hello, world!\n")
}

otelHandler := otelhttp.NewHandler(http.HandlerFunc(helloHandler), "Hello")

http.Handle("/hello", otelHandler)
err := http.ListenAndServe(":7777", nil)
if err != nil {
	log.Fatal(err)
}

Outside of main, add the error handler:

func handleErr(err error, message string) {
	if err != nil {
		log.Fatalf("%s: %v", message, err)
	}
}
Download and configure OpenTelemetry collector

Create a dedicated directory on the host of your Go application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ""
      http:
        endpoint: ""

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  health_check:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Start the collector

Run the following command from the directory of your application file:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
Run the application

Run the application to generate traces.

Check for your traces

Give your traces some time to get from your system to ours, and then open Tracing.