
GO

tip

If your code runs inside Kubernetes, the best practice is to use our Kubernetes integrations.

Logs

This shipper uses goleveldb and goqueue as the persistent storage implementation of its queue, so logs are backed up to the local file system before they are sent. Logging is 100% non-blocking: logs are queued in the buffer, and a background Go routine ships them every 5 seconds.

Set up the Logz.io Golang API client

Before you begin, you'll need: Go 1.x or higher

Add the dependency to your project

Navigate to your project's folder in the command line, and run this command to install the dependency.

go get -u github.com/logzio/logzio-go

Configure the client

Use the sample in the code block below as a starting point, and replace the sample with a configuration that matches your needs.

For a complete list of options, see the configuration parameters below the code block.👇

package main

import (
    "fmt"
    "os"
    "time"

    "github.com/logzio/logzio-go"
)

func main() {
    // Replace these parameters with your configuration
    l, err := logzio.New(
        "<<LOG-SHIPPING-TOKEN>>",
        logzio.SetDebug(os.Stderr),
        logzio.SetUrl("https://<<LISTENER-HOST>>:8071"),
        logzio.SetDrainDuration(time.Second*5),
        logzio.SetTempDirectory("myQueue"),
        logzio.SetDrainDiskThreshold(99),
    )
    if err != nil {
        panic(err)
    }

    // Because you're configuring directly in the code,
    // you can paste the code sample here to send a test log.
    //
    // The code sample is below the parameters list. 👇
}

Parameters

Parameter | Description | Required/Default
<<LOG-SHIPPING-TOKEN>> | Your Logz.io log shipping token directs the data securely to your Logz.io Log Management account. The default token is auto-populated in the examples when you're logged into the Logz.io app as an Admin. Manage your tokens. | Required
SetUrl | Listener URL and port. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | Required (default: https://listener.logz.io:8071)
SetDebug | Debug flag. | false
SetDrainDuration | Time to wait between log draining attempts. | 5 * time.Second
SetTempDirectory | File path where the logs are buffered. | --
SetCheckDiskSpace | To enable SetDrainDiskThreshold, set to true. Otherwise, false. | true
SetDrainDiskThreshold | Maximum file system usage, in percent. Used only if SetCheckDiskSpace is set to true. If file system usage exceeds this threshold, buffering stops and new logs are dropped. Buffering resumes when usage drops below the threshold. | 70.0
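
For example, a configuration that sets the disk-related options from the table might look like the following sketch. The values shown (drain interval, buffer directory name, disk threshold) are illustrative, not recommendations.

l, err := logzio.New(
    "<<LOG-SHIPPING-TOKEN>>",
    logzio.SetUrl("https://<<LISTENER-HOST>>:8071"),
    logzio.SetDrainDuration(time.Second*10),  // drain the queue every 10 seconds
    logzio.SetTempDirectory("logzio-buffer"), // illustrative buffer directory on the local file system
    logzio.SetCheckDiskSpace(true),           // enable the disk usage check
    logzio.SetDrainDiskThreshold(90),         // stop buffering above 90% file system usage
)
if err != nil {
    panic(err)
}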

Code sample

msg := fmt.Sprintf("{\"%s\": \"%d\"}", "message", time.Now().UnixNano())
err = l.Send([]byte(msg))
if err != nil {
    panic(err)
}

l.Stop() // Drains the log buffer
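
If you prefer to build log lines from a struct rather than with fmt.Sprintf, the following sketch uses the standard encoding/json package. It assumes the l client from the configuration step; the logEntry type and its fields are illustrative.

// Requires "encoding/json" in your imports.
// logEntry is an illustrative structure for a JSON log line.
type logEntry struct {
    Message string `json:"message"`
    Level   string `json:"level"`
}

entry := logEntry{Message: "processing finished", Level: "info"}
data, err := json.Marshal(entry)
if err != nil {
    panic(err)
}
if err := l.Send(data); err != nil { // Send queues the log; shipping happens in the background
    panic(err)
}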

Metrics

Install the SDK

Run the following command:

go get github.com/logzio/go-metrics-sdk

Configure the exporter

Add the exporter definition to your application code:

import (
    "time"

    metricsExporter "github.com/logzio/go-metrics-sdk"
    controller "go.opentelemetry.io/otel/sdk/metric/controller/basic"
    semconv "go.opentelemetry.io/otel/semconv/v1.7.0"
    // ...
)

config := metricsExporter.Config{
    LogzioMetricsListener: "<<LISTENER-HOST>>",
    LogzioMetricsToken:    "<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>",
    RemoteTimeout:         30 * time.Second,
    PushInterval:          5 * time.Second,
}

Replace the placeholders in the code to match your specifics.

Parameter | Description | Required | Default
<<LISTENER-HOST>> | The full Logz.io listener URL for your region, configured to use port 8052 for HTTP traffic or port 8053 for HTTPS traffic (example: https://listener.logz.io:8053). For more details, see the regions page in the Logz.io docs. | Required | https://listener.logz.io:8053
<<PROMETHEUS-METRICS-SHIPPING-TOKEN>> | The Logz.io Prometheus Metrics account token. Find it under Settings > Manage accounts. Look up your Metrics account token. | Required | -
RemoteTimeout | The timeout for requests to the Logz.io metrics listener remote write endpoint. | Required | 30 (seconds)
PushInterval | The time interval for sending the metrics to Logz.io. | Required | 10 (seconds)
Quantiles | The quantiles of the histograms. | Optional | [0.5, 0.9, 0.95, 0.99]
HistogramBoundaries | The histogram boundaries. | Optional | -
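
As an illustration, a configuration with the optional fields populated might look like the sketch below. It assumes Quantiles and HistogramBoundaries accept float64 slices; the listener URL and boundary values are examples only.

config := metricsExporter.Config{
    LogzioMetricsListener: "https://listener.logz.io:8053",        // example listener; use your region's host
    LogzioMetricsToken:    "<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>",
    RemoteTimeout:         30 * time.Second,
    PushInterval:          10 * time.Second,
    Quantiles:             []float64{0.5, 0.9, 0.95, 0.99}, // optional: histogram quantiles
    HistogramBoundaries:   []float64{1, 5, 10, 50, 100},    // optional: illustrative bucket boundaries
}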

Add the exporter setup

Add the exporter setup definition to your application code:

// Use the `config` instance from last step.

cont, err := metricsExporter.InstallNewPipeline(
    config,
    controller.WithCollectPeriod(<<COLLECT_PERIOD>>*time.Second),
    controller.WithResource(
        resource.NewWithAttributes(
            semconv.SchemaURL,
            attribute.<<TYPE>>("<<LABEL_KEY>>", "<<LABEL_VALUE>>"),
        ),
    ),
)
if err != nil {
    return err
}

Replace the placeholders in the code to match your specifics.

Parameter | Description
<<COLLECT_PERIOD>> | The collect period time, in seconds.
<<TYPE>> | The attribute type matching the type of <<LABEL_VALUE>> (for example, String).
<<LABEL_KEY>> | The label key.
<<LABEL_VALUE>> | The label value.
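
For example, with the placeholders filled in, the pipeline setup might look like the following sketch. It assumes a 10-second collect period and a string-typed label; the label key and value are illustrative.

cont, err := metricsExporter.InstallNewPipeline(
    config,
    controller.WithCollectPeriod(10*time.Second),    // collect every 10 seconds
    controller.WithResource(
        resource.NewWithAttributes(
            semconv.SchemaURL,
            attribute.String("environment", "staging"), // illustrative label key and value
        ),
    ),
)
if err != nil {
    return err
}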

Set up the Metric Instruments Creator

Create a Meter to create metric instruments:

// Use the `cont` instance from the last step.

ctx := context.Background()
defer func() {
    handleErr(cont.Stop(ctx))
}()

meter := cont.Meter("<<INSTRUMENTATION_NAME>>")

// Define handleErr at package level (outside the function that holds the code above).
func handleErr(err error) {
    if err != nil {
        panic(fmt.Errorf("encountered error: %v", err))
    }
}

Replace <<INSTRUMENTATION_NAME>> with your instrumentation name.

Add metric instruments

Add the required metric instruments to your code. The available metric instruments and their code definitions are listed below.

The exporter uses the simple selector's NewWithHistogramDistribution(). This means that the instruments are mapped to aggregations as shown in the table below.

Instrument | Behavior | Aggregation
Counter | A synchronous instrument that supports non-negative increments. | Sum
Asynchronous Counter | An asynchronous instrument that reports monotonically increasing value(s) when the instrument is observed. | Sum
Histogram | A synchronous instrument that records arbitrary values that are likely to be statistically meaningful. It is intended for statistics such as histograms, summaries, and percentiles. | Histogram
Asynchronous Gauge | An asynchronous instrument that reports non-additive value(s) when the instrument is observed. | LastValue
UpDownCounter | A synchronous instrument that supports increments and decrements. | Sum
Asynchronous UpDownCounter | An asynchronous instrument that reports additive value(s) when the instrument is observed. | Sum

Counter

// Use `ctx` and `meter` from last steps.

// Create counter instruments
intCounter := metric.Must(meter).NewInt64Counter(
    "go_metrics.int_counter",
    metric.WithDescription("int_counter description"),
)
floatCounter := metric.Must(meter).NewFloat64Counter(
    "go_metrics.float_counter",
    metric.WithDescription("float_counter description"),
)

// Record values to the metric instruments and add labels
intCounter.Add(ctx, int64(10), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
floatCounter.Add(ctx, float64(2.5), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))

Asynchronous Counter

// Use `meter` from last steps.

// Create callbacks for your CounterObserver instruments
intCounterObserverCallback := func(_ context.Context, result metric.Int64ObserverResult) {
    result.Observe(10, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}
floatCounterObserverCallback := func(_ context.Context, result metric.Float64ObserverResult) {
    result.Observe(2.5, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}

// Create CounterObserver instruments
_ = metric.Must(meter).NewInt64CounterObserver(
    "go_metrics.int_counter_observer",
    intCounterObserverCallback,
    metric.WithDescription("int_counter_observer description"),
)
_ = metric.Must(meter).NewFloat64CounterObserver(
    "go_metrics.float_counter_observer",
    floatCounterObserverCallback,
    metric.WithDescription("float_counter_observer description"),
)

Histogram

// Use `ctx` and `meter` from last steps.

// Create Histogram instruments
intHistogram := metric.Must(meter).NewInt64Histogram(
    "go_metrics.int_histogram",
    metric.WithDescription("int_histogram description"),
)
floatHistogram := metric.Must(meter).NewFloat64Histogram(
    "go_metrics.float_histogram",
    metric.WithDescription("float_histogram description"),
)

// Record values to the metric instruments and add labels
intHistogram.Record(ctx, int64(10), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
floatHistogram.Record(ctx, float64(2.5), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))

Asynchronous Gauge

// Use `meter` from last steps.

// Create callbacks for your GaugeObserver instruments
intGaugeObserverCallback := func(_ context.Context, result metric.Int64ObserverResult) {
    result.Observe(10, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}
floatGaugeObserverCallback := func(_ context.Context, result metric.Float64ObserverResult) {
    result.Observe(2.5, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}

// Create GaugeObserver instruments
_ = metric.Must(meter).NewInt64GaugeObserver(
    "go_metrics.int_gauge_observer",
    intGaugeObserverCallback,
    metric.WithDescription("int_gauge_observer description"),
)
_ = metric.Must(meter).NewFloat64GaugeObserver(
    "go_metrics.float_gauge_observer",
    floatGaugeObserverCallback,
    metric.WithDescription("float_gauge_observer description"),
)

UpDownCounter

// Use `ctx` and `meter` from last steps.

// Create UpDownCounter instruments
intUpDownCounter := metric.Must(meter).NewInt64UpDownCounter(
    "go_metrics.int_up_down_counter",
    metric.WithDescription("int_up_down_counter description"),
)
floatUpDownCounter := metric.Must(meter).NewFloat64UpDownCounter(
    "go_metrics.float_up_down_counter",
    metric.WithDescription("float_up_down_counter description"),
)

// Record values to the metric instruments and add labels
intUpDownCounter.Add(ctx, int64(-10), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
floatUpDownCounter.Add(ctx, float64(2.5), attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))

Asynchronous UpDownCounter

// Use `meter` from last steps.

// Create callbacks for your UpDownCounterObserver instruments
intUpDownCounterObserverCallback := func(_ context.Context, result metric.Int64ObserverResult) {
    result.Observe(-10, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}
floatUpDownCounterObserverCallback := func(_ context.Context, result metric.Float64ObserverResult) {
    result.Observe(2.5, attribute.String("<<LABEL_KEY>>", "<<LABEL_VALUE>>"))
}

// Create UpDownCounterObserver instruments
_ = metric.Must(meter).NewInt64UpDownCounterObserver(
    "go_metrics.int_up_down_counter_observer",
    intUpDownCounterObserverCallback,
    metric.WithDescription("int_up_down_counter_observer description"),
)
_ = metric.Must(meter).NewFloat64UpDownCounterObserver(
    "go_metrics.float_up_down_counter_observer",
    floatUpDownCounterObserverCallback,
    metric.WithDescription("float_up_down_counter_observer description"),
)

Check Logz.io for your metrics

Give your data some time to get from your system to ours, then log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.

Install the pre-built dashboard to enhance the observability of your metrics. To view the metrics on the main dashboard, log in to your Logz.io Metrics account and open the Logz.io Metrics tab.

Traces

Auto Instrumentation

Deploy this integration to enable instrumentation of your Go application using OpenTelemetry.

Before you begin, you'll need:

  • A Go application without instrumentation
  • An active Logz.io account
  • Port 4318 available on your host system
  • A name defined for your tracing service. You will need it to identify the traces in Logz.io.

Install dependencies

go get -u go.opentelemetry.io/otel
go get -u go.opentelemetry.io/otel/propagation
go get -u go.opentelemetry.io/otel/sdk/resource
go get -u go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
go get -u go.opentelemetry.io/otel/sdk/trace
go get -u go.opentelemetry.io/otel/semconv/v1.26.0

Set up the Tracer Provider

Create a new file otel.go and place the code below in it:

note

Change <<SERVICE-NAME>> to your own service name.

package main

import (
    "context"
    "log"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
    "go.opentelemetry.io/otel/propagation"
    "go.opentelemetry.io/otel/sdk/resource"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

func newTraceProvider() (*sdktrace.TracerProvider, error) {
    // Ensure default SDK resources and the required service name are set.
    r, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("<<SERVICE-NAME>>"),
        ),
    )
    if err != nil {
        return nil, err
    }

    // Export spans over OTLP/HTTP to the local OpenTelemetry Collector.
    exp, err := otlptracehttp.New(context.Background(),
        otlptracehttp.WithEndpoint("localhost:4318"),
        otlptracehttp.WithInsecure())
    if err != nil {
        return nil, err
    }

    return sdktrace.NewTracerProvider(
        sdktrace.WithBatcher(exp),
        sdktrace.WithResource(r),
    ), nil
}

// setupOTelSDK bootstraps the OpenTelemetry tracing pipeline.
func setupOTelSDK(ctx context.Context) (shutdown func(context.Context) error, err error) {
    // Set up the trace provider.
    traceProvider, err := newTraceProvider()
    if err != nil {
        return nil, err
    }

    otel.SetTracerProvider(traceProvider)
    otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
        propagation.TraceContext{}, propagation.Baggage{}))

    // Return a shutdown function
    shutdown = func(ctx context.Context) error {
        err := traceProvider.Shutdown(ctx)
        if err != nil {
            log.Printf("Error during tracer provider shutdown: %v", err)
        }
        return err
    }

    return shutdown, nil
}

Call Tracer Provider from main function

Make sure the application sets up the instrumentation by calling the setupOTelSDK() function, created in the previous step, from your main function.

// Set up OpenTelemetry.
otelShutdown, err := setupOTelSDK(ctx)
if err != nil {
    return
}
// Handle shutdown properly so nothing leaks.
defer func() {
    err = errors.Join(err, otelShutdown(context.Background()))
}()
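
Putting it together, a main function that installs the SDK and emits a test span might look like the sketch below. The tracer name "example-tracer" and span name "example-operation" are placeholders, and error handling is kept minimal.

// Requires "context", "log", and "go.opentelemetry.io/otel" in your imports.
func main() {
    ctx := context.Background()

    // Set up OpenTelemetry.
    otelShutdown, err := setupOTelSDK(ctx)
    if err != nil {
        log.Fatalf("failed to set up OpenTelemetry: %v", err)
    }
    // Flush and shut down the tracer provider on exit.
    defer func() {
        if shutdownErr := otelShutdown(context.Background()); shutdownErr != nil {
            log.Printf("error during shutdown: %v", shutdownErr)
        }
    }()

    // Create a test span through the global tracer provider installed by setupOTelSDK.
    tracer := otel.Tracer("example-tracer")
    _, span := tracer.Start(ctx, "example-operation")
    // ... application work goes here ...
    span.End()
}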

Set up the OpenTelemetry Collector

Create a dedicated directory on the host of your Go application and download the OpenTelemetry collector that is relevant to the operating system of your host.

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Start OpenTelemetry Collector

Run the following command from the directory of your application file:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml

  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.

Check Logz.io for your traces

Run the application after building it with the new instrumentation. Give your traces some time to get from your system to ours, and then open Tracing.

Manual Instrumentation

If you're using specific libraries and want precise control over instrumentation, you can opt to instrument your code manually.
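
As a starting point, manual instrumentation means creating spans yourself around the operations you care about. The following sketch assumes the tracer provider set up earlier in this guide; the function, tracer name, span name, and attribute are illustrative.

import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
)

// handleOrder is an illustrative operation wrapped in a manually created span.
func handleOrder(ctx context.Context, orderID string) error {
    // Start a span for this operation; it becomes a child of any span already in ctx.
    ctx, span := otel.Tracer("order-service").Start(ctx, "handleOrder")
    defer span.End()

    // Attach attributes that help you filter traces in Logz.io.
    span.SetAttributes(attribute.String("order.id", orderID))

    // ... business logic using ctx ...
    return nil
}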