Deploy this integration to enable automatic instrumentation of your Node.js application using OpenTelemetry.

Architecture overview

This integration includes:

  • Installing the OpenTelemetry Node.js instrumentation packages on your application host
  • Installing the OpenTelemetry collector with Logz.io exporter
  • Running your Node.js application in conjunction with the OpenTelemetry instrumentation

On deployment, the Node.js instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.

Set up auto-instrumentation for your locally hosted Node.js application and send traces to Logz.io

Before you begin, you’ll need:

  • A Node.js application without instrumentation
  • An active account with Logz.io
  • Port 4318 available on your host system
  • A name defined for your tracing service
Download instrumentation packages

Run the following command from the application directory:

npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/tracing
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file

In the directory of your application file, create a file named tracer.js with the following configuration:

"use strict";

const {
    BasicTracerProvider,
    ConsoleSpanExporter,
    SimpleSpanProcessor,
} = require("@opentelemetry/tracing");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
    SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");

const opentelemetry = require("@opentelemetry/sdk-node");
const {
    getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");


const exporter = new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces"
});

const provider = new BasicTracerProvider({
    resource: new Resource({
        [SemanticResourceAttributes.SERVICE_NAME]:
            "YOUR-SERVICE-NAME",
    }),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));

provider.register();
const sdk = new opentelemetry.NodeSDK({
    traceExporter: exporter,
    instrumentations: [getNodeAutoInstrumentations()],
});

sdk
    .start()
    .then(() => {
        console.log("Tracing initialized");
    })
    .catch((error) => console.log("Error initializing tracing", error));

process.on("SIGTERM", () => {
    sdk
        .shutdown()
        .then(() => console.log("Tracing terminated"))
        .catch((error) => console.log("Error terminating tracing", error))
        .finally(() => process.exit(0));
});

Download and configure OpenTelemetry collector

Create a dedicated directory on the host of your Node.js application and download the OpenTelemetry collector build that matches your host's operating system.
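
For example, on a Linux AMD64 host you could fetch the contrib distribution of the collector from the project's GitHub releases (the release tag below is a placeholder, and the repository layout and asset names vary between releases, so pick the binary that matches your operating system):

mkdir otel-collector && cd otel-collector
# <VERSION> is a placeholder for the release tag you choose.
curl -L -o otelcontribcol_linux_amd64 \
  "https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/download/<VERSION>/otelcontribcol_linux_amd64"
chmod +x otelcontribcol_linux_amd64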

After downloading the collector, create a configuration file config.yaml with the following parameters:

receivers:
  jaeger:
    protocols:
      thrift_compact:
        endpoint: "0.0.0.0:6831"
      thrift_binary:
        endpoint: "0.0.0.0:6832"
      grpc:
        endpoint: "0.0.0.0:14250"
      thrift_http:
        endpoint: "0.0.0.0:14268"
  opencensus:
    endpoint: "0.0.0.0:55678"
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
  zipkin:
    endpoint: "0.0.0.0:9411"


exporters:
  logzio:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    #region: "<<LOGZIO_ACCOUNT_REGION_CODE>>" - (Optional): Your logz.io account region code. Defaults to "us". Required only if your logz.io region is different than US East. https://docs.logz.io/user-guide/accounts/account-region.html#available-regions

  logging:

processors:
  batch:

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin, otlp]
      processors: [batch]
      exporters: [logging, logzio]

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

If your account is hosted in any region other than us, replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

If your account is hosted in us, remove the region: line from the config.

Start the collector

Run the following command from the directory of your application file:

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
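
For example, on an Intel-based Mac with the collector downloaded to ~/otel-collector (a hypothetical path), the command would be:

~/otel-collector/otelcontribcol_darwin_amd64 --config ./config.yaml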
Run the application

Run the application to generate traces:

node --require './tracer.js' <YOUR-APPLICATION-FILE-NAME>.js
Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Set up auto-instrumentation for your Node.js application using Docker and send traces to Logz.io

This integration enables you to auto-instrument your Node.js application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.

Before you begin, you’ll need:

  • A Node.js application without instrumentation
  • An active account with Logz.io
  • Ports 4317 and 4318 available on your host system
  • A name defined for your tracing service
Download instrumentation packages

Run the following command from the application directory:

npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/tracing
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file

In the directory of your application file, create a file named tracer.js with the following configuration:

"use strict";

const {
    BasicTracerProvider,
    ConsoleSpanExporter,
    SimpleSpanProcessor,
} = require("@opentelemetry/tracing");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
    SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");

const opentelemetry = require("@opentelemetry/sdk-node");
const {
    getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");


const exporter = new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces"
});

const provider = new BasicTracerProvider({
    resource: new Resource({
        [SemanticResourceAttributes.SERVICE_NAME]:
            "YOUR-SERVICE-NAME",
    }),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));

provider.register();
const sdk = new opentelemetry.NodeSDK({
    traceExporter: exporter,
    instrumentations: [getNodeAutoInstrumentations()],
});

sdk
    .start()
    .then(() => {
        console.log("Tracing initialized");
    })
    .catch((error) => console.log("Error initializing tracing", error));

process.on("SIGTERM", () => {
    sdk
        .shutdown()
        .then(() => console.log("Tracing terminated"))
        .catch((error) => console.log("Error terminating tracing", error))
        .finally(() => process.exit(0));
});

Pull the Docker image for the OpenTelemetry collector
docker pull logzio/otel-collector-traces

This integration only works with an otel-contrib image. The logzio/otel-collector-traces image is based on otel-contrib.

Run the container

When running on a Linux host, use the --network host flag so that the collector ports are reachable on the host network:

docker run \
-e LOGZIO_REGION=<<LOGZIO_ACCOUNT_REGION_CODE>> \
-e LOGZIO_TRACES_TOKEN=<<TRACING-SHIPPING-TOKEN>> \
--network host \
logzio/otel-collector-traces

When running on macOS or Windows hosts, publish the ports using the -p flag:

docker run \
-e LOGZIO_REGION=<<LOGZIO_ACCOUNT_REGION_CODE>> \
-e LOGZIO_TRACES_TOKEN=<<TRACING-SHIPPING-TOKEN>> \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 4318:4318 \
-p 55681:55681 \
logzio/otel-collector-traces

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

If your account is hosted in any region other than us, replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

If your account is hosted in us, you can omit the LOGZIO_REGION environment variable from the docker run command.

Run the application

Normally, when you run the OpenTelemetry collector in a Docker container, your application runs in separate containers on the same host. In this case, make sure that all of your application containers share the same network as the collector container. One way to achieve this is to run all containers, including the collector, from a single Docker Compose configuration: Docker Compose automatically places all containers defined in the same configuration file on the same network.
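
For illustration, a minimal Docker Compose sketch might look like the following (the service names, the build context for your application, and app.js are assumptions, not part of this integration):

version: "3"
services:
  otel-collector:
    image: logzio/otel-collector-traces
    environment:
      - LOGZIO_REGION=<<LOGZIO_ACCOUNT_REGION_CODE>>
      - LOGZIO_TRACES_TOKEN=<<TRACING-SHIPPING-TOKEN>>
  app:
    build: .    # assumes a Dockerfile for your Node.js application
    command: ["node", "--require", "./tracer.js", "app.js"]
    depends_on:
      - otel-collector

When the application and the collector share a Compose network, point the OTLP exporter url in tracer.js at the collector service name instead of localhost, for example http://otel-collector:4318/v1/traces.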

Run the application to generate traces:

node --require './tracer.js' <YOUR-APPLICATION-FILE-NAME>.js
Check Logz.io for your traces

Give your traces some time to get from your system to ours, and then open Tracing.

Overview

You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. Helm is a tool for managing packages of pre-configured Kubernetes resources, called charts.

logzio-otel-traces allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.

This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Logz.io Helm charts is logzio-helm.

Standard configuration

Deploy the Helm chart

Add the logzio-helm repo as follows:

helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Run the Helm deployment code
helm install \
--set config.exporters.logzio.region=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set config.exporters.logzio.account_token=<<TRACING-SHIPPING-TOKEN>> \
logzio-otel-traces logzio-helm/logzio-otel-traces

Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

If your account is hosted in any region other than us, replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code. This parameter is optional: it defaults to us and is required only if your Logz.io region is different from US East.

If your account is hosted in us, you can remove the --set config.exporters.logzio.region flag from the command.

Define the logzio-otel-traces DNS name

In most cases, the service name will be logzio-otel-traces.default.svc.cluster.local, where default is the namespace where you deployed the Helm chart and svc.cluster.local is your cluster domain name.

If you are not sure what your cluster domain name is, you can run the following command to look it up:

kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'

This command deploys a small pod that extracts your cluster domain name from your Kubernetes environment. You can remove the pod after it has returned the cluster domain name.
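
Once you know the service DNS name, use it as the OTLP endpoint in your tracer file instead of localhost. For example, assuming the chart was installed in the default namespace and your cluster domain is cluster.local, the exporter in tracer.js would be configured like this:

const exporter = new OTLPTraceExporter({
    url: "http://logzio-otel-traces.default.svc.cluster.local:4318/v1/traces"
});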

Download instrumentation packages

Run the following command from the application directory:

npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/tracing
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file

In the directory of your application file, create a file named tracer.js with the following configuration:

"use strict";

const {
    BasicTracerProvider,
    ConsoleSpanExporter,
    SimpleSpanProcessor,
} = require("@opentelemetry/tracing");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
    SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");

const opentelemetry = require("@opentelemetry/sdk-node");
const {
    getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");


const exporter = new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces"
});

const provider = new BasicTracerProvider({
    resource: new Resource({
        [SemanticResourceAttributes.SERVICE_NAME]:
            "YOUR-SERVICE-NAME",
    }),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));

provider.register();
const sdk = new opentelemetry.NodeSDK({
    traceExporter: exporter,
    instrumentations: [getNodeAutoInstrumentations()],
});

sdk
    .start()
    .then(() => {
        console.log("Tracing initialized");
    })
    .catch((error) => console.log("Error initializing tracing", error));

process.on("SIGTERM", () => {
    sdk
        .shutdown()
        .then(() => console.log("Tracing terminated"))
        .catch((error) => console.log("Error terminating tracing", error))
        .finally(() => process.exit(0));
});
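
Run the application

Run the application to generate traces, preloading the tracer file as in the previous setups:

node --require './tracer.js' <YOUR-APPLICATION-FILE-NAME>.js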

Check Logz.io for your traces

Give your traces some time to get from your system to ours, then open Logz.io.

Customizing Helm chart parameters

Configure customization options

You can use the following options to update the Helm chart parameters:

  • Specify parameters using the --set key=value[,key=value] argument to helm install.

  • Edit the values.yaml.

  • Override default values with your own my_values.yaml and apply it in the helm install command.

Example
helm install logzio-otel-traces logzio-helm/logzio-otel-traces -f my_values.yaml 
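
A minimal my_values.yaml could override the same keys that the helm install command above sets with --set (the values shown are placeholders):

config:
  exporters:
    logzio:
      region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
      account_token: "<<TRACING-SHIPPING-TOKEN>>"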

Uninstalling the Chart

The uninstall command removes all the Kubernetes components associated with the chart and deletes the release.

To uninstall the logzio-otel-traces deployment, use the following command:

helm uninstall logzio-otel-traces