Node.js

Logs

logzio-nodejs collects log messages in an internal buffer and ships them asynchronously once the buffer reaches 100 messages or 10 seconds have passed, whichever comes first. On a connection reset or timeout, it retries after 2 seconds, doubling the interval on each attempt, up to 3 retries. Because sending is asynchronous, it doesn't block other messages. By default, errors are logged to the console, but you can customize this with a callback function.

Configure logzio-nodejs

Install the dependency:

npm install logzio-nodejs

Use the sample configuration and edit it according to your needs:

// Replace these parameters with your configuration
var logger = require('logzio-nodejs').createLogger({
  token: '<<LOG-SHIPPING-TOKEN>>',
  protocol: 'https',
  host: '<<LISTENER-HOST>>',
  port: '8071',
  type: 'YourLogType'
});

Parameters

| Parameter | Description | Required/Default |
| --- | --- | --- |
| token | Your Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
| protocol | http or https. The value of this parameter affects the default of the port parameter. | http |
| host | Use the listener URL specific to the region where your Logz.io account is hosted. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | listener.logz.io |
| port | Destination port. The default depends on the protocol parameter: 8070 (HTTP) or 8071 (HTTPS). | 8070 / 8071 |
| type | Declare your log type for parsing purposes. Logz.io applies default parsing pipelines to built-in log types. If you declare another type, contact support for assistance with custom parsing. Can't contain spaces. | nodejs |
| sendIntervalMs | Time to wait between retry attempts, in milliseconds. | 2000 (2 seconds) |
| bufferSize | Maximum number of messages the logger accumulates before sending them all as a bulk. | 100 |
| numberOfRetries | Maximum number of retry attempts. | 3 |
| debug | Set to true to print debug messages to the console. | false |
| callback | A callback function called when the logger encounters an unrecoverable error. The function signature is function(err), where err is the Error object. | -- |
| timeout | Read/write/connection timeout, in milliseconds. | -- |
| extraFields | Adds your custom fields to each log, in JSON format: extraFields: { field_1: "val_1", field_2: "val_2", ... } | -- |
| setUserAgent | Set to false to send logs without the user-agent field in the request header. | true |
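
As an illustration, here is a hypothetical logger that combines several of the optional parameters above (a sketch; the field values and callback body are illustrative, and the token and host placeholders still need replacing):

```javascript
var logger = require('logzio-nodejs').createLogger({
  token: '<<LOG-SHIPPING-TOKEN>>',
  protocol: 'https',
  host: '<<LISTENER-HOST>>',
  port: '8071',
  type: 'nodejs',
  sendIntervalMs: 5000,  // wait 5 seconds between retry attempts
  bufferSize: 50,        // flush after 50 messages instead of the default 100
  extraFields: { service: 'billing', env: 'prod' },  // added to every log line
  callback: function (err) {
    // Invoked on unrecoverable errors instead of the default console logging
    console.error('Could not ship logs to Logz.io:', err);
  }
});
```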

Code example:

You can send log lines as a raw string or an object. For consistent and reliable parsing, we recommend sending them as objects:

var obj = {
  message: 'Some log message',
  param1: 'val1',
  param2: 'val2',
  tags: ['tag1']
};
logger.log(obj);

To send a raw string:

logger.log('This is a log message');

For serverless environments, such as AWS Lambda, Azure Functions, or Google Cloud Functions, include this line at the end of the run:

logger.sendAndClose();
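
In an AWS Lambda handler, for example, that call typically goes right before the handler returns, so the buffer is flushed before the runtime freezes the execution environment (a sketch; the handler shape and log fields are illustrative):

```javascript
var logger = require('logzio-nodejs').createLogger({
  token: '<<LOG-SHIPPING-TOKEN>>',
  protocol: 'https',
  host: '<<LISTENER-HOST>>',
  port: '8071',
  type: 'lambda-example' // illustrative log type
});

exports.handler = async (event) => {
  logger.log({ message: 'Handler invoked' });
  // ... your function logic ...
  // Flush any buffered messages before the invocation ends
  logger.sendAndClose();
  return { statusCode: 200 };
};
```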

Metrics

These examples use the OpenTelemetry JS SDK and are based on the OpenTelemetry exporter collector proto.

Before you begin, you'll need:

Node.js 14 or higher.

note

We recommend using this integration with the Logz.io Metrics backend, though it is compatible with any backend that supports the prometheusremotewrite format.

Install the SDK package

npm install logzio-nodejs-metrics-sdk@0.5.0

Initialize the exporter and meter provider

const { MeterProvider, PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics-base');
const sdk = require('logzio-nodejs-metrics-sdk');

const collectorOptions = {
  url: 'https://<<LISTENER-HOST>>:8053',
  headers: {
    'Authorization': 'Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>'
  }
};

// Initialize the exporter
const metricExporter = new sdk.RemoteWriteExporter(collectorOptions);

// Initialize the meter provider
const meter = new MeterProvider({
  readers: [
    new PeriodicExportingMetricReader({
      exporter: metricExporter,
      exportIntervalMillis: 1000
    })
  ],
}).getMeter('example-exporter');

Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:

  • Replace <<LISTENER-HOST>> with the Logz.io Listener URL for your region, configured to use port 8052 for http traffic, or port 8053 for https traffic.
  • Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to. Look up your Metrics token.

Add required metrics to the code

You can use the following metrics:

| Name | Behavior |
| --- | --- |
| Counter | Metric value can only increase or reset to 0; calculated per counter.Add(context, value, labels) request. |
| UpDownCounter | Metric value can arbitrarily increment or decrement; calculated per updowncounter.Add(context, value, labels) request. |
| Histogram | Metric values are captured by the histogram.Record(context, value, labels) function and calculated per request. |

For details on these metrics, refer to the OpenTelemetry documentation.

Insert the following code after initialization to add a metric:

Counter

const requestCounter = meter.createCounter('Counter', {
  description: 'Example of a Counter',
});
// Define some labels for your metrics
const labels = { environment: 'prod' };
// Record some value
requestCounter.add(1, labels);
// In Logz.io Metrics you will see the following metric:
// Counter_total{environment: 'prod'} 1.0

UpDownCounter

const upDownCounter = meter.createUpDownCounter('UpDownCounter', {
  description: 'Example of an UpDownCounter',
});
// Define some labels for your metrics
const labels = { environment: 'prod' };
// Record some values
upDownCounter.add(5, labels);
upDownCounter.add(-1, labels);
// In Logz.io you will see the following metric:
// UpDownCounter{environment: 'prod'} 4.0

Histogram

const histogram = meter.createHistogram('test_histogram', {
  description: 'Example of a histogram',
});
// Define some labels for your metrics
const labels = { environment: 'prod' };
// Record some values
histogram.record(30, labels);
histogram.record(20, labels);
// In Logz.io you will see the following metrics:
// test_histogram_sum{environment: 'prod'} 50.0
// test_histogram_count{environment: 'prod'} 2.0
// test_histogram_avg{environment: 'prod'} 25.0

View your metrics

Run your application to start sending metrics to Logz.io.

Allow some time for data ingestion, then check your Metrics dashboard.

Install the pre-built dashboard for enhanced observability.

To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.

Traces

Auto-instrument Node.js and send Traces to Logz.io

Before you begin, you'll need:

  • A Node.js application without instrumentation.
  • An active Logz.io account.
  • Port 4318 available on your host system.
  • A name for your tracing service to identify traces in Logz.io.
note

This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.

Download instrumentation packages

npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/sdk-trace-base
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node

Create a tracer file

In your application's directory, create a file named tracer.js with the following configuration.

Important

Replace <<YOUR-SERVICE-NAME>> with your service name.

"use strict";

const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/sdk-trace-base");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");

const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");


const exporter = new OTLPTraceExporter({
url: "http://localhost:4318/v1/traces"
});

const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"<<YOUR-SERVICE-NAME>>",
}),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));

provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: exporter,
instrumentations: [getNodeAutoInstrumentations()],
});

sdk
.start()

console.log("Tracing initialized");


process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
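
Auto-instrumentation covers supported libraries automatically; if you also want spans around your own code, the OpenTelemetry API lets you create them manually. A sketch (the tracer name, span name, and attribute are illustrative):

```javascript
const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('manual-example');

function processOrder(orderId) {
  // The new span becomes a child of whatever span is currently active
  return tracer.startActiveSpan('process-order', (span) => {
    span.setAttribute('order.id', orderId);
    // ... do the actual work here ...
    span.end();
  });
}
```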

Download and configure the OpenTelemetry collector

Create a directory on your Node.js host, download the appropriate OpenTelemetry collector for your OS, and create a config.yaml file with the following parameters:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
Replace <<TRACING-SHIPPING-TOKEN>> with the token of the account you want to ship to.

Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code.

Start the collector

<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
  • Replace <path/to> with the path to the directory where you downloaded the collector.
  • Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.

Run the application

Run this command to generate traces:

node --require './tracer.js' <YOUR-APPLICATION-FILE-NAME>.js

View your traces

Give your traces some time to ingest, and then open your Tracing account.

Troubleshooting

If traces are not being sent despite instrumentation, follow these steps:

Collector not installed

The OpenTelemetry collector may not be installed on your system.

Suggested remedy

Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts.

Collector path not configured

The collector may not have the correct endpoint configured for the receiver.

Suggested remedy

  1. Verify the configuration file lists the following endpoints:

    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
  2. Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's integrations hub to ship your data.

Traces not generated

The instrumentation code may be incorrect even if the collector and endpoints are properly configured.

Suggested remedy

  1. Check if the instrumentation can output traces to a console exporter.
  2. Use a web-hook to check if the traces are going to the output.
  3. Check the metrics endpoint (http://<<COLLECTOR-HOST>>:8888/metrics) to see spans received and sent. Replace <<COLLECTOR-HOST>> with your collector's address.

If issues persist, refer to Logz.io's integrations hub and re-instrument the application.

Wrong exporter/protocol/endpoint

Incorrect exporter, protocol, or endpoint configuration.

The correct endpoints are:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"

Suggested remedy

  1. Activate debug logs in the collector's configuration:

    service:
      telemetry:
        logs:
          level: "debug"

Debug logs report the status code of the HTTP/HTTPS POST request: a successful request logs status code 200, and failure reasons are also logged.

If the request is not successful, check that the collector is configured with the correct exporter, protocol, and endpoint.

Collector failure

The collector may fail to generate traces despite sending debug logs.

Suggested remedy

  1. On Linux and macOS, view collector logs:

    journalctl | grep otelcol

    To only see errors:

    journalctl | grep otelcol | grep Error
  2. Otherwise, navigate to http://localhost:8888/metrics.

This endpoint exposes the collector's internal metrics, which record events within the collector: spans received, spans sent, and any errors.

Exporter failure

The exporter configuration may be incorrect, causing trace export issues.

Suggested remedy

If you are unable to export traces to a destination, this may be caused by the following:

  • There is a network configuration issue.
  • The exporter configuration is incorrect.
  • The destination is unavailable.

To investigate this issue:

  1. Make sure that the exporters and service: pipelines are configured correctly.
  2. Check the collector logs and zpages for potential issues.
  3. Check your network configuration, such as firewall, DNS, or proxy.

Metrics like the following can provide insights:

# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.

# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0

Receiver failure

The receiver may not be configured correctly.

Suggested remedy

If you are unable to receive data, this may be caused by the following:

  • There is a network configuration issue.
  • The receiver configuration is incorrect.
  • The receiver is defined in the receivers section, but not enabled in any pipelines.
  • The client configuration is incorrect.

Metrics for receivers can help diagnose issues:

# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.

# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34


# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.

# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0