NestJS OpenTelemetry
Deploy this integration to enable automatic instrumentation of your NestJS application using OpenTelemetry.
Manual configuration
This integration includes:
- Installing the OpenTelemetry NestJS instrumentation packages on your application host
- Installing the OpenTelemetry collector with Logz.io exporter
- Running your NestJS application in conjunction with the OpenTelemetry instrumentation
On deployment, the NestJS instrumentation automatically captures spans from your application and forwards them to the collector, which exports the data to your Logz.io account.
Setup auto-instrumentation for your locally hosted NestJS application and send traces to Logz.io
Before you begin, you'll need:
- A NestJS application without instrumentation
- An active Logz.io account
- Port 4317 available on your host system
- A name defined for your tracing service. You will need it to identify the traces in Logz.io.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Download instrumentation packages
Run the following command from the application directory:
npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/tracing
npm install --save @opentelemetry/exporter-collector
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file
In the directory of your application file, create a file named tracer.ts
with the following configuration:
"use strict";
const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/tracing");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");
const exporter = new OTLPTraceExporter({
url: "http://localhost:4318/v1/traces"
});
const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"YOUR-SERVICE-NAME", // add the name of your service
}),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: exporter,
instrumentations: [getNodeAutoInstrumentations()],
});
sdk
.start()
console.log("Tracing initialized");
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
Refer your application to the tracer file
Add the following line at the top of your application's entry file, so the tracer loads before the rest of your code:
require('<<PATH-TO-YOUR-FILE>>/tracer.ts');
Replace <<PATH-TO-YOUR-FILE>>
with the path to your tracer file.
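For a typical NestJS project, this means loading the tracer before the Nest application is created. The following is a minimal sketch of an entry file, assuming tracer.ts sits in your src directory and your root module is AppModule (adjust the paths and names to your project):

// src/main.ts -- illustrative sketch; adjust paths and module names to your project
import './tracer'; // load the tracer first so auto-instrumentation patches modules before they are used
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();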
Download and configure OpenTelemetry collector
Create a dedicated directory on the host of your NestJS application and download the OpenTelemetry collector that is relevant to the operating system of your host.
After downloading the collector, create a configuration file config.yaml
with the following parameters:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Start the collector
Run the following command from the directory of your application file:
<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
- Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
Run the application
Run the following command from the application directory to generate traces:
npm run start
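Each incoming HTTP request to your application produces spans through the auto-instrumentation. If you want a simple endpoint to hit for testing, a hypothetical controller like the following works (the PingController name and route are illustrative, not part of the integration):

// ping.controller.ts -- hypothetical test endpoint; any existing route works just as well
import { Controller, Get } from '@nestjs/common';

@Controller('ping')
export class PingController {
  @Get()
  ping(): string {
    return 'pong'; // requesting GET /ping generates HTTP spans via the auto-instrumentation
  }
}

Requesting the endpoint a few times (for example, curl http://localhost:3000/ping, assuming the default NestJS port) should produce spans in the console exporter output and in Logz.io.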
Check Logz.io for your traces
Give your traces some time to get from your system to ours, and then open Tracing.
Setup auto-instrumentation for your NestJS application using Docker and send traces to Logz.io
This integration enables you to auto-instrument your NestJS application and run a containerized OpenTelemetry collector to send your traces to Logz.io. If your application also runs in a Docker container, make sure that both the application and collector containers are on the same network.
Before you begin, you'll need:
- A NestJS application without instrumentation
- An active Logz.io account
- Port 4317 available on your host system
- A name defined for your tracing service. You will need it to identify the traces in Logz.io.
Download instrumentation packages
Run the following command from the application directory:
npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/tracing
npm install --save @opentelemetry/exporter-collector
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file
In the directory of your application file, create a file named tracer.ts
with the following configuration:
"use strict";
const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/tracing");
const { CollectorTraceExporter } = require("@opentelemetry/exporter-collector");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");
const exporter = new CollectorTraceExporter({});
const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"YOUR-SERVICE-NAME", // add the name of your service
}),
});
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: new opentelemetry.tracing.ConsoleSpanExporter(),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk
.start()
console.log("Tracing initialized");
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
Refer your application to the tracer file
Add the following line at the top of your application's entry file, so the tracer loads before the rest of your code:
require('<<PATH-TO-YOUR-FILE>>/tracer.ts');
Replace <<PATH-TO-YOUR-FILE>>
with the path to your tracer file.
Pull the Docker image for the OpenTelemetry collector
docker pull otel/opentelemetry-collector-contrib:0.111.0
Create a configuration file
Create a file config.yaml
with the following content:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
  logging:

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Tail Sampling
The tail_sampling processor defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.
Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
| Parameter | Description | Default |
|---|---|---|
| threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
| sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |
If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:
- Under the exporters list:

logzio/traces:
  account_token: <<TRACING-SHIPPING-TOKEN>>
  region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  headers:
    user-agent: logzio-opentelemetry-traces

- Under the service list:

extensions: [health_check, pprof, zpages]
pipelines:
  traces:
    receivers: [otlp]
    processors: [tail_sampling, batch]
    exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Here is an example configuration file:
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces

processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Run the container
Mount the config.yaml as a volume in the docker run command and run it as follows.
Linux
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.111.0
Replace <PATH-TO> with the path to the config.yaml file on your system.
Windows
docker run \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.111.0
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Run the application
When running the OTEL collector in a Docker container, your application should run in separate containers on the same host network. Ensure all containers share the same network. Using Docker Compose ensures that all containers, including the OTEL collector, share the same network configuration automatically.
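If your application runs in its own container, the exporter in tracer.ts can no longer reach the collector at localhost; point it at the collector container instead. A minimal sketch, assuming the collector container is reachable on the shared network under the hostname otel-collector (the hostname and environment variable below are assumptions, not part of the integration, and the URL path may differ depending on your exporter version):

// In tracer.ts: target the collector container rather than localhost
const { CollectorTraceExporter } = require("@opentelemetry/exporter-collector");

const exporter = new CollectorTraceExporter({
  // OTEL_COLLECTOR_URL is a hypothetical variable you would set on the app container
  url: process.env.OTEL_COLLECTOR_URL || "http://otel-collector:4318/v1/traces",
});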
Run the following command from the application directory to generate traces:
npm run start
Check Logz.io for your traces
Give your traces some time to get from your system to ours, and then open Tracing.
Configuration using Helm
You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. Helm is a tool for managing packages of preconfigured Kubernetes resources, called charts.
logzio-k8s-telemetry allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.
This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Logz.io Helm charts is logzio-helm.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Standard configuration
Deploy the Helm chart
Add the logzio-helm repo as follows:
helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Run the Helm deployment code
helm install \
--set secrets.LogzioRegion=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set secrets.TracesToken=<<TRACING-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set secrets.env_id=<<ENV_ID>> \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>> with the applicable region code for your Logz.io account. See the list of available regions.
Define the service DNS
You'll need the following service DNS:
http://<<CHART-NAME>>-otel-collector.<<NAMESPACE>>.svc.cluster.local:<<PORT>>/
Replace <<CHART-NAME>> with the relevant chart name you're using (logzio-k8s-telemetry or logzio-monitoring).
Replace <<NAMESPACE>>
with your Helm chart's deployment namespace (e.g., default or monitoring).
Replace <<PORT>>
with the port for your agent's protocol (Default is 4317).
If you're not sure what your cluster domain name is, you can run the following command to look it up:
kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.<<NAMESPACE>> | grep Name | sed "s/Name:\skubernetes.<<NAMESPACE>>//"'
It will deploy a small pod that extracts your cluster domain name from your Kubernetes environment. You can remove this pod after it has returned the cluster domain name.
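Once you have the service DNS, the exporter in your tracer file should target it instead of localhost. A minimal sketch, assuming the logzio-k8s-telemetry chart deployed to the monitoring namespace and the OTLP HTTP port 4318 (substitute your own chart name, namespace, port, and URL path as needed):

// In tracer.ts: point the exporter at the in-cluster collector service
const { CollectorTraceExporter } = require("@opentelemetry/exporter-collector");

const exporter = new CollectorTraceExporter({
  // chart name, namespace, and port below are examples -- use your own values
  url: "http://logzio-k8s-telemetry-otel-collector.monitoring.svc.cluster.local:4318/v1/traces",
});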
Download instrumentation packages
Run the following command from the application directory:
npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/tracing
npm install --save @opentelemetry/exporter-collector
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file
In the directory of your application file, create a file named tracer.ts
with the following configuration:
"use strict";
const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/tracing");
const { CollectorTraceExporter } = require("@opentelemetry/exporter-collector");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");
const exporter = new CollectorTraceExporter({});
const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"YOUR-SERVICE-NAME", // add the name of your service
}),
});
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: new opentelemetry.tracing.ConsoleSpanExporter(),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk
.start()
console.log("Tracing initialized");
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
Refer your application to the tracer file
Add the following line at the top of your application's entry file, so the tracer loads before the rest of your code:
require('<<PATH-TO-YOUR-FILE>>/tracer.ts');
Replace <<PATH-TO-YOUR-FILE>>
with the path to your tracer file.
Check Logz.io for your traces
Give your traces some time to get from your system to ours, then open Logz.io.
Customizing Helm chart parameters
Configure customization options
You can use the following options to update the Helm chart parameters:
- Specify parameters using the --set key=value[,key=value] argument to helm install.
- Edit the values.yaml file.
- Override default values with your own my_values.yaml and apply it in the helm install command.
If required, you can add the following optional parameters as environment variables:
| Parameter | Description |
|---|---|
| secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. Default 500. |
| secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default 10. |
Example
You can run the logzio-k8s-telemetry chart with a custom configuration file that takes precedence over the chart's values.yaml.
For example, with the following configuration the collector will sample ALL traces that contain at least one span with an error:
baseCollectorConfig:
  processors:
    tail_sampling:
      policies:
        [
          {
            name: error-in-policy,
            type: status_code,
            status_code: {status_codes: [ERROR]}
          },
          {
            name: slow-traces-policy,
            type: latency,
            latency: {threshold_ms: 400}
          },
          {
            name: health-traces,
            type: and,
            and: {
              and_sub_policy:
                [
                  {
                    name: ping-operation,
                    type: string_attribute,
                    string_attribute: { key: http.url, values: [ /health ] }
                  },
                  {
                    name: main-service,
                    type: string_attribute,
                    string_attribute: { key: service.name, values: [ main-service ] }
                  },
                  {
                    name: probability-policy-1,
                    type: probabilistic,
                    probabilistic: {sampling_percentage: 1}
                  }
                ]
            }
          },
          {
            name: probability-policy,
            type: probabilistic,
            probabilistic: {sampling_percentage: 20}
          }
        ]
helm install -f <PATH-TO>/my_values.yaml \
--set secrets.LogzioRegion=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set secrets.TracesToken=<<TRACING-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set secrets.env_id=<<ENV_ID>> \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
Replace <PATH-TO>
with the path to your custom values.yaml
file.
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Uninstalling the Chart
The uninstall command is used to remove all the Kubernetes components associated with the chart and to delete the release.
To uninstall the logzio-k8s-telemetry
deployment, use the following command:
helm uninstall logzio-k8s-telemetry
Troubleshooting
If traces are not being sent despite instrumentation, follow these steps:
Collector not installed
The OpenTelemetry collector may not be installed on your system.
Suggested remedy
Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts.
Collector path not configured
The collector may not have the correct endpoint configured for the receiver.
Suggested remedy
Verify the configuration file lists the following endpoints:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's integrations hub to ship your data.
Traces not generated
The instrumentation code may be incorrect even if the collector and endpoints are properly configured.
Suggested remedy
- Check if the instrumentation can output traces to a console exporter (see the sketch after this list).
- Use a webhook to check if the traces are going to the output.
- Check the metrics endpoint (http://<<COLLECTOR-HOST>>:8888/metrics) to see spans received and sent. Replace <<COLLECTOR-HOST>> with your collector's address.
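To verify the instrumentation pipeline itself, you can also create a span by hand and watch for it in the console exporter output and the collector logs. A minimal sketch using the OpenTelemetry API (the tracer and span names are arbitrary):

// sanity check -- run anywhere after tracer.ts has been loaded
const { trace } = require("@opentelemetry/api");

const tracer = trace.getTracer("sanity-check");
const span = tracer.startSpan("test-span");
span.setAttribute("check", "manual");
span.end(); // the span should appear in the ConsoleSpanExporter output and reach the collector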
If issues persist, refer to Logz.io's integrations hub and re-instrument the application.
Wrong exporter/protocol/endpoint
Incorrect exporter, protocol, or endpoint configuration.
The correct endpoints are:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"
Suggested remedy
- Activate debug logs in the collector's configuration:

service:
  telemetry:
    logs:
      level: "debug"
Debug logs indicate the status code of the HTTP/HTTPS POST request. A successful POST request will log status code 200; failure reasons will also be logged. If the POST request is not successful, check whether the collector is configured to use the correct exporter, protocol, and/or endpoint.
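If the exporter turns out to be the problem, make sure the exporter in your tracer file matches the protocol and port the collector exposes. A minimal sketch for the OTLP HTTP receiver (replace localhost with your collector's host if it runs elsewhere):

const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");

const exporter = new OTLPTraceExporter({
  url: "http://localhost:4318/v1/traces", // must match the collector's OTLP http receiver endpoint
});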
Collector failure
The collector may fail to generate traces despite sending debug logs.
Suggested remedy
On Linux and MacOS, view collector logs:
journalctl | grep otelcol
To only see errors:
journalctl | grep otelcol | grep Error
Otherwise, navigate to http://localhost:8888/metrics. This endpoint exposes the collector's metrics, where you can see events happening within the collector: spans received, spans sent, and errors.
Exporter failure
The exporter configuration may be incorrect, causing trace export issues.
Suggested remedy
If you are unable to export traces to a destination, this may be caused by the following:
- There is a network configuration issue.
- The exporter configuration is incorrect.
- The destination is unavailable.
To investigate this issue:
- Make sure that the exporters and service: pipelines are configured correctly.
- Check the collector logs and zpages for potential issues.
- Check your network configuration, such as firewall, DNS, or proxy.
Metrics like the following can provide insights:
# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0
Receiver failure
The receiver may not be configured correctly.
Suggested remedy
If you are unable to receive data, this may be caused by the following:
- There is a network configuration issue.
- The receiver configuration is incorrect.
- The receiver is defined in the receivers section, but not enabled in any pipelines.
- The client configuration is incorrect.
Metrics for receivers can help diagnose issues:
# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34
# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0