Node.js
Logs
- logzio-nodejs
- winston-logzio
- winston-logzio TypeScript
- OpenTelemetry JavaScript
- OpenTelemetry TypeScript
logzio-nodejs collects log messages in an array and sends them asynchronously once 100 messages have accumulated or 10 seconds have passed, whichever comes first. On a connection reset or timeout it retries after 2 seconds, doubling the interval on each of up to 3 attempts. Sending never blocks the logging of new messages. By default, errors are logged to the console, but you can customize this with a callback function.
Configure logzio-nodejs
Install the dependency:
npm install logzio-nodejs
Use the sample configuration and edit it according to your needs:
// Replace these parameters with your configuration
var logger = require('logzio-nodejs').createLogger({
token: '<<LOG-SHIPPING-TOKEN>>',
protocol: 'https',
host: '<<LISTENER-HOST>>',
port: '8071',
type: 'YourLogType'
});
Parameters
Parameter | Description | Required/Default |
---|---|---|
token | Your Logz.io log shipping token securely directs the data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
protocol | http or https . The value of this parameter affects the default of the port parameter. | http |
host | Use the listener URL specific to the region where your Logz.io account is hosted. Click to look up your listener URL. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | listener.logz.io |
port | Destination port. The default port depends on the protocol parameter: 8070 (for HTTP) or 8071 (for HTTPS) | 8070 / 8071 |
type | Declare your log type for parsing purposes. Logz.io applies default parsing pipelines to the following list of built-in log types. If you declare another type, contact support for assistance with custom parsing. Can't contain spaces. | nodejs |
sendIntervalMs | Time to wait between retry attempts, in milliseconds. | 2000 (2 seconds) |
bufferSize | Maximum number of messages the logger accumulates before sending them all as a bulk. | 100 |
numberOfRetries | Maximum number of retry attempts. | 3 |
debug | Set to true to print debug messages to the console. | false |
callback | A callback function to call when the logger encounters an unrecoverable error. The function API is function(err) , where err is the Error object. | -- |
timeout | Read/write/connection timeout, in milliseconds. | -- |
extraFields | JSON format. Adds your custom fields to each log. Format: extraFields : { field_1: "val_1", field_2: "val_2" , ... } | -- |
setUserAgent | Set to false to send logs without the user-agent field in the request header. | true |
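For instance, a configuration using several of the optional parameters above might look like the following sketch (the field values and custom fields are illustrative):
// Illustrative configuration combining optional parameters from the table above
var logger = require('logzio-nodejs').createLogger({
  token: '<<LOG-SHIPPING-TOKEN>>',
  protocol: 'https',
  host: '<<LISTENER-HOST>>',
  port: '8071',
  type: 'YourLogType',
  sendIntervalMs: 5000,                        // wait 5 seconds between retry attempts
  bufferSize: 200,                             // send a bulk every 200 messages
  extraFields: { environment: 'production' },  // added to every log line
  callback: function (err) {                   // called on unrecoverable errors
    console.error('logzio-nodejs error:', err);
  }
});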
Code example:
You can send log lines as a raw string or an object. For consistent and reliable parsing, we recommend sending them as objects:
var obj = {
message: 'Some log message',
param1: 'val1',
param2: 'val2',
tags : ['tag1']
};
logger.log(obj);
To send a raw string:
logger.log('This is a log message');
For serverless environments, such as AWS Lambda, Azure Functions, or Google Cloud Functions, include this line at the end of the run:
logger.sendAndClose();
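For example, a hedged sketch of an AWS Lambda handler (the handler logic, log type, and fields are illustrative, not part of the library):
// Illustrative AWS Lambda handler using logzio-nodejs
const logzio = require('logzio-nodejs').createLogger({
  token: '<<LOG-SHIPPING-TOKEN>>',
  protocol: 'https',
  host: '<<LISTENER-HOST>>',
  port: '8071',
  type: 'lambda-logs'
});

exports.handler = async (event, context) => {
  logzio.log({ message: 'Lambda invoked', requestId: context.awsRequestId });
  // ... your function logic ...
  logzio.sendAndClose(); // flush buffered logs before the runtime freezes
  return { statusCode: 200 };
};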
This winston plugin is a wrapper for the logzio-nodejs appender, allowing you to use the Logz.io shipper with the winston logger framework in your Node.js app.
Configure winston-logzio
Before you begin, you'll need: Winston 3 (for Winston 2, see version v1.0.8). If you're using TypeScript, follow the procedure to set up winston with TypeScript.
Install the dependency:
npm install winston-logzio --save
Use the sample configuration and edit it according to your needs:
const winston = require('winston');
const LogzioWinstonTransport = require('winston-logzio');
const logzioWinstonTransport = new LogzioWinstonTransport({
level: 'info',
name: 'winston_logzio',
token: '<<LOG-SHIPPING-TOKEN>>',
host: '<<LISTENER-HOST>>',
});
const logger = winston.createLogger({
format: winston.format.simple(),
transports: [logzioWinstonTransport],
});
logger.log('warn', 'Just a test message');
For serverless environments, such as AWS Lambda, Azure Functions, or Google Cloud Functions, await your log calls (for example, await logger.info("API Called")) and include this line at the end of the run:
logger.close();
Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:
- Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.
- Replace <<LISTENER-HOST>> with the host for your region.
Parameters
For a complete list of your options, see the configuration parameters below.👇
Parameter | Description | Required/Default |
---|---|---|
LogzioWinstonTransport | Determines the settings passed to the logzio-nodejs logger. Configure any parameters you want to send to winston when initializing the transport. | -- |
token | Your Logz.io log shipping token securely directs data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
protocol | http or https , affecting the default port parameter. | http |
host | Use the listener URL specific to the region where your Logz.io account is hosted. Click to look up your listener URL. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | listener.logz.io |
port | Destination port based on the protocol parameter: 8070 (for HTTP) or 8071 (for HTTPS) | 8070 / 8071 |
type | Declare your log type for parsing purposes. Logz.io applies default parsing pipelines to the following list of built-in log types. If you declare another type, contact support for assistance with custom parsing. Can't contain spaces. | nodejs |
sendIntervalMs | Time to wait between retry attempts, in milliseconds. | 2000 (2 seconds) |
bufferSize | Maximum number of messages the logger will accumulate before sending them all as a bulk. | 100 |
numberOfRetries | Maximum number of retry attempts. | 3 |
debug | Set to true to print debug messages to the console. | false |
callback | Callback function for unrecoverable errors. The function API is function(err) , where err is the Error object. | -- |
timeout | Read/write/connection timeout, in milliseconds. | -- |
extraFields | Adds custom fields to each log in JSON format: extraFields : { field_1: "val_1", field_2: "val_2" , ... } | -- |
setUserAgent | Set to false to send logs without the user-agent field in the request header. Also set to false when sending data from a Firefox browser. | true |
Additional configuration options
By default, the winston logger sends all logs to the console. Disable this by adding the following line to your code:
winston.remove(winston.transports.Console);
Log the last UncaughtException before Node exits:
var winston = require('winston');
var winstonLogzIO = require('winston-logzio');
var logzIOTransport = new winstonLogzIO(loggerOptions); // loggerOptions as defined in the HTTPS example below
var logger = winston.createLogger({
  transports: [
    logzIOTransport
  ],
  exceptionHandlers: [
    logzIOTransport
  ],
  exitOnError: true // set this to true
});
process.on('uncaughtException', function (err) {
  logger.error("UncaughtException processing: %s", err);
  logzIOTransport.flush(function (callback) {
    process.exit(1);
  });
});
Configure HTTPS Shipping
var winston = require('winston');
var logzioWinstonTransport = require('winston-logzio');
// Replace these parameters with your configuration
var loggerOptions = {
token: '<<LOG-SHIPPING-TOKEN>>',
protocol: 'https',
host: '<<LISTENER-HOST>>',
port: '8071',
type: 'YourLogType'
};
winston.add(new logzioWinstonTransport(loggerOptions));
Add custom tags to winston-logzio
var obj = {
message: 'Your log message',
tags : ['tag1']
};
logger.log(obj);
Configure winston-logzio with TypeScript
This winston plugin is a TypeScript-compatible wrapper for the logzio-nodejs appender, integrating the Logz.io shipper with your Node.js application. With winston-logzio, you can take advantage of the winston logger framework.
Before you begin, you'll need: Winston 3 (for Winston 2, see version v1.0.8).
Install the dependency:
npm install winston-logzio --save
Configure winston-logzio with TypeScript. If you don't have a tsconfig.json file, start by adding one. Run the following command:
tsc --init
In your tsconfig.json file, under compilerOptions, ensure the esModuleInterop flag is set to true, or add it:
"compilerOptions": {
  ...
  "esModuleInterop": true
}
Use the sample configuration and edit it according to your needs:
import winston from 'winston';
import LogzioWinstonTransport from 'winston-logzio';
const logzioWinstonTransport = new LogzioWinstonTransport({
level: 'info',
name: 'winston_logzio',
token: '<<LOG-SHIPPING-TOKEN>>',
host: '<<LISTENER-HOST>>',
});
const logger = winston.createLogger({
format: winston.format.simple(),
transports: [logzioWinstonTransport],
});
var obj = {
message: 'Your log message',
tags : ['tag1']
};
logger.log(obj);
logger.log('warn', 'Just a test message');
Parameters
For a complete list of your options, see the configuration parameters below.👇
Parameter | Description | Required/Default |
---|---|---|
LogzioWinstonTransport | Determines the settings passed to the logzio-nodejs logger. Configure any parameters you want to send to winston when initializing the transport. | -- |
token | Your Logz.io log shipping token securely directs data to your Logz.io account. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
protocol | http or https , affecting the default port parameter. | http |
host | Use the listener URL specific to the region where your Logz.io account is hosted. Click to look up your listener URL. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | listener.logz.io |
port | Destination port based on the protocol parameter: 8070 (for HTTP) or 8071 (for HTTPS) | 8070 / 8071 |
type | Declare your log type for parsing purposes. Logz.io applies default parsing pipelines to the following list of built-in log types. If you declare another type, contact support for assistance with custom parsing. Can't contain spaces. | nodejs |
sendIntervalMs | Time to wait between retry attempts, in milliseconds. | 2000 (2 seconds) |
bufferSize | Maximum number of messages the logger will accumulate before sending them all as a bulk. | 100 |
numberOfRetries | Maximum number of retry attempts. | 3 |
debug | Set to true to print debug messages to the console. | false |
callback | Callback function for unrecoverable errors. The function API is function(err) , where err is the Error object. | -- |
timeout | Read/write/connection timeout, in milliseconds. | -- |
extraFields | Adds custom fields to each log in JSON format: extraFields : { field_1: "val_1", field_2: "val_2" , ... } | -- |
setUserAgent | Set to false to send logs without the user-agent field in the request header. Also set to false when sending data from a Firefox browser. | true |
If you are using winston-logzio in a serverless service (e.g., AWS Lambda, Azure Functions, Google Cloud Functions), add these lines at the end of each run to ensure proper logging:
await logger.info("API Called")
logger.close()
Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:
- Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.
- Replace <<LISTENER-HOST>> with the host for your region.
Troubleshooting
To resolve errors related to the esModuleInterop flag, make sure you compile with that flag enabled, either by passing it on the command line or by pointing tsc at your tsconfig.json. Use one of the following commands:
tsc <file-name>.ts --esModuleInterop
or
tsc --project tsconfig.json
This integration uses the OpenTelemetry logging exporter to send logs to Logz.io via the OpenTelemetry Protocol (OTLP) listener.
Prerequisites
- Node
- A Node application
- An active account with Logz.io
If you need an example application to test this integration, please refer to our Node.js OpenTelemetry repository.
Configure the instrumentation
Install the dependencies:
npm install --save @opentelemetry/api-logs
npm install --save @opentelemetry/sdk-logs
npm install --save @opentelemetry/exporter-logs-otlp-proto
npm install --save @opentelemetry/resources
Configure the OpenTelemetry Collector. Create a logger file (e.g., logger.js) in your project with the following configuration:
const { LoggerProvider, SimpleLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-proto');
const { Resource } = require('@opentelemetry/resources');
const resource = new Resource({'service.name': 'YOUR-SERVICE-NAME'});
const loggerProvider = new LoggerProvider({ resource });
const otlpExporter = new OTLPLogExporter({
url: 'https://otlp-listener.logz.io/v1/logs',
headers: {
Authorization: 'Bearer <LOG-SHIPPING-TOKEN>',
'user-agent': 'logzio-javascript-logs-otlp'
}
});
loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(otlpExporter));
const logger = loggerProvider.getLogger('example_logger');
module.exports.logger = logger;
Replace YOUR-SERVICE-NAME with the required service name.
Your Logz.io log shipping token directs the data securely to your Logz.io Log Management account. The default token is auto-populated in the examples when you're logged into the Logz.io app as an Admin. Manage your tokens.
Update the `otlp-listener.logz.io` part of `https://otlp-listener.logz.io/v1/logs` with the URL for [your hosting region](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region).
:::note
If your Logz.io account region is not us-east-1, add your [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions) to the URL, like so: https://otlp-listener-<<REGION-CODE>>.logz.io/v1/logs.
:::
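Before running, you can emit log records from your application code through the logger exported by logger.js. A minimal sketch (the file name and message are illustrative):
// app.js — illustrative usage of the logger exported from logger.js
const { logger } = require('./logger');

logger.emit({
  body: 'Received a request at the root endpoint',
  attributes: { severityText: 'info' },
});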
- Run the application.
Check Logz.io for your logs
Allow some time for data ingestion, then open OpenSearch Dashboards.
Encounter an issue? See our log shipping troubleshooting guide.
This integration uses the OpenTelemetry logging exporter to send logs to Logz.io via the OpenTelemetry Protocol (OTLP) listener.
Prerequisites
- Node
- A Node application
- An active account with Logz.io
Configure the instrumentation
Install the dependencies:
npm install --save @opentelemetry/api-logs
npm install --save @opentelemetry/sdk-logs
npm install --save @opentelemetry/exporter-logs-otlp-proto
npm install --save @opentelemetry/resources
Configure the OpenTelemetry Collector. Create a logger.ts file in your project (or add it to your existing configuration) with the following configuration:
import { LoggerProvider, SimpleLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-proto';
import { Resource } from '@opentelemetry/resources';
const resource = new Resource({ 'service.name': 'YOUR-SERVICE-NAME' });
const loggerProvider = new LoggerProvider({ resource });
const otlpExporter = new OTLPLogExporter({
url: 'https://otlp-listener.logz.io/v1/logs',
headers: {
Authorization: 'Bearer <LOG-SHIPPING-TOKEN>',
'user-agent': 'logzio-typescript-logs-otlp'
}
});
loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(otlpExporter));
const logger = loggerProvider.getLogger('example_logger');
export { logger };
Replace YOUR-SERVICE-NAME
with the required service name.
Your Logz.io log shipping token directs the data securely to your Logz.io Log Management account. The default token is auto-populated in the examples when you're logged into the Logz.io app as an Admin. Manage your tokens.
Update the `otlp-listener.logz.io` part of `https://otlp-listener.logz.io/v1/logs` with the URL for [your hosting region](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region).
:::note
If your Logz.io account region is not us-east-1, add your [region code](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions) to the URL, like so: https://otlp-listener-<<REGION-CODE>>.logz.io/v1/logs.
:::
Once you have configured the logger, import it into your application (e.g., server.ts) and start logging.
Example of logging from your server.ts file:
import express, { Request, Response } from 'express';
import { logger } from './logger/logger'; // Assuming logger.ts is in the logger folder
const app = express();
const PORT = process.env.PORT || 8080;
app.get('/', (req: Request, res: Response) => {
logger.emit({
body: 'Received a request at the root endpoint',
attributes: { severityText: 'info' },
});
res.send('Hello, Logz.io!');
});
app.listen(PORT, () => {
logger.emit({
body: `Server is running on http://localhost:${PORT}`,
attributes: { severityText: 'info' },
});
});
Check Logz.io for your logs
Allow some time for data ingestion, then open OpenSearch Dashboards.
Encounter an issue? See our log shipping troubleshooting guide.
Metrics
These examples use the OpenTelemetry JS SDK and are based on the OpenTelemetry exporter collector proto.
Before you begin, you'll need:
Node 14 or higher.
We recommend using this integration with the Logz.io Metrics backend, though it is compatible with any backend that supports the prometheusremotewrite
format.
Install the SDK package
npm install logzio-nodejs-metrics-sdk@0.5.0
npm install @opentelemetry/sdk-metrics@1.26.0
Initialize the exporter and meter provider
const { MeterProvider, PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const sdk = require('logzio-nodejs-metrics-sdk');
const collectorOptions = {
url: 'https://<<LISTENER-HOST>>:8053',
headers: {
"Authorization":"Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>"
}
};
// Initialize the exporter
const metricExporter = new sdk.RemoteWriteExporter(collectorOptions);
// Initialize the meter provider
const meter = new MeterProvider({
readers: [
new PeriodicExportingMetricReader(
{
exporter: metricExporter,
exportIntervalMillis: 1000
})
],
}).getMeter('example-exporter');
Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:
- Replace <<LISTENER-HOST>> with the Logz.io listener URL for your region, configured to use port 8052 for HTTP traffic or port 8053 for HTTPS traffic.
- Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to. Look up your Metrics token.
Add required metrics to the code
You can use the following metrics:
Name | Behavior |
---|---|
Counter | Metric value can only increase or reset to 0, calculated per counter.Add(context,value,labels) request. |
UpDownCounter | Metric value can arbitrarily increment or decrement, calculated per updowncounter.Add(context,value,labels) request. |
Histogram | Metric values are captured by the histogram.Record(context,value,labels) function and calculated per request. |
For details on these metrics, refer to the OpenTelemetry documentation.
Insert the following code after initialization to add a metric:
Counter
const requestCounter = meter.createCounter('Counter', {
description: 'Example of a Counter',
});
// Define some labels for your metrics
const labels = { environment: 'prod' };
// Record some value
requestCounter.add(1,labels);
// In logzio Metrics you will see the following metric:
// Counter_total{environment: 'prod'} 1.0
UpDownCounter
const upDownCounter = meter.createUpDownCounter('UpDownCounter', {
description: 'Example of an UpDownCounter',
});
// Define some labels for your metrics
const labels = { environment: 'prod' };
// Record some values
upDownCounter.add(5,labels);
upDownCounter.add(-1,labels);
// In logzio you will see the following metric:
// UpDownCounter{environment: 'prod'} 4.0
Histogram
const histogram = meter.createHistogram('test_histogram', {
description: 'Example of a histogram',
});
// Define some labels for your metrics
const labels = { environment: 'prod' };
// Record some values
histogram.record(30,labels);
histogram.record(20,labels);
// In logzio you will see the following metrics:
// test_histogram_sum{environment: 'prod'} 50.0
// test_histogram_count{environment: 'prod'} 2.0
// test_histogram_avg{environment: 'prod'} 25.0
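Note that metrics are exported on the interval configured above, so a short-lived script may exit before anything is sent. One way to handle this, sketched under the assumption that you keep a reference to the MeterProvider (and reusing collectorOptions from above), is to call shutdown() to flush pending data points before exiting:
// Keep a reference to the provider so pending data points can be flushed on exit
const meterProvider = new MeterProvider({
  readers: [
    new PeriodicExportingMetricReader({
      exporter: new sdk.RemoteWriteExporter(collectorOptions),
      exportIntervalMillis: 1000,
    }),
  ],
});
const meter = meterProvider.getMeter('example-exporter');

// ... record counters, up/down counters, and histograms as shown above ...

// Flush and stop exporting before the process exits (e.g., at the end of a batch job)
meterProvider.shutdown().then(() => process.exit(0));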
View your metrics
Run your application to start sending metrics to Logz.io.
Allow some time for data ingestion, then check your Metrics dashboard.
Install the pre-built dashboard for enhanced observability.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
Traces
Auto-instrument Node.js and send Traces to Logz.io
- OpenTelemetry Collector
- Docker
- Helm
- ECS
Before you begin, you'll need:
- A Node.js application without instrumentation.
- An active Logz.io account.
- Port 4318 available on your host system.
- A name for your tracing service to identify traces in Logz.io.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Download instrumentation packages
npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/sdk-trace-base
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file
In your application's directory, create a file named tracer.js
with the following configuration.
Replace <<YOUR-SERVICE-NAME>>
with your service name.
"use strict";
const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/sdk-trace-base");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");
const exporter = new OTLPTraceExporter({
url: "http://localhost:4318/v1/traces"
});
const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"<<YOUR-SERVICE-NAME>>",
}),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: exporter,
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
console.log("Tracing initialized");
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
Download and configure the OpenTelemetry collector
Create a directory on your Node.js host, download the appropriate OpenTelemetry collector for your OS, and create a config.yaml
file with the following parameters:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Start the collector
<path/to>/otelcontribcol_<VERSION-NAME> --config ./config.yaml
- Replace <path/to> with the path to the directory where you downloaded the collector.
- Replace <VERSION-NAME> with the version name of the collector applicable to your system, e.g. otelcontribcol_darwin_amd64.
Run the application
Run this command to generate traces:
node --require './tracer.js' <YOUR-APPLICATION-FILE-NAME>.js
View your traces
Give your traces some time to ingest, and then open your Tracing account.
This integration auto-instruments your Node.js app and runs a containerized OpenTelemetry collector to send traces to Logz.io. Ensure both application and collector containers are on the same network.
Before you begin, you'll need:
- A Node.js application without instrumentation.
- An active Logz.io account.
- Port 4317 available on your host system.
- A name for your tracing service to identify traces in Logz.io.
Download instrumentation packages
npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/sdk-trace-base
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file
In your application's directory, create a file named tracer.js
with the following configuration.
Replace <<YOUR-SERVICE-NAME>>
with your service name.
"use strict";
const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/sdk-trace-base");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");
const exporter = new OTLPTraceExporter({
url: "http://localhost:4318/v1/traces"
});
const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"<<YOUR-SERVICE-NAME>>",
}),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: exporter,
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
console.log("Tracing initialized");
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
Pull the Docker image for the OpenTelemetry collector
docker pull otel/opentelemetry-collector-contrib:0.111.0
Create a configuration file
Create a file config.yaml
with the following content:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
  logging:
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logging, logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Tail Sampling
tail_sampling
defines which traces to sample after all spans in a request are completed. By default, it collects all traces with an error span, traces slower than 1000 ms, and 10% of all other traces.
Additional policy configurations can be added to the processor. For more details, refer to the OpenTelemetry Documentation.
The configurable parameters in the Logz.io default configuration are:
Parameter | Description | Default |
---|---|---|
threshold_ms | Threshold for the span latency - traces slower than this value will be included. | 1000 |
sampling_percentage | Percentage of traces to sample using the probabilistic policy. | 10 |
If you already have an OpenTelemetry installation, add the following parameters to the configuration file of your existing OpenTelemetry collector:
- Under the exporters list:
logzio/traces:
  account_token: <<TRACING-SHIPPING-TOKEN>>
  region: <<LOGZIO_ACCOUNT_REGION_CODE>>
  headers:
    user-agent: logzio-opentelemetry-traces
- Under the service list:
extensions: [health_check, pprof, zpages]
pipelines:
  traces:
    receivers: [otlp]
    processors: [tail_sampling, batch]
    exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Here is an example configuration file:
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Run the container
Mount the config.yaml file as a volume in the docker run command and run it as follows.
Linux
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.111.0
Replace <PATH-TO> with the path to the config.yaml file on your system.
Windows
docker run \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
-p 55678-55680:55678-55680 \
-p 1777:1777 \
-p 9411:9411 \
-p 9943:9943 \
-p 6831:6831 \
-p 6832:6832 \
-p 14250:14250 \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.111.0
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Run the application
When running the OTEL collector in a Docker container, your application should run in separate containers on the same host network. Ensure all containers share the same network. Using Docker Compose ensures that all containers, including the OTEL collector, share the same network configuration automatically.
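For example, a minimal docker-compose.yaml along the following lines would place both containers on one network (the image names, ports, and paths are illustrative assumptions, not part of this integration):
# Illustrative docker-compose.yaml; adjust image names and paths to your setup
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.111.0
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
  app:
    image: your-nodejs-app:latest
    depends_on:
      - otel-collector
    # With a Compose network, point the exporter URL in tracer.js at
    # http://otel-collector:4318/v1/traces instead of localhost.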
Run the application to generate traces:
node --require './tracer.js' <YOUR-APPLICATION-FILE-NAME>.js
Check Logz.io for your traces
Give your traces some time to ingest, and then open your Tracing account.
Configuration using Helm
You can use a Helm chart to ship traces to Logz.io via the OpenTelemetry collector. Helm is a tool for managing packages of preconfigured Kubernetes resources using charts.
logzio-k8s-telemetry allows you to ship traces from your Kubernetes cluster to Logz.io with the OpenTelemetry collector.
This chart is a fork of the opentelemetry-collector Helm chart. The main repository for Logz.io Helm charts is logzio-helm.
This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Collector Core.
Deploy the Helm chart
Add logzio-helm
repo as follows:
helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
Run the Helm deployment code
helm install \
--set secrets.LogzioRegion=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set secrets.TracesToken=<<TRACING-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set secrets.env_id=<<ENV_ID>> \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
<<LOGZIO_ACCOUNT_REGION_CODE>> - Your Logz.io account region code. Available regions.
Define the logzio-k8s-telemetry DNS name
In most cases, the service name will be logzio-k8s-telemetry.default.svc.cluster.local, where default is the namespace where you deployed the Helm chart and svc.cluster.local is your cluster domain name.
To find your cluster domain name, run the following command:
kubectl run -it --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never shell -- \
sh -c 'nslookup kubernetes.default | grep Name | sed "s/Name:\skubernetes.default//"'
This command deploys a temporary pod to extract your cluster domain name. You can remove the pod after retrieving the domain name.
Download instrumentation packages
npm install --save @opentelemetry/api
npm install --save @opentelemetry/instrumentation
npm install --save @opentelemetry/sdk-trace-base
npm install --save @opentelemetry/exporter-trace-otlp-http
npm install --save @opentelemetry/resources
npm install --save @opentelemetry/semantic-conventions
npm install --save @opentelemetry/auto-instrumentations-node
npm install --save @opentelemetry/sdk-node
Create a tracer file
In your application's directory, create a file named tracer.js
with the following configuration.
Replace <<YOUR-SERVICE-NAME>>
with your service name.
"use strict";
const {
BasicTracerProvider,
ConsoleSpanExporter,
SimpleSpanProcessor,
} = require("@opentelemetry/sdk-trace-base");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const { Resource } = require("@opentelemetry/resources");
const {
SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const opentelemetry = require("@opentelemetry/sdk-node");
const {
getNodeAutoInstrumentations,
} = require("@opentelemetry/auto-instrumentations-node");
const exporter = new OTLPTraceExporter({
url: "http://localhost:4318/v1/traces"
});
const provider = new BasicTracerProvider({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]:
"<<YOUR-SERVICE-NAME>>",
}),
});
// export spans to console (useful for debugging)
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
// export spans to opentelemetry collector
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
provider.register();
const sdk = new opentelemetry.NodeSDK({
traceExporter: exporter,
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
console.log("Tracing initialized");
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
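If your instrumented application also runs inside the cluster, you would typically point the exporter in tracer.js at the collector service DNS name from the previous step rather than localhost. A minimal sketch, assuming the chart was deployed to the default namespace:
// Assumption: chart deployed to the "default" namespace; adjust the DNS name to your deployment
const exporter = new OTLPTraceExporter({
  url: "http://logzio-k8s-telemetry.default.svc.cluster.local:4318/v1/traces",
});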
Check Logz.io for your traces
Give your traces some time to ingest, and then open your Tracing account.
Customizing Helm chart parameters
To customize the Helm chart parameters, you have the following options:
- Specify parameters using the --set key=value[,key=value] argument to helm install.
- Edit the values.yaml file.
- Override default values with your own my_values.yaml and apply it in the helm install command.
You can add the following optional parameters as environment variables if needed:
Parameter | Description |
---|---|
secrets.SamplingLatency | Threshold for the span latency - all traces slower than the threshold value will be filtered in. Default 500. |
secrets.SamplingProbability | Sampling percentage for the probabilistic policy. Default 10. |
Code example:
You can run the logzio-k8s-telemetry chart with your custom configuration file, which will override the default values.yaml
settings.
For example, with the following configuration the collector will sample ALL traces that contain at least one error span:
baseCollectorConfig:
  processors:
    tail_sampling:
      policies:
        [
          {
            name: error-in-policy,
            type: status_code,
            status_code: {status_codes: [ERROR]}
          },
          {
            name: slow-traces-policy,
            type: latency,
            latency: {threshold_ms: 400}
          },
          {
            name: health-traces,
            type: and,
            and: {
              and_sub_policy:
                [
                  {
                    name: ping-operation,
                    type: string_attribute,
                    string_attribute: { key: http.url, values: [ /health ] }
                  },
                  {
                    name: main-service,
                    type: string_attribute,
                    string_attribute: { key: service.name, values: [ main-service ] }
                  },
                  {
                    name: probability-policy-1,
                    type: probabilistic,
                    probabilistic: {sampling_percentage: 1}
                  }
                ]
            }
          },
          {
            name: probability-policy,
            type: probabilistic,
            probabilistic: {sampling_percentage: 20}
          }
        ]
helm install -f <PATH-TO>/my_values.yaml \
--set secrets.LogzioRegion=<<LOGZIO_ACCOUNT_REGION_CODE>> \
--set secrets.TracesToken=<<TRACING-SHIPPING-TOKEN>> \
--set traces.enabled=true \
--set secrets.env_id=<<ENV_ID>> \
logzio-k8s-telemetry logzio-helm/logzio-k8s-telemetry
Replace <PATH-TO>
with the path to your custom values.yaml
file.
Replace <<TRACING-SHIPPING-TOKEN>>
with the token of the account you want to ship to.
Replace <<LOGZIO_ACCOUNT_REGION_CODE>>
with the applicable region code.
Uninstalling the Chart
To remove all Kubernetes components associated with the chart and delete the release, use the uninstall command.
To uninstall the logzio-k8s-telemetry
deployment, run:
helm uninstall logzio-k8s-telemetry
This guide provides an overview of deploying your Node.js application on Amazon ECS, using OpenTelemetry to collect and send tracing data to Logz.io. It offers a step-by-step process for setting up OpenTelemetry instrumentation and deploying both the application and OpenTelemetry Collector sidecar in an ECS environment.
Prerequisites
Before you begin, ensure you have the following prerequisites in place:
- AWS CLI configured with access to your AWS account.
- Docker installed for building images.
- AWS IAM role with sufficient permissions to create and manage ECS resources.
- Amazon ECR repository for storing the Docker images.
- Node.js and npm installed locally for development and testing.
For a complete example, refer to this repo.
Architecture Overview
The deployment will involve two main components:
Node.js Application Container: A container running your Node.js application, instrumented with OpenTelemetry to capture traces.
OpenTelemetry Collector Sidecar: A sidecar container that receives telemetry data from the application, processes it, and exports it to Logz.io.
project-root/
├── nodejs-app/ # Your Node.js application directory
│ ├── app.js # Node.js application entry point
│ ├── tracing.js # OpenTelemetry tracing setup
│ ├── Dockerfile # Dockerfile to build Node.js application image
│ └── package.json # Node.js dependencies, includes OpenTelemetry
├── ecs/
│ └── task-definition.json # ECS task definition file
└── otel-collector
├── collector-config.yaml # OpenTelemetry Collector configuration
└── Dockerfile # Dockerfile for the Collector
Steps to Deploy the Application
Project Structure Setup
Ensure your project structure follows the architecture outline. You should have a directory for your Node.js application and a separate directory for the OpenTelemetry Collector.
Set Up OpenTelemetry Instrumentation
Add OpenTelemetry instrumentation to your Node.js application by including a tracing setup file, tracing.js. This file will initialize OpenTelemetry and configure the trace exporter to send traces to the OpenTelemetry Collector.
tracing.js
"use strict";
const { NodeSDK } = require("@opentelemetry/sdk-node");
const {
OTLPTraceExporter,
} = require("@opentelemetry/exporter-trace-otlp-grpc");
const { diag, DiagConsoleLogger, DiagLogLevel } = require("@opentelemetry/api");
const { Resource } = require("@opentelemetry/resources");
// Optional: Enable diagnostic logging for debugging
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);
function startTracing() {
const traceExporter = new OTLPTraceExporter({
url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "grpc://localhost:4317",
});
const sdk = new NodeSDK({
traceExporter,
resource: new Resource({
"service.name": "nodejs-app",
}),
});
sdk
.start()
.then(() => console.log("Tracing initialized"))
.catch((error) => console.log("Error initializing tracing", error));
// Optional: Gracefully shutdown tracing on process exit
process.on("SIGTERM", () => {
sdk
.shutdown()
.then(() => console.log("Tracing terminated"))
.catch((error) => console.log("Error terminating tracing", error))
.finally(() => process.exit(0));
});
}
module.exports = { startTracing };
The tracing.js
file initializes OpenTelemetry tracing, configuring the OTLP exporter to send trace data to the OpenTelemetry Collector.
Include the tracing setup at the entry point of your application (app.js), ensuring tracing starts before any other logic.
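For instance, a minimal app.js entry point might look like the sketch below (the Express server and route are illustrative assumptions; only the startTracing() call at the top is required by this setup):
// app.js — illustrative entry point; start tracing before loading the rest of the app
"use strict";
const { startTracing } = require("./tracing");
startTracing();

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello from the instrumented Node.js app!");
});

app.listen(3000, () => console.log("Listening on port 3000"));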
Dockerfile
# Use Node.js LTS version
FROM node:16-alpine
# Set the working directory
WORKDIR /app
# Copy and install dependencies
COPY package*.json ./
RUN npm install --production
# Copy the application code
COPY . .
# Expose the application port
EXPOSE 3000
# Set environment variables for OpenTelemetry
ENV OTEL_TRACES_SAMPLER=always_on
ENV OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
ENV OTEL_RESOURCE_ATTRIBUTES="service.name=nodejs-app"
# Start the application
CMD ["npm", "start"]
The Dockerfile installs the necessary dependencies, sets the environment variables required for OpenTelemetry configuration, and starts the application.
Configure the OpenTelemetry Collector
The OpenTelemetry Collector receives traces from the application and exports them to Logz.io. Create a collector-config.yaml
file to define how the Collector should handle traces.
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    region: "<<LOGZIO_ACCOUNT_REGION_CODE>>"
    headers:
      user-agent: logzio-opentelemetry-traces
processors:
  batch:
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling, batch]
      exporters: [logzio/traces]
  telemetry:
    logs:
      level: info
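The otel-collector/Dockerfile referenced in the project structure isn't shown in this guide; a minimal sketch that bakes the configuration into the contrib image could look like this (an assumption, adjust to your setup). If you want the Collector to pick up the LOGZIO_TRACING_TOKEN and LOGZIO_REGION environment variables passed by the task definition later in this guide, you could reference them in collector-config.yaml as ${env:LOGZIO_TRACING_TOKEN} and ${env:LOGZIO_REGION} instead of hardcoding the values.
# Illustrative Dockerfile for the Collector sidecar
FROM otel/opentelemetry-collector-contrib:0.111.0
# Copy the configuration into the path referenced by the ECS task definition's command
COPY collector-config.yaml /etc/collector-config.yaml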
Build Docker Images
Build Docker images for both the Node.js application and the OpenTelemetry Collector:
# Build Node.js application image
cd nodejs-app/
docker build --platform linux/amd64 -t your-nodejs-app:latest .
# Build OpenTelemetry Collector image
cd ../otel-collector/
docker build --platform linux/amd64 -t otel-collector:latest .
Push Docker Images to Amazon ECR
Push both images to your Amazon ECR repository:
# Authenticate Docker to Amazon ECR
aws ecr get-login-password --region <aws-region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
# Tag and push images
docker tag your-nodejs-app:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/your-nodejs-app:latest
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/your-nodejs-app:latest
docker tag otel-collector:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/otel-collector:latest
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/otel-collector:latest
Log Group Creation:
Create log groups for your Node.js application and OpenTelemetry Collector in CloudWatch.
aws logs create-log-group --log-group-name /ecs/your-nodejs-app
aws logs create-log-group --log-group-name /ecs/otel-collector
Define ECS Task
Create a task definition (task-definition.json) for ECS that defines both the Node.js application container and the OpenTelemetry Collector container.
task-definition.json
{
"family": "your-nodejs-app-task",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::<aws_account_id>:role/ecsTaskExecutionRole",
"containerDefinitions": [
{
"name": "your-nodejs-app",
"image": "<aws_account_id>.dkr.ecr.<region>.amazonaws.com/your-nodejs-app:latest",
"cpu": 128,
"portMappings": [
{
"containerPort": 3000,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/your-nodejs-app",
"awslogs-region": "<aws-region>",
"awslogs-stream-prefix": "ecs"
}
}
},
{
"name": "otel-collector",
"image": "<aws_account_id>.dkr.ecr.<aws-region>.amazonaws.com/otel-collector:latest",
"cpu": 128,
"essential": false,
"command": ["--config=/etc/collector-config.yaml"],
"environment": [
{
"name": "LOGZIO_TRACING_TOKEN",
"value": "<logzio_tracing_token>"
},
{
"name": "LOGZIO_REGION",
"value": "<logzio_region>"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/otel-collector",
"awslogs-region": "<aws-region>",
"awslogs-stream-prefix": "ecs"
}
}
}
]
}
Deploy to ECS
Create an ECS Cluster: Create a cluster to deploy your containers:
aws ecs create-cluster --cluster-name your-app-cluster --region <aws-region>
Register the Task Definition:
aws ecs register-task-definition --cli-input-json file://ecs/task-definition.json
Create ECS Service: Deploy the task definition using a service:
aws ecs create-service \
--cluster your-app-cluster \
--service-name your-nodejs-app-service \
--task-definition your-nodejs-app-task \
--desired-count 1 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[\"YOUR_SUBNET_ID\"],securityGroups=[\"YOUR_SECURITY_GROUP_ID\"],assignPublicIp=ENABLED}" \
--region <aws-region>
Verify Application and Tracing
After deploying, run your application to generate activity that will create tracing data. Wait a few minutes, then check the Logz.io dashboard to confirm that traces are being sent correctly.
Troubleshooting
If traces are not being sent despite instrumentation, follow these steps:
Collector not installed
The OpenTelemetry collector may not be installed on your system.
Suggested remedy
Ensure the OpenTelemetry collector is installed and configured to receive traces from your hosts.
Collector path not configured
The collector may not have the correct endpoint configured for the receiver.
Suggested remedy
Verify the configuration file lists the following endpoints:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
Ensure the endpoint is correctly specified in the instrumentation code. Use Logz.io's integrations hub to ship your data.
Traces not generated
The instrumentation code may be incorrect even if the collector and endpoints are properly configured.
Suggested remedy
- Check if the instrumentation can output traces to a console exporter.
- Use a webhook to check if the traces are going to the output.
- Check the metrics endpoint (http://<<COLLECTOR-HOST>>:8888/metrics) to see spans received and sent. Replace <<COLLECTOR-HOST>> with your collector's address.
If issues persist, refer to Logz.io's integrations hub and re-instrument the application.
Wrong exporter/protocol/endpoint
Incorrect exporter, protocol, or endpoint configuration.
The correct endpoints are:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "<<COLLECTOR-URL>>:4317"
      http:
        endpoint: "<<COLLECTOR-URL>>:4318/v1/traces"
Suggested remedy
- Activate debug logs in the collector's configuration:
service:
  telemetry:
    logs:
      level: "debug"
Debug logs indicate the status code of the http/https post request.
If the post request is not successful, check if the collector is configured to use the correct exporter, protocol, and/or endpoint.
A successful post request will log status code 200; failure reasons will also be logged.
Collector failure
The collector may fail to generate traces despite sending debug
logs.
Suggested remedy
On Linux and MacOS, view collector logs:
journalctl | grep otelcol
To only see errors:
journalctl | grep otelcol | grep Error
Otherwise, navigate to http://localhost:8888/metrics. This endpoint exposes the collector's own metrics, where you can see events happening within the collector: spans received, spans sent, and other errors.
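For example, you can pull the receiver and exporter counters from the command line (assuming a local collector with the default telemetry settings):
curl -s http://localhost:8888/metrics | grep -E 'otelcol_(receiver|exporter)'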
Exporter failure
The exporter configuration may be incorrect, causing trace export issues.
Suggested remedy
If you are unable to export traces to a destination, this may be caused by the following:
- There is a network configuration issue.
- The exporter configuration is incorrect.
- The destination is unavailable.
To investigate this issue:
- Make sure that the exporters and service: pipelines are configured correctly.
- Check the collector logs and zpages for potential issues.
- Check your network configuration, such as firewall, DNS, or proxy.
Metrics like the following can provide insights:
# HELP otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE otelcol_exporter_enqueue_failed_metric_points counter
otelcol_exporter_enqueue_failed_metric_points{exporter="logging",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest"} 0
Receiver failure
The receiver may not be configured correctly.
Suggested remedy
If you are unable to receive data, this may be caused by the following:
- There is a network configuration issue.
- The receiver configuration is incorrect.
- The receiver is defined in the receivers section, but not enabled in any pipelines.
- The client configuration is incorrect.
Metrics for receivers can help diagnose issues:
# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 34
# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0582dab5-efb8-4061-94a7-60abdc9867e1",service_version="latest",transport="grpc"} 0