AWS EC2

Send your AWS EC2 logs and metrics to Logz.io using the OpenTelemetry collector.

note

For a much easier and more efficient way to collect and send metrics, consider using the Logz.io telemetry collector.

Follow these steps to manually configure OpenTelemetry on your Linux machine:

Create a Logz.io directory:

sudo mkdir /opt/logzio-agent

Download OpenTelemetry tar.gz:

curl -fsSL "https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.111.0/otelcol-contrib_0.111.0_linux_amd64.tar.gz" >./otelcol-contrib.tar.gz

Extract the OpenTelemetry binary:

sudo tar -zxf ./otelcol-contrib.tar.gz --directory /opt/logzio-agent otelcol-contrib
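
Optionally, confirm the binary was extracted correctly by printing its version. This is just a sanity check; the contrib collector binary accepts a --version flag:

/opt/logzio-agent/otelcol-contrib --version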

Create the OpenTelemetry config file:

sudo touch /opt/logzio-agent/otel_config.yaml

And copy the following OpenTelemetry config content into the config file.

Replace <<LOG-SHIPPING-TOKEN>>, <<LISTENER-HOST>>, <<LOGZIO_ACCOUNT_REGION_CODE>>, and <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with the relevant parameters from your Logz.io account.

receivers:
  filelog/aws_ec2_system:
    include:
      - /var/log/*.log
    include_file_path: true
    operators:
      - type: move
        from: attributes["log.file.name"]
        to: attributes["log_file_name"]
      - type: move
        from: attributes["log.file.path"]
        to: attributes["log_file_path"]
    attributes:
      type: agent-ec2-linux
  hostmetrics/aws_ec2_system:
    collection_interval: 15s
    scrapers:
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      disk:
      load:
      filesystem:
      memory:
        metrics:
          system.memory.utilization:
            enabled: true
      network:
      paging:
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
processors:
  resourcedetection/system:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
  resourcedetection/ec2:
    detectors: ["ec2"]
    ec2:
      tags:
        - ^*$
  filter:
    metrics:
      include:
        match_type: strict
        metric_names: ["system.cpu.time", "system.cpu.load_average.1m", "system.cpu.load_average.5m", "system.cpu.load_average.15m", "system.cpu.utilization", "system.memory.usage", "system.memory.utilization", "system.filesystem.usage", "system.disk.io", "system.disk.io_time", "system.disk.operation_time", "system.network.connections", "system.network.io", "system.network.packets", "system.network.errors", "process.cpu.time", "process.memory.usage", "process.disk.io", "process.memory.usage", "process.memory.virtual"]
exporters:
  logging:
  logzio/logs:
    account_token: <<LOG-SHIPPING-TOKEN>>
    region: <<LOGZIO_ACCOUNT_REGION_CODE>> # Default is US
  prometheusremotewrite:
    endpoint: https://<<LISTENER-HOST>>:8053
    headers:
      Authorization: Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>
    resource_to_telemetry_conversion:
      enabled: true
    target_info:
      enabled: false
service:
  pipelines:
    logs:
      receivers:
        - filelog/aws_ec2_system
      processors:
        - resourcedetection/system
        - resourcedetection/ec2
      exporters: [logzio/logs]
    metrics:
      receivers:
        - hostmetrics/aws_ec2_system
      processors:
        - resourcedetection/system
        - resourcedetection/ec2
        - filter
      exporters: [prometheusremotewrite]
  telemetry:
    logs:
      level: "debug"
    metrics:
      address: localhost:8888
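
Optionally, you can ask the collector to validate the configuration before wiring it into a service. Recent otelcol-contrib releases ship a validate subcommand; adjust if your version behaves differently:

sudo /opt/logzio-agent/otelcol-contrib validate --config /opt/logzio-agent/otel_config.yaml
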
note

If you are already running OpenTelemetry metrics on port 8888, you will need to edit the address field in the config file.
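
For example, to move the collector's internal metrics endpoint to port 8889 you could edit the telemetry section directly, or patch it in place (a sketch; the sed substitution assumes the default address shown above):

sudo sed -i 's/localhost:8888/localhost:8889/' /opt/logzio-agent/otel_config.yaml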

Important

The IAM role assigned to the EC2 instance must include the ec2:DescribeTags permission in its policy.
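
If the instance role is missing that permission, one way to add it is an inline policy via the AWS CLI. The role and policy names below are placeholders; substitute your own:

aws iam put-role-policy \
  --role-name <YOUR-EC2-INSTANCE-ROLE> \
  --policy-name allow-ec2-describe-tags \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ec2:DescribeTags","Resource":"*"}]}'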

Create service file:

sudo touch /etc/systemd/system/logzioOTELCollector.service

And copy the following content into the service file:

[Unit]
Description=OTEL collector for collecting logs/metrics and exporting them to Logz.io.

[Service]
ExecStart=/opt/logzio-agent/otelcol-contrib --config /opt/logzio-agent/otel_config.yaml

[Install]
WantedBy=multi-user.target
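
After saving the unit file, reload systemd and enable the service so it starts now and again on boot:

sudo systemctl daemon-reload
sudo systemctl enable --now logzioOTELCollector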

Manage your OpenTelemetry on Linux

To manage OpenTelemetry on your machine, use the following commands:

  • Start service: sudo systemctl start logzioOTELCollector
  • Stop service: sudo systemctl stop logzioOTELCollector
  • Service logs: sudo systemctl status -l logzioOTELCollector
  • Delete service:
    sudo systemctl stop logzioOTELCollector
    sudo systemctl disable logzioOTELCollector
    sudo systemctl reset-failed logzioOTELCollector 2>/dev/null
    sudo rm /etc/systemd/system/logzioOTELCollector.service 2>/dev/null
    sudo rm /usr/lib/systemd/system/logzioOTELCollector.service 2>/dev/null
    sudo rm /etc/init.d/logzioOTELCollector 2>/dev/null
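
To follow the collector's output in real time, you can also tail the service's journal:

sudo journalctl -u logzioOTELCollector -f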

Send AWS EC2 metrics to Logz.io via CloudWatch Metric Streams

Deploy this integration to send your Amazon EC2 metrics to Logz.io.

This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon EC2 metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.


Before you begin, you'll need:

  • An active Logz.io account

Configure AWS to forward metrics to Logz.io

1. Set the required minimum IAM permissions

Configure the minimum required IAM permissions as follows (a CLI sketch for creating a matching policy follows the list):

  • Amazon S3:
    • s3:CreateBucket
    • s3:DeleteBucket
    • s3:PutObject
    • s3:GetObject
    • s3:DeleteObject
    • s3:ListBucket
    • s3:AbortMultipartUpload
    • s3:GetBucketLocation
  • AWS Lambda:
    • lambda:CreateFunction
    • lambda:DeleteFunction
    • lambda:InvokeFunction
    • lambda:GetFunction
    • lambda:UpdateFunctionCode
    • lambda:UpdateFunctionConfiguration
    • lambda:AddPermission
    • lambda:RemovePermission
    • lambda:ListFunctions
  • Amazon CloudWatch:
    • cloudwatch:PutMetricData
    • cloudwatch:PutMetricStream
    • logs:CreateLogGroup
    • logs:CreateLogStream
    • logs:PutLogEvents
    • logs:DeleteLogGroup
    • logs:DeleteLogStream
  • AWS Kinesis Firehose:
    • firehose:CreateDeliveryStream
    • firehose:DeleteDeliveryStream
    • firehose:PutRecord
    • firehose:PutRecordBatch
  • IAM:
    • iam:PassRole
    • iam:CreateRole
    • iam:DeleteRole
    • iam:AttachRolePolicy
    • iam:DetachRolePolicy
    • iam:GetRole
    • iam:CreatePolicy
    • iam:DeletePolicy
    • iam:GetPolicy
  • Amazon CloudFormation:
    • cloudformation:CreateStack
    • cloudformation:DeleteStack
    • cloudformation:UpdateStack
    • cloudformation:DescribeStacks
    • cloudformation:DescribeStackEvents
    • cloudformation:ListStackResources
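
As a sketch, assuming you save the actions above into a policy document named minimal-deploy-permissions.json (a hypothetical file name), you could create the policy with the AWS CLI and attach it to the user or role that will deploy the stack:

aws iam create-policy \
  --policy-name logzio-ec2-metric-stream-deploy \
  --policy-document file://minimal-deploy-permissions.json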

2. Create stack in the relevant region

To deploy this project, click the button that matches the region you wish to deploy your stack to:

Region | Deployment
us-east-1 | Deploy to AWS
us-east-2 | Deploy to AWS
us-west-1 | Deploy to AWS
us-west-2 | Deploy to AWS
eu-central-1 | Deploy to AWS
eu-central-2 | Deploy to AWS
eu-north-1 | Deploy to AWS
eu-west-1 | Deploy to AWS
eu-west-2 | Deploy to AWS
eu-west-3 | Deploy to AWS
eu-south-1 | Deploy to AWS
eu-south-2 | Deploy to AWS
sa-east-1 | Deploy to AWS
ap-northeast-1 | Deploy to AWS
ap-northeast-2 | Deploy to AWS
ap-northeast-3 | Deploy to AWS
ap-south-1 | Deploy to AWS
ap-south-2 | Deploy to AWS
ap-southeast-1 | Deploy to AWS
ap-southeast-2 | Deploy to AWS
ap-southeast-3 | Deploy to AWS
ap-southeast-4 | Deploy to AWS
ap-east-1 | Deploy to AWS
ca-central-1 | Deploy to AWS
ca-west-1 | Deploy to AWS
af-south-1 | Deploy to AWS
me-south-1 | Deploy to AWS
me-central-1 | Deploy to AWS
il-central-1 | Deploy to AWS

3. Specify stack details

Specify the stack details as per the table below, select the acknowledgement checkboxes, and choose Create stack. A CLI alternative is sketched after the table.

Parameter | Description | Required/Default
logzioListener | Logz.io listener URL for your region (for more details, see the regions page), e.g., https://listener.logz.io:8053 | Required
logzioToken | Your Logz.io metrics shipping token. | Required
awsNamespaces | Comma-separated list of AWS namespaces to monitor. See this list of namespaces. Use the value all-namespaces to automatically add all namespaces. | At least one of awsNamespaces or customNamespace is required
customNamespace | A custom namespace for CloudWatch metrics. Used to specify a namespace unique to your setup, separate from the standard AWS namespaces. | At least one of awsNamespaces or customNamespace is required
logzioDestination | Your Logz.io destination URL. Choose the relevant endpoint from the drop-down list based on your Logz.io account region. | Required
httpEndpointDestinationIntervalInSeconds | Buffer time in seconds before Kinesis Data Firehose delivers data. | Default: 60
httpEndpointDestinationSizeInMBs | Buffer size in MBs before Kinesis Data Firehose delivers data. | Default: 5
debugMode | Enable debug mode for detailed logging (true/false). | Default: false
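
If you prefer the CLI to the console, the same stack can be created with aws cloudformation create-stack. This is a sketch only: the template URL is the one behind the Deploy to AWS button for your region, the capability flags mirror the console checkboxes, and the parameter values are placeholders:

aws cloudformation create-stack \
  --stack-name logzio-ec2-metric-stream \
  --template-url "<TEMPLATE-URL-FOR-YOUR-REGION>" \
  --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
  --parameters \
    ParameterKey=logzioListener,ParameterValue=https://listener.logz.io:8053 \
    ParameterKey=logzioToken,ParameterValue=<<PROMETHEUS-METRICS-SHIPPING-TOKEN>> \
    ParameterKey=awsNamespaces,ParameterValue=AWS/EC2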

4. View your metrics

Allow some time for data ingestion, then open your Logz.io metrics account.

To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
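
If metrics do not appear after a few minutes, you can confirm from the CLI that the stack finished deploying and that its Firehose delivery stream exists (the stack name matches the earlier sketch and is a placeholder):

aws cloudformation describe-stacks --stack-name logzio-ec2-metric-stream --query "Stacks[0].StackStatus"
aws firehose list-delivery-streams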