AWS ECS Fargate
Amazon ECS Fargate is a serverless compute engine for containers that lets you run tasks without managing servers. Use it to send logs, metrics, and traces to Logz.io to monitor your data.
Send Logs, Metrics, and Traces via OpenTelemetry as a Sidecar
1. Set Up the SSM Parameter
Go to AWS Systems Manager > Parameter Store to create an SSM parameter that stores the OTel configuration:
- Set the Name to `logzioOtelConfig.yaml`.
- Keep Type as `String` and Data type as `text`.
- In the Value field, use the following configuration as a starting point, adjusting values as needed for your environment:
receivers:
awsxray:
endpoint: '0.0.0.0:2000'
transport: udp
otlp:
protocols:
grpc:
endpoint: '0.0.0.0:4317'
http:
endpoint: '0.0.0.0:4318'
awsecscontainermetrics: null
fluentforward:
endpoint: 'unix:///var/run/fluent.sock'
processors:
batch:
send_batch_size: 10000
timeout: 1s
tail_sampling:
policies:
- name: policy-errors
type: status_code
status_code:
status_codes:
- ERROR
- name: policy-slow
type: latency
latency:
threshold_ms: 1000
- name: policy-random-ok
type: probabilistic
probabilistic:
sampling_percentage: 10
transform/log_type:
error_mode: ignore
log_statements:
- context: resource
statements:
- set(resource.attributes["type"], "ecs-fargate") where resource.attributes["type"] == nil
exporters:
logzio/traces:
account_token: '${LOGZIO_TRACE_TOKEN}'
region: '${LOGZIO_REGION}'
prometheusremotewrite:
endpoint: '${LOGZIO_LISTENER}'
external_labels:
aws_env: ecs-fargate
headers:
Authorization: 'Bearer ${LOGZIO_METRICS_TOKEN}'
resource_to_telemetry_conversion:
enabled: true
target_info:
enabled: false
logzio/logs:
account_token: '${LOGZIO_LOGS_TOKEN}'
region: '${LOGZIO_REGION}'
service:
pipelines:
traces:
receivers:
- awsxray
- otlp
processors:
- batch
exporters:
- logzio/traces
metrics:
receivers:
- otlp
- awsecscontainermetrics
processors:
- batch
exporters:
- prometheusremotewrite
logs:
receivers:
- fluentforward
- otlp
processors:
- transform/log_type
- batch
exporters:
- logzio/logs
telemetry:
logs:
level: info
Save the new SSM parameter and keep its ARN handy - you’ll need it for the next step.
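If you prefer the CLI, here's a minimal sketch of the same step, assuming you saved the configuration above locally as otel-config.yaml (a hypothetical filename):

# Create the SSM parameter from a local copy of the collector configuration.
aws ssm put-parameter \
  --name "logzioOtelConfig.yaml" \
  --type String \
  --data-type text \
  --value file://otel-config.yaml

Note that standard-tier parameters are limited to 4096 characters; if your configuration grows beyond that, add `--tier Advanced`.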
2. Grant ECS Task Access to SSM
Create an IAM policy that allows your ECS task to read the SSM parameter. Note that ECS retrieves container secrets using the task execution role, so that's the role the policy must end up on.
Go to IAM > Policies.
Click Create policy, choose the JSON tab under Specify permissions, and paste the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": [
"<ARN_OF_SECRET_PARAMETER_FROM_STEP_1>"
]
}
]
}
Create the policy and give it a name (e.g., `LogzioOtelSSMReadAccess`).
Go to IAM > Roles and either:
- Attach the new policy to the task execution role your ECS tasks already use (e.g., ecsTaskExecutionRole), or
- Create a new task execution role and attach the policy during setup.
If you created a new role, save its ARN — you’ll need it in the next step.
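The same step can be scripted; this sketch assumes the policy JSON above was saved locally as ssm-read-policy.json (a hypothetical filename) and that you're attaching it to your existing execution role:

# Create the policy from the JSON document above.
aws iam create-policy \
  --policy-name LogzioOtelSSMReadAccess \
  --policy-document file://ssm-read-policy.json

# Attach it to the execution role your task definition references.
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/LogzioOtelSSMReadAccess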
3. Add OpenTelemetry to Task Definition
Update your existing ECS tasks to include the OpenTelemetry Collector by:
- Adding a FireLens log configuration to any of your existing task definitions
- Adding a sidecar container running the OpenTelemetry Collector
{
"family": "<YOUR_WANTED_TASK_DEFINITION_FAMILY_NAME>",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ecsTaskExecutionRole",
"taskRoleArn": "<TASK_ROLE_ARN_FROM_STEP_2>",
"containerDefinitions": [
{
… <Existing container definition>,
"logConfiguration": {
"logDriver": "awsfirelens",
"options": {
"Name": "opentelemetry"
}
}
},
{
"name": "otel-collector",
"image": "otel/opentelemetry-collector-contrib",
"cpu": 0,
"essential": false,
"command": [
"--config",
"env:OTEL_CONFIG"
],
"environment": [
{
"name": "LOGZIO_LOGS_TOKEN",
"value": "${LOGZIO_LOGS_TOKEN}"
},
{
"name": "LOGZIO_METRICS_TOKEN",
"value": "${LOGZIO_METRICS_TOKEN}"
},
{
"name": "LOGZIO_TRACE_TOKEN",
"value": "${LOGZIO_TRACE_TOKEN}"
},
{
"name": "LOGZIO_REGION",
"value": "${LOGZIO_REGION}"
},
{
"name": "LOGZIO_LISTENER",
"value": "${LOGZIO_LISTENER}"
        }
],
"secrets": [
{
"name": "OTEL_CONFIG",
"valueFrom": "logzioOtelConfig.yaml"
}
],
// Optional: Use this to keep logs for debugging and troubleshooting
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/otel-collector",
"awslogs-region": "<AWS-REGION>",
"awslogs-stream-prefix": "ecs"
}
      },
      "firelensConfiguration": {
        "type": "fluentbit"
      }
    }
  ]
}
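Once the task definition JSON is ready, you can register it with a single CLI call; this assumes you saved it locally as task-definition.json (a hypothetical filename):

# Register the updated task definition with ECS.
aws ecs register-task-definition --cli-input-json file://task-definition.json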
If you’d like to use a centralized log collection setup instead of the OpenTelemetry Collector, reach out to Logz.io Support or your Customer Success Manager for guidance.
Send Logs and Metrics via Kinesis Firehose
Send Logs
This project deploys instrumentation that ships CloudWatch logs to Logz.io via a Firehose Delivery Stream. It uses a CloudFormation template to create a Stack that deploys:
- A Firehose Delivery Stream with Logz.io as the stream's destination.
- A Lambda function that adds Subscription Filters to the CloudWatch Log Groups defined by the user's input.
- Roles, log groups, and other resources necessary for this instrumentation.
If you want to send logs from specific log groups, use `customLogGroups` instead of `services`, since specifying `services` automatically sends all logs from those services, regardless of any custom log groups you define.
Auto Deploy via CloudFormation
To deploy this project, click the button that matches the region you wish to deploy your stack to:
Set Stack Parameters
Specify the stack details as per the table below, check the acknowledgement checkboxes, and select Create stack.
Parameter | Description | Required/Default |
---|---|---|
logzioToken | The token of the account you want to ship logs to. | Required |
logzioListener | Listener host. | Required |
logzioType | The log type you'll use with this Lambda. This can be a built-in log type, or a custom log type. | logzio_firehose |
services | A comma-separated list of services you want to collect logs from. Supported options are: `apigateway`, `rds`, `cloudhsm`, `cloudtrail`, `codebuild`, `connect`, `elasticbeanstalk`, `ecs`, `eks`, `aws-glue`, `aws-iot`, `lambda`, `macie`, `amazon-mq`. | - |
customLogGroups | A comma-separated list of custom log groups to collect logs from, or the ARN of the Secret parameter (explanation below) storing the log groups list if it exceeds 4096 characters. Note: You can also specify a prefix of the log group names by using a wildcard at the end (e.g., prefix* ). This will match all log groups that start with the specified prefix. | - |
useCustomLogGroupsFromSecret | If you want to provide a `customLogGroups` list that exceeds 4096 characters, set this to `true` and configure your `customLogGroups` as described below. | false |
triggerLambdaTimeout | The number of seconds that Lambda allows the trigger function to run before stopping it. | 60 |
triggerLambdaMemory | The memory allocated to the trigger function, in MB (allocated CPU is proportional to the configured memory). | 512 |
triggerLambdaLogLevel | Log level for the Lambda function. Can be one of: debug , info , warn , error , fatal , panic | info |
httpEndpointDestinationIntervalInSeconds | The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination | 60 |
httpEndpointDestinationSizeInMBs | The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination | 5 |
AWS limits every log group to have up to 2 subscription filters. If your chosen log group already has 2 subscription filters, the trigger function won't be able to add another one.
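If you'd rather script the deployment than click through the console, a hedged CLI equivalent looks like this; <TEMPLATE_URL> stands for the template behind the region button above, and the stack name is arbitrary:

aws cloudformation create-stack \
  --stack-name logzio-firehose-logs \
  --template-url <TEMPLATE_URL> \
  --parameters \
      ParameterKey=logzioToken,ParameterValue=<LOGZIO_TOKEN> \
      ParameterKey=logzioListener,ParameterValue=<LISTENER_HOST> \
      ParameterKey=services,ParameterValue=ecs \
  --capabilities CAPABILITY_NAMED_IAM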
Handle Large Log Group Lists
If your `customLogGroups` list exceeds the 4096-character limit, follow these steps:
- Open AWS Secrets Manager.
- Click Store a new secret.
- Choose Other type of secret.
- For the key, use `logzioCustomLogGroups`; in the value, store your comma-separated custom log groups list.
- Name your secret, for example, `LogzioCustomLogGroups`.
- Copy the new secret's ARN.
- In your stack, set:
  - `customLogGroups` to the secret ARN you just copied
  - `useCustomLogGroupsFromSecret` to `true`
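As a CLI sketch of the same flow (the secret name and log group names below are examples):

# Store the oversized log group list as a secret; the key must be
# logzioCustomLogGroups, per the steps above.
aws secretsmanager create-secret \
  --name LogzioCustomLogGroups \
  --secret-string '{"logzioCustomLogGroups":"/aws/lambda/app-a,/aws/lambda/app-b,my-prefix*"}'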
Send Metrics
Deploy this integration to send your AWS ECS metrics via Firehose to Logz.io.
This integration creates a Kinesis Data Firehose delivery stream linked to a CloudWatch metric stream, which then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.
Before you begin, you'll need:
- An active Logz.io account
Configure AWS to forward metrics to Logz.io
1. Set the required minimum IAM permissions
Before you deploy, make sure the deploying user has the minimum required IAM permissions, as follows:
- Amazon S3:
s3:CreateBucket
s3:DeleteBucket
s3:PutObject
s3:GetObject
s3:DeleteObject
s3:ListBucket
s3:AbortMultipartUpload
s3:GetBucketLocation
- AWS Lambda:
lambda:CreateFunction
lambda:DeleteFunction
lambda:InvokeFunction
lambda:GetFunction
lambda:UpdateFunctionCode
lambda:UpdateFunctionConfiguration
lambda:AddPermission
lambda:RemovePermission
lambda:ListFunctions
- Amazon CloudWatch:
cloudwatch:PutMetricData
cloudwatch:PutMetricStream
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
logs:DeleteLogGroup
logs:DeleteLogStream
- AWS Kinesis Firehose:
firehose:CreateDeliveryStream
firehose:DeleteDeliveryStream
firehose:PutRecord
firehose:PutRecordBatch
- IAM:
iam:PassRole
iam:CreateRole
iam:DeleteRole
iam:AttachRolePolicy
iam:DetachRolePolicy
iam:GetRole
iam:CreatePolicy
iam:DeletePolicy
iam:GetPolicy
- Amazon CloudFormation:
cloudformation:CreateStack
cloudformation:DeleteStack
cloudformation:UpdateStack
cloudformation:DescribeStacks
cloudformation:DescribeStackEvents
cloudformation:ListStackResources
2. Create stack in the relevant region
To deploy this project, click the button that matches the region you wish to deploy your stack to:
3. Specify stack details
Specify the stack details as per the table below, check the acknowledgement checkboxes, and select Create stack.
Parameter | Description | Required/Default |
---|---|---|
logzioListener | Logz.io listener URL for your region (for more details, see the regions page), e.g., `https://listener.logz.io:8053`. | Required |
logzioToken | Your Logz.io metrics shipping token. | Required |
awsNamespaces | Comma-separated list of AWS namespaces to monitor. See this list of namespaces. Use value all-namespaces to automatically add all namespaces. | At least one of awsNamespaces or customNamespace is required |
customNamespace | A custom namespace for CloudWatch metrics. Used to specify a namespace unique to your setup, separate from the standard AWS namespaces. | At least one of awsNamespaces or customNamespace is required |
logzioDestination | Your Logz.io destination URL. Choose the relevant endpoint from the drop down list based on your Logz.io account region. | Required |
httpEndpointDestinationIntervalInSeconds | Buffer time in seconds before Kinesis Data Firehose delivers data. | 60 |
httpEndpointDestinationSizeInMBs | Buffer size in MBs before Kinesis Data Firehose delivers data. | 5 |
debugMode | Enable debug mode for detailed logging (true/false). | false |
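As with the logs stack, the console flow can be scripted; this hedged sketch uses placeholder values and the template behind your region's button:

aws cloudformation create-stack \
  --stack-name logzio-firehose-metrics \
  --template-url <TEMPLATE_URL> \
  --parameters \
      ParameterKey=logzioToken,ParameterValue=<LOGZIO_METRICS_TOKEN> \
      ParameterKey=logzioListener,ParameterValue=https://listener.logz.io:8053 \
      ParameterKey=logzioDestination,ParameterValue=<LOGZIO_DESTINATION> \
      ParameterKey=awsNamespaces,ParameterValue=AWS/ECS \
  --capabilities CAPABILITY_NAMED_IAM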
4. View your metrics
Allow some time for data ingestion, then open your Logz.io metrics account.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
Set up Firehose metrics via Terraform
This setup includes creating a Kinesis Firehose delivery stream, CloudWatch metric stream, and necessary IAM roles.
The setup excludes the Lambda function for adding namespaces, as CloudFormation automatically triggers this during resource creation.
# The snippet below assumes a Secrets Manager secret (the name is a placeholder)
# holding your Logz.io shipping token under the LOGZIO_TOKEN key, and that the
# referenced IAM roles (firehose_logging_role, metrics_stream_role) are defined
# elsewhere in your configuration.
data "aws_secretsmanager_secret" "logzio_shipping_credentials" {
  name = "<YOUR_SECRET_NAME>"
}

data "aws_secretsmanager_secret_version" "logzio_shipping_credentials" {
  secret_id = data.aws_secretsmanager_secret.logzio_shipping_credentials.id
}

locals {
  metrics_namespaces = "CloudWatchSynthetics, AWS/AmazonMQ, AWS/RDS, AWS/DocDB, AWS/ElastiCache"
  logzio_token       = jsondecode(data.aws_secretsmanager_secret_version.logzio_shipping_credentials.secret_string)["LOGZIO_TOKEN"]
}
resource "aws_kinesis_firehose_delivery_stream" "logzio_delivery_stream" {
name = "logzio-delivery-stream"
destination = "http_endpoint"
http_endpoint_configuration {
url = "https://listener-otlp-aws-metrics-stream-us.logz.io"
name = "logzio_endpoint"
retry_duration = 60
buffering_size = 5
buffering_interval = 60
role_arn = aws_iam_role.firehose_logging_role.arn
s3_backup_mode = "FailedDataOnly"
access_key = local.logzio_token
request_configuration {
content_encoding = "NONE"
}
}
}
resource "aws_cloudwatch_metric_stream" "logzio_metric_stream" {
name = "logzio-metric-stream"
role_arn = aws_iam_role.metrics_stream_role.arn
firehose_arn = aws_kinesis_firehose_delivery_stream.logzio_delivery_stream.arn
output_format = "opentelemetry1.0"
include_filter {
namespace = "AWS/RDS"
}
}
Make sure the `url` matches your region (view region settings). Replace `LOGZIO_TOKEN` with your Logz.io shipping token.
Next, deploy your Terraform code to set up the Firehose stream and related resources, and verify that metrics are sent correctly to the Logz.io listener endpoint.
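A typical deployment sequence, assuming the snippet above lives in your working directory:

terraform init      # download the AWS provider
terraform plan      # review the Firehose stream, metric stream, and IAM changes
terraform apply     # create the resources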
The Lambda function `logzioMetricStreamAddNamespacesLambda` has been removed from the script, as the CloudFormation template automatically triggers it during creation.
For additional namespaces or configurations, adjust the `metrics_namespaces` and `include_filter` fields as needed.
Send Logs and Traces via a Centralized Container
1. Create an SSM Parameter to store the OTEL configuration
Go to AWS Systems Manager > Parameter Store:
- Set the Name to `logzioOtelConfig.yaml`.
- Keep Type as `String` and Data type as `text`.
- In the Value field, use the following configuration as a starting point, adjusting values as needed for your environment:
receivers:
awsxray:
endpoint: 0.0.0.0:2000
transport: udp
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
fluentforward:
endpoint: 0.0.0.0:24284
processors:
batch:
send_batch_size: 10000
timeout: 1s
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
transform/log_type:
error_mode: ignore
log_statements:
- context: resource
statements:
- set(resource.attributes["type"], "ecs-fargate") where resource.attributes["type"] == nil
exporters:
logzio/traces:
account_token: ${LOGZIO_TRACE_TOKEN}
region: ${LOGZIO_REGION}
logzio/logs:
account_token: ${LOGZIO_LOGS_TOKEN}
region: ${LOGZIO_REGION}
service:
pipelines:
traces:
receivers: [ awsxray, otlp ]
processors: [ batch ]
exporters: [ logzio/traces ]
logs:
receivers: [ fluentforward, otlp ]
processors: [ transform/log_type, batch ]
exporters: [ logzio/logs ]
telemetry:
logs:
level: "info"
Save the new SSM parameter and keep its ARN handy - you’ll need it for the next step.
2. Create an IAM role to allow the ECS task to access the SSM Parameter
Go to IAM > Policies.
Click Create policy, choose the JSON tab under Specify permissions, and paste the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": [
"<ARN_OF_SECRET_PARAMETER_FROM_STEP_1>"
]
}
]
}
Create the policy and give it a name (e.g., `LogzioOtelSSMReadAccess`).
Go to IAM > Roles and either:
- Attach the new policy to the task execution role your ECS tasks already use (e.g., ecsTaskExecutionRole), or
- Create a new task execution role and attach the policy during setup. ECS retrieves container secrets using the task execution role, so that's the role the policy must end up on.
If you created a new role, save its ARN — you’ll need it in the next step.
3. Create a centralized container
Create a new ECS task definition for the OpenTelemetry Collector:
{
"family": "...",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ecsTaskExecutionRole",
"taskRoleArn": "<TASK_ROLE_ARN_FROM_STEP_2>",
"containerDefinitions": [
{
"name": "otel-collector",
"image": "otel/opentelemetry-collector-contrib",
"cpu": 0,
"portMappings": [
{
"name": "otel-collector-4317",
"hostPort": 4317,
"protocol": "tcp",
"containerPort": 4317,
"appProtocol": "grpc"
},
{
"name": "otel-collector-4318",
"hostPort": 4318,
"protocol": "tcp"
"containerPort": 4318,
}
],
"essential": false,
"command": [
"--config",
"env:OTEL_CONFIG"
],
"environment": [
{
"name": "LOGZIO_LOGS_TOKEN",
"value": "${LOGZIO_LOGS_TOKEN}"
},
{
"name": "LOGZIO_METRICS_TOKEN",
"value": "${LOGZIO_METRICS_TOKEN}"
},
{
"name": "LOGZIO_TRACE_TOKEN",
"value": "${LOGZIO_TRACE_TOKEN}"
},
{
"name": "LOGZIO_REGION",
"value": "${LOGZIO_REGION}"
},
{
"name": "LOGZIO_LISTENER",
"value": "${LOGZIO_LISTENER}"
        }
      ],
"secrets": [
{
"name": "OTEL_CONFIG",
"valueFrom": "logzioOtelConfig.yaml"
}
],
// Optional: Use this to keep logs for debugging and troubleshooting
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/otel-collector",
"awslogs-region": "<AWS-REGION>",
"awslogs-stream-prefix": "ecs"
}
      },
      "firelensConfiguration": {
        "type": "fluentbit"
      }
    }
  ]
}
When enabling debugging, you may need to create a CloudWatch log group for your OpenTelemetry Collector.
You can do this via the AWS Console or using the AWS CLI:
aws logs create-log-group --log-group-name /ecs/otel-collector
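After registering the task definition, run it as a long-lived service so application tasks can reach the collector. The cluster, subnet, and security group values in this sketch are placeholders; make sure the security group allows inbound traffic on ports 4317, 4318, and 24284:

# Register the collector task definition (saved locally, hypothetical filename).
aws ecs register-task-definition --cli-input-json file://otel-collector-task.json

# Run it as a Fargate service.
aws ecs create-service \
  --cluster <YOUR_CLUSTER> \
  --service-name otel-collector \
  --task-definition <YOUR_TASK_FAMILY> \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<SUBNET_ID>],securityGroups=[<SECURITY_GROUP_ID>]}"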
4. Enable FireLens for Logs and Traces
To enable telemetry collection, you’ll need to add a FireLens log configuration to each relevant container in your ECS task definition.
For Logs
For every container you want to collect logs from, add the following `logConfiguration` block to its container definition:
"logConfiguration": {
"logDriver": "awsfirelens",
"options": {
"Name": "opentelemetry",
"Host": "<CENTRAL_COLLECTOR_HOST_OR_IP>",
"Port": "24284",
"TLS": "off"
}
},
For Traces
For each application you want to collect traces from, configure the instrumentation to send trace data to the centralized OpenTelemetry Collector container as its endpoint.
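For SDKs that follow the standard OpenTelemetry environment variables, pointing an application at the central collector can be as simple as setting the following (an illustrative sketch, not taken from the original task definitions):

# Standard OTel SDK variables; the host/IP is the central collector from step 3.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://<CENTRAL_COLLECTOR_HOST_OR_IP>:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_SERVICE_NAME="<YOUR_SERVICE_NAME>"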