
AWS ECS Fargate

Amazon ECS Fargate is a serverless compute engine for containers that lets you run tasks without managing servers. Use the integrations below to send logs, metrics, and traces from your Fargate tasks to Logz.io.

Send Logs, Metrics, and Traces via OpenTelemetry as a Sidecar

1. Set Up the SSM Parameter

Go to AWS Systems Manager > Parameter Store and create an SSM parameter to store the OpenTelemetry Collector configuration:

  • Set the Name to logzioOtelConfig.yaml.
  • Keep Type as string and Data type as text.
  • In the Value field, use the following configuration as a starting point, adjusting values as needed for your environment:
receivers:
  awsxray:
    endpoint: '0.0.0.0:2000'
    transport: udp
  otlp:
    protocols:
      grpc:
        endpoint: '0.0.0.0:4317'
      http:
        endpoint: '0.0.0.0:4318'
  awsecscontainermetrics:
  fluentforward:
    endpoint: 'unix:///var/run/fluent.sock'
processors:
  batch:
    send_batch_size: 10000
    timeout: 1s
  tail_sampling:
    policies:
      - name: policy-errors
        type: status_code
        status_code:
          status_codes:
            - ERROR
      - name: policy-slow
        type: latency
        latency:
          threshold_ms: 1000
      - name: policy-random-ok
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
  transform/log_type:
    error_mode: ignore
    log_statements:
      - context: resource
        statements:
          - set(resource.attributes["type"], "ecs-fargate") where resource.attributes["type"] == nil
exporters:
  logzio/traces:
    account_token: '${LOGZIO_TRACE_TOKEN}'
    region: '${LOGZIO_REGION}'
  prometheusremotewrite:
    endpoint: '${LOGZIO_LISTENER}'
    external_labels:
      aws_env: ecs-fargate
    headers:
      Authorization: 'Bearer ${LOGZIO_METRICS_TOKEN}'
    resource_to_telemetry_conversion:
      enabled: true
    target_info:
      enabled: false
  logzio/logs:
    account_token: '${LOGZIO_LOGS_TOKEN}'
    region: '${LOGZIO_REGION}'
service:
  pipelines:
    traces:
      receivers:
        - awsxray
        - otlp
      processors:
        - batch
      exporters:
        - logzio/traces
    metrics:
      receivers:
        - otlp
        - awsecscontainermetrics
      processors:
        - batch
      exporters:
        - prometheusremotewrite
    logs:
      receivers:
        - fluentforward
        - otlp
      processors:
        - transform/log_type
        - batch
      exporters:
        - logzio/logs
  telemetry:
    logs:
      level: info

Save the new SSM parameter and keep its ARN handy - you’ll need it for the next step.
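If you prefer to script this step, the parameter can also be created with the AWS CLI. A minimal sketch, assuming the configuration above has been saved locally as otel-config.yaml:

```shell
# Create the SSM parameter holding the collector configuration
# (otel-config.yaml is the YAML from above, saved locally)
aws ssm put-parameter \
  --name "logzioOtelConfig.yaml" \
  --type String \
  --value file://otel-config.yaml

# Print the parameter's ARN; you'll need it in the next step
aws ssm get-parameter \
  --name "logzioOtelConfig.yaml" \
  --query Parameter.ARN \
  --output text
```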

2. Grant ECS Task Access to SSM

Create an IAM role to allow the ECS task to access the SSM Parameter.

Go to IAM > Policies.

Click Create policy, choose the JSON tab under Specify permissions, and paste the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameters",
      "Resource": [
        "<ARN_OF_SECRET_PARAMETER_FROM_STEP_1>"
      ]
    }
  ]
}

Create the policy and give it a name (e.g., LogzioOtelSSMReadAccess).

Go to IAM > Roles and either:

  • Attach the new policy to your existing ECS task role, or
  • Create a new IAM role for the ECS task and attach the policy during setup.

If you created a new role, save its ARN — you’ll need it in the next step.
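The same policy creation and attachment can be sketched with the AWS CLI; here policy.json is the JSON document above, and the policy and role names are examples:

```shell
# Create the read-only policy for the SSM parameter
aws iam create-policy \
  --policy-name LogzioOtelSSMReadAccess \
  --policy-document file://policy.json

# Attach it to your ECS task role
aws iam attach-role-policy \
  --role-name <ECS_TASK_ROLE_NAME> \
  --policy-arn "arn:aws:iam::<AWS_ACCOUNT_ID>:policy/LogzioOtelSSMReadAccess"
```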

3. Add OpenTelemetry to Task Definition

Update your existing ECS task definitions to include the OpenTelemetry Collector by:

  • Adding a FireLens log configuration to each existing container you want to collect logs from
  • Adding a sidecar container running the OpenTelemetry Collector
{
  "family": "<YOUR_WANTED_TASK_DEFINITION_FAMILY_NAME>",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ecsTaskExecutionRole",
  "taskRoleArn": "<TASK_ROLE_ARN_FROM_STEP_2>",
  "containerDefinitions": [
    {
      … <Existing container definition>,
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "opentelemetry"
        }
      }
    },
    {
      "name": "otel-collector",
      "image": "otel/opentelemetry-collector-contrib",
      "cpu": 0,
      "essential": false,
      "firelensConfiguration": {
        "type": "fluentbit"
      },
      "command": [
        "--config",
        "env:OTEL_CONFIG"
      ],
      "environment": [
        {
          "name": "LOGZIO_LOGS_TOKEN",
          "value": "${LOGZIO_LOGS_TOKEN}"
        },
        {
          "name": "LOGZIO_METRICS_TOKEN",
          "value": "${LOGZIO_METRICS_TOKEN}"
        },
        {
          "name": "LOGZIO_TRACE_TOKEN",
          "value": "${LOGZIO_TRACE_TOKEN}"
        },
        {
          "name": "LOGZIO_REGION",
          "value": "${LOGZIO_REGION}"
        },
        {
          "name": "LOGZIO_LISTENER",
          "value": "${LOGZIO_LISTENER}"
        }
      ],
      "secrets": [
        {
          "name": "OTEL_CONFIG",
          "valueFrom": "logzioOtelConfig.yaml"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/otel-collector",
          "awslogs-region": "<AWS-REGION>",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}

The awslogs logConfiguration on the otel-collector container is optional; keep it if you want the collector's own logs in CloudWatch for debugging and troubleshooting. Note that firelensConfiguration belongs inside the container definition acting as the log router (here, otel-collector).
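After filling in the placeholders, the revised definition can be registered and rolled out with the AWS CLI; the cluster and service names below are examples:

```shell
# Register a new revision of the task definition
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# Redeploy the service so tasks pick up the new revision
aws ecs update-service \
  --cluster <CLUSTER_NAME> \
  --service <SERVICE_NAME> \
  --task-definition <YOUR_WANTED_TASK_DEFINITION_FAMILY_NAME> \
  --force-new-deployment
```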
note

If you’d like to use a centralized log collection setup instead of the OpenTelemetry Collector, reach out to Logz.io Support or your Customer Success Manager for guidance.

Send Logs and Metrics via Kinesis Firehose

This project deploys instrumentation that ships CloudWatch logs to Logz.io through a Firehose delivery stream. It uses a CloudFormation template to create a stack that deploys:

  • A Firehose delivery stream with Logz.io as the stream's destination.
  • A Lambda function that adds subscription filters to CloudWatch log groups, as defined by the user's input.
  • Roles, log groups, and other resources that are necessary for this instrumentation.
note

If you want to send logs from specific log groups, use customLogGroups instead of services: specifying services automatically sends all logs from those services, regardless of any custom log groups you define.

Auto Deploy via CloudFormation

To deploy this project, click the button that matches the region you wish to deploy your stack to:

A Deploy to AWS button is available for each of the following regions:

us-east-1, us-east-2, us-west-1, us-west-2, eu-central-1, eu-central-2, eu-north-1, eu-west-1, eu-west-2, eu-west-3, eu-south-1, eu-south-2, sa-east-1, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-south-1, ap-south-2, ap-southeast-1, ap-southeast-2, ap-southeast-3, ap-southeast-4, ap-east-1, ca-central-1, ca-west-1, af-south-1, me-south-1, me-central-1, il-central-1

Set Stack Parameters

Specify the stack details as per the table below, check the acknowledgment checkboxes, and select Create stack.

Parameter | Description | Required/Default
--- | --- | ---
logzioToken | The token of the account you want to ship logs to. | Required
logzioListener | Listener host. | Required
logzioType | The log type you'll use with this Lambda. This can be a built-in log type or a custom log type. | logzio_firehose
services | A comma-separated list of services you want to collect logs from. Supported options: apigateway, rds, cloudhsm, cloudtrail, codebuild, connect, elasticbeanstalk, ecs, eks, aws-glue, aws-iot, lambda, macie, amazon-mq. | -
customLogGroups | A comma-separated list of custom log groups to collect logs from, or the ARN of the secret parameter (explained below) storing the log groups list if it exceeds 4,096 characters. You can also match a prefix of log group names with a trailing wildcard (e.g., prefix*). | -
useCustomLogGroupsFromSecret | If your customLogGroups list exceeds 4,096 characters, set this to true and configure customLogGroups as described below. | false
triggerLambdaTimeout | The number of seconds Lambda allows the trigger function to run before stopping it. | 60
triggerLambdaMemory | The trigger function's allocated memory, in MB (CPU is allocated in proportion to memory). | 512
triggerLambdaLogLevel | Log level for the Lambda function. One of: debug, info, warn, error, fatal, panic. | info
httpEndpointDestinationIntervalInSeconds | The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. | 60
httpEndpointDestinationSizeInMBs | The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. | 5
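The stack can also be created from the CLI. A sketch, assuming <TEMPLATE_URL> is the template behind your region's Deploy to AWS button and the parameter values are examples:

```shell
# Create the Firehose shipping stack (stack name is an example)
aws cloudformation create-stack \
  --stack-name logzio-firehose-logs \
  --template-url "<TEMPLATE_URL>" \
  --parameters \
    ParameterKey=logzioToken,ParameterValue=<LOGZIO_SHIPPING_TOKEN> \
    ParameterKey=logzioListener,ParameterValue=<LOGZIO_LISTENER_HOST> \
    ParameterKey=services,ParameterValue=lambda \
  --capabilities CAPABILITY_NAMED_IAM
```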
caution

AWS limits every log group to have up to 2 subscription filters. If your chosen log group already has 2 subscription filters, the trigger function won't be able to add another one.

Handle Large Log Group Lists

If your customLogGroups list exceeds the 4,096-character limit, follow these steps:

  1. Open AWS Secrets Manager
  2. Click Store a new secret
    • Choose Other type of secret
    • For the key, use logzioCustomLogGroups
    • For the value, store your comma-separated custom log groups list
    • Name your secret, for example LogzioCustomLogGroups
    • Copy the new secret's ARN
  3. In your stack, set:
    • customLogGroups to the secret ARN you copied in step 2
    • useCustomLogGroupsFromSecret to true
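Step 2 can also be sketched with the AWS CLI (the secret name and log group names are examples; the key must be logzioCustomLogGroups):

```shell
# Store the oversized log group list in Secrets Manager;
# the command's output includes the secret's ARN for the stack parameter
aws secretsmanager create-secret \
  --name LogzioCustomLogGroups \
  --secret-string '{"logzioCustomLogGroups":"/aws/lambda/app-a,/aws/lambda/app-b"}'
```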

Send Logs and Traces via a Centralized Container

1. Create an SSM Parameter to store the OTEL configuration

Go to AWS Systems Manager > Parameter Store:

  • Set the Name to logzioOtelConfig.yaml.
  • Keep Type as string and Data type as text.
  • In the Value field, use the following configuration as a starting point, adjusting values as needed for your environment:
receivers:
  awsxray:
    endpoint: 0.0.0.0:2000
    transport: udp
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
  fluentforward:
    endpoint: 0.0.0.0:24284

processors:
  batch:
    send_batch_size: 10000
    timeout: 1s
  tail_sampling:
    policies:
      [
        {
          name: policy-errors,
          type: status_code,
          status_code: {status_codes: [ERROR]}
        },
        {
          name: policy-slow,
          type: latency,
          latency: {threshold_ms: 1000}
        },
        {
          name: policy-random-ok,
          type: probabilistic,
          probabilistic: {sampling_percentage: 10}
        }
      ]
  transform/log_type:
    error_mode: ignore
    log_statements:
      - context: resource
        statements:
          - set(resource.attributes["type"], "ecs-fargate") where resource.attributes["type"] == nil

exporters:
  logzio/traces:
    account_token: ${LOGZIO_TRACE_TOKEN}
    region: ${LOGZIO_REGION}
  logzio/logs:
    account_token: ${LOGZIO_LOGS_TOKEN}
    region: ${LOGZIO_REGION}

service:
  pipelines:
    traces:
      receivers: [ awsxray, otlp ]
      processors: [ batch ]
      exporters: [ logzio/traces ]
    logs:
      receivers: [ fluentforward, otlp ]
      processors: [ transform/log_type, batch ]
      exporters: [ logzio/logs ]
  telemetry:
    logs:
      level: "info"

Save the new SSM parameter and keep its ARN handy - you’ll need it for the next step.

2. Create an IAM Role to allow the ECS task to access the SSM Parameter

Go to IAM > Policies.

Click Create policy, choose the JSON tab under Specify permissions, and paste the following:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": [
"<ARN_OF_SECRET_PARAMETER_FROM_STEP_1>"
]
}
]
}

Create the policy and give it a name (e.g., LogzioOtelSSMReadAccess).

Go to IAM > Roles and either:

  • Attach the new policy to your existing ECS task role, or
  • Create a new IAM role for the ECS task and attach the policy during setup.

If you created a new role, save its ARN — you’ll need it in the next step.

3. Create a Centralized Collector Container

Create a new ECS task definition for the OpenTelemetry Collector:

{
  "family": "...",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/ecsTaskExecutionRole",
  "taskRoleArn": "<TASK_ROLE_ARN_FROM_STEP_2>",
  "containerDefinitions": [
    {
      "name": "otel-collector",
      "image": "otel/opentelemetry-collector-contrib",
      "cpu": 0,
      "portMappings": [
        {
          "name": "otel-collector-4317",
          "hostPort": 4317,
          "containerPort": 4317,
          "protocol": "tcp",
          "appProtocol": "grpc"
        },
        {
          "name": "otel-collector-4318",
          "hostPort": 4318,
          "containerPort": 4318,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "firelensConfiguration": {
        "type": "fluentbit"
      },
      "command": [
        "--config",
        "env:OTEL_CONFIG"
      ],
      "environment": [
        {
          "name": "LOGZIO_LOGS_TOKEN",
          "value": "${LOGZIO_LOGS_TOKEN}"
        },
        {
          "name": "LOGZIO_METRICS_TOKEN",
          "value": "${LOGZIO_METRICS_TOKEN}"
        },
        {
          "name": "LOGZIO_TRACE_TOKEN",
          "value": "${LOGZIO_TRACE_TOKEN}"
        },
        {
          "name": "LOGZIO_REGION",
          "value": "${LOGZIO_REGION}"
        },
        {
          "name": "LOGZIO_LISTENER",
          "value": "${LOGZIO_LISTENER}"
        }
      ],
      "secrets": [
        {
          "name": "OTEL_CONFIG",
          "valueFrom": "logzioOtelConfig.yaml"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/otel-collector",
          "awslogs-region": "<AWS-REGION>",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}

The awslogs logConfiguration is optional; keep it if you want the collector's own logs in CloudWatch for debugging and troubleshooting. Since the collector is the only container in this task, it is marked "essential": true (ECS requires at least one essential container per task).

When enabling debugging, you may need to create a CloudWatch log group for your OpenTelemetry Collector.

You can do this via the AWS Console or using the AWS CLI:

aws logs create-log-group --log-group-name /ecs/otel-collector
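To keep the centralized collector running, you would typically launch the task definition as a long-lived ECS service. A sketch with example names (the security group must allow inbound 4317, 4318, and 24284 from your application tasks):

```shell
# Launch the collector task definition as a Fargate service
aws ecs create-service \
  --cluster <CLUSTER_NAME> \
  --service-name otel-collector \
  --task-definition <COLLECTOR_TASK_FAMILY> \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<SUBNET_ID>],securityGroups=[<SECURITY_GROUP_ID>]}"
```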

4. Enable FireLens for Logs and Traces

To enable telemetry collection, you’ll need to add a FireLens log configuration to each relevant container in your ECS task definition.

For Logs

For every container you want to collect logs from, add the following logConfiguration block to its task definition:

 "logConfiguration": {
"logDriver": "awsfirelens",
"options": {
"Name": "opentelemetry",
"Host": "<CENTRAL_COLLECTOR_HOST_OR_IP>",
"Port": "24284",
"TLS": "off"
}
},

For Traces

For each application you want to collect traces from, configure its instrumentation to send trace data to the centralized OpenTelemetry Collector container's endpoint.
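If your applications use an OpenTelemetry SDK or auto-instrumentation, this usually amounts to pointing the standard OTLP environment variables at the collector; for example (the service name is an example):

```shell
# Standard OpenTelemetry SDK environment variables
export OTEL_EXPORTER_OTLP_ENDPOINT="http://<CENTRAL_COLLECTOR_HOST_OR_IP>:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_SERVICE_NAME="my-service"
```

In ECS, these would go in each application container's "environment" list rather than a shell profile.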