
AWS Amplify


For a much easier and more efficient way to collect and send telemetry, consider using the telemetry collector.


Deploy this integration to forward your AWS Amplify logs to Logz.io using AWS Firehose.

Auto-deploy the Stack in the relevant region

This integration deploys a Firehose connection that forwards logs from your AWS services to Logz.io. To deploy this project, click the button that matches the region where you want to deploy your Stack:

- us-east-1: Deploy to AWS
- us-east-2: Deploy to AWS
- us-west-1: Deploy to AWS
- us-west-2: Deploy to AWS
- eu-central-1: Deploy to AWS
- eu-north-1: Deploy to AWS
- eu-west-1: Deploy to AWS
- eu-west-2: Deploy to AWS
- eu-west-3: Deploy to AWS
- sa-east-1: Deploy to AWS
- ap-northeast-1: Deploy to AWS
- ap-northeast-2: Deploy to AWS
- ap-northeast-3: Deploy to AWS
- ap-south-1: Deploy to AWS
- ap-southeast-1: Deploy to AWS
- ap-southeast-2: Deploy to AWS
- ca-central-1: Deploy to AWS

Specify stack details

Specify the stack details as per the table below, select the acknowledgement checkboxes, and select Create stack.

| Parameter | Description | Required/Default |
| --- | --- | --- |
| logzioToken | The token of the account you want to ship logs to. | Required |
| logzioListener | Listener host. | Required |
| logzioType | The log type you'll use with this Lambda. This can be a built-in log type or a custom log type. | `logzio_firehose` |
| services | A comma-separated list of services you want to collect logs from. Supported options are: apigateway, rds, cloudhsm, cloudtrail, codebuild, connect, elasticbeanstalk, ecs, eks, aws-glue, aws-iot, lambda, macie, amazon-mq. | - |
| customLogGroups | A comma-separated list of custom log groups you want to collect logs from. | - |
| triggerLambdaTimeout | The number of seconds that Lambda allows the trigger function to run before stopping it. | 60 |
| triggerLambdaMemory | The trigger function's allocated memory, in MB (CPU is allocated in proportion to the memory configured). | 512 |
| triggerLambdaLogLevel | Log level for the Lambda function. One of: debug, info, warn, error, fatal, panic. | info |
| httpEndpointDestinationIntervalInSeconds | The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. | 60 |
| httpEndpointDestinationSizeInMBs | The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. | 5 |
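If you prefer to deploy the stack programmatically rather than through the console, the parameters above map directly onto a boto3-style `Parameters` list. The sketch below is illustrative: the helper name is ours, the token and listener values are placeholders, and the defaults are taken from the table.

```python
# Sketch: build a boto3-style Parameters list for cloudformation.create_stack().
# Parameter keys come from the table above; token/listener values are placeholders.

def build_stack_parameters(token, listener, services=None, custom_log_groups=None):
    params = {
        "logzioToken": token,
        "logzioListener": listener,
        "logzioType": "logzio_firehose",  # default from the table
        "triggerLambdaTimeout": "60",
        "triggerLambdaMemory": "512",
        "triggerLambdaLogLevel": "info",
        "httpEndpointDestinationIntervalInSeconds": "60",
        "httpEndpointDestinationSizeInMBs": "5",
    }
    if services:
        # services is a comma-separated list, per the table
        params["services"] = ",".join(services)
    if custom_log_groups:
        params["customLogGroups"] = ",".join(custom_log_groups)
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()]

parameters = build_stack_parameters(
    token="<<LOG-SHIPPING-TOKEN>>",
    listener="<<LISTENER-HOST>>",
    services=["apigateway", "lambda"],
)
```

You would pass this list as the `Parameters` argument of `cloudformation.create_stack()` along with the stack's template.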

AWS limits every log group to have up to 2 subscription filters. If your chosen log group already has 2 subscription filters, the trigger function won't be able to add another one.
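Before deploying, you can check how many subscription filters a log group already has (for example with `aws logs describe-subscription-filters --log-group-name <name>`). A minimal sketch of the limit check, with the helper name being our own:

```python
# Sketch: check whether a log group can take another subscription filter.
# AWS allows at most 2 subscription filters per log group.

MAX_SUBSCRIPTION_FILTERS = 2

def can_add_subscription_filter(existing_filter_names):
    """Return True if the log group is below the 2-filter limit."""
    return len(existing_filter_names) < MAX_SUBSCRIPTION_FILTERS
```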

Send logs

Give the stack a few minutes to be deployed.

Once new logs are added to your chosen log group, they will be sent to your Logz.io account.


If you've used the services field, wait 6 minutes before creating new log groups for your chosen services. This delay is due to cold start and custom resource invocation, which can cause the Lambda to behave unexpectedly.

Check for your logs

Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.

If you still don't see your logs, see log shipping troubleshooting.

This is an AWS Lambda function that collects Amplify access logs and sends them to Logz.io in bulk over HTTP.
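Bulk shipping here means batching many log events into a single HTTP request body instead of one request per event. The sketch below shows the general idea only; the payload shape (newline-delimited JSON with a `type` field) and the helper name are illustrative assumptions, not the shipper's exact wire format.

```python
import json

# Sketch of bulk shipping: batch log events into one newline-delimited JSON
# body, to be POSTed to the listener in a single request. Payload shape is
# an illustrative assumption, not the shipper's exact wire format.

def build_bulk_body(events, log_type="logzio_amplify_access_lambda"):
    lines = []
    for event in events:
        record = dict(event)
        record["type"] = log_type  # tag every record with the log type
        lines.append(json.dumps(record))
    return "\n".join(lines)

body = build_bulk_body([
    {"message": "GET /index.html 200"},
    {"message": "GET /missing 404"},
])
# A single request would then send `body` to https://<<LISTENER-HOST>>:8071
# using an HTTP client such as urllib.request.
```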

Configuration with a Lambda function

Create a new Lambda function

  1. Open the AWS Lambda Console, and click Create function.
  2. Choose Author from scratch.
  3. In Name, add the log type to the name of the function.
  4. In Runtime, choose Python 3.9.
  5. Click Create Function.

After a few moments, you'll see configuration options for your Lambda function. You'll need this page later on, so keep it open.

Zip the source files

Clone the CloudWatch Logs Shipper - Lambda project from GitHub to your computer, and zip the Python files in the src/ folder as follows:

```shell
# Repository URL assumed from the project path below
git clone https://github.com/logzio/logzio_aws_serverless.git \
&& cd logzio_aws_serverless/python3/amplify/ \
&& mkdir -p dist/python3/shipper \
&& cp -r ../shipper/ dist/python3/shipper \
&& mkdir -p dist/python3/custom_logger \
&& cp -r ../custom_logger/ dist/python3/custom_logger \
&& cp -r src/* dist \
&& cd dist/ \
&& zip -r logzio-amplify .
```

Upload the zip file and set environment variables

  1. In the Code source section, select Upload from > .zip file.
  2. Click Upload, and choose the zip file you created earlier (logzio-amplify.zip).
  3. Click Save.
  4. Navigate to Configuration > Environment variables.
  5. Click Edit.
  6. Click Add environment variable.
  7. Fill in the Key and Value fields for each variable as per the table below:
| Variable | Description |
| --- | --- |
| TOKEN (Required) | The token of the account you want to ship to. |
| LISTENER_URL (Required) | Determines protocol, listener host, and port. For example, `https://<<LISTENER-HOST>>:8071`. Replace `<<LISTENER-HOST>>` with your region's listener host. Use port 8070 for HTTP or 8071 for HTTPS. For more information on finding your account's region, see Account region. |
| AMPLIFY_DOMAIN (Required) | The Amplify domain URL, found in the Amplify admin dashboard in General under Production branch URL. |
| TYPE (Default: `logzio_amplify_access_lambda`) | The log type you'll use with this Lambda. This can be a type that supports default parsing, or a custom log type. You'll need to create a new Lambda for each log type you use. |
| AMPLIFY_APP_ID (Required) | The app ID, found in your Amplify admin dashboard in General under the App ARN field: `arn:aws:amplify:REGION:AWS_ID:apps/APP_ID`. |
| TIMEOUT | Period, in minutes, over which the Lambda function fetches Amplify logs. |
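Inside the handler, these variables would typically be read and validated once at startup. The sketch below is our own illustration (the helper name and the assumed 5-minute TIMEOUT fallback are not from the shipper's source); the TYPE default follows the table.

```python
import os

# Sketch: read the environment variables from the table above inside the
# Lambda. Helper name is illustrative; TYPE default follows the table, and
# the 5-minute TIMEOUT fallback is an assumption for this sketch.

def read_config():
    required = ["TOKEN", "LISTENER_URL", "AMPLIFY_DOMAIN", "AMPLIFY_APP_ID"]
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise ValueError(f"Missing required environment variables: {missing}")
    return {
        "token": os.environ["TOKEN"],
        "listener_url": os.environ["LISTENER_URL"],
        "amplify_domain": os.environ["AMPLIFY_DOMAIN"],
        "amplify_app_id": os.environ["AMPLIFY_APP_ID"],
        "log_type": os.environ.get("TYPE", "logzio_amplify_access_lambda"),
        "timeout_minutes": int(os.environ.get("TIMEOUT", "5")),
    }
```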

Set the EventBridge (CloudWatch Events) trigger

  1. Find the Add triggers list (left side of the Designer panel) and choose EventBridge (CloudWatch Events) from this list.

  2. If you don't have a pre-defined schedule type (e.g., 1min), click Create new rule in Rule.

  3. In Rule name, enter a name to uniquely identify your rule.

  4. In Rule description, if required, provide an optional description for your rule.

  5. In Rule type, choose a schedule expression that matches the TIMEOUT environment variable (e.g., 5 minutes).

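The steps above tie the rule's schedule to the TIMEOUT variable so the function fires once per fetch window. A small sketch of deriving the EventBridge rate expression (the helper is ours; the singular/plural rule is EventBridge's rate-expression syntax):

```python
# Sketch: derive the EventBridge schedule expression from TIMEOUT so the
# rule fires once per fetch window. Rate expressions use a singular unit
# for 1 ("rate(1 minute)") and a plural unit otherwise.

def schedule_expression(timeout_minutes):
    unit = "minute" if timeout_minutes == 1 else "minutes"
    return f"rate({timeout_minutes} {unit})"
```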
Update Permissions for Lambda Function

  1. Go to Configuration in your Lambda function and select the Permissions tab.
  2. Click the role name (for example, lambda-basic). This opens the role under IAM > Roles.
  3. On the role page, in the Permissions tab, open the Add permissions dropdown and click Create inline policy. This opens the Create policy page.
  4. On the Create Policy page, select the JSON tab.
  5. Fill in JSON with your parameters as follows:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "amplify:GenerateAccessLogs"
      ],
      "Resource": "arn:aws:amplify:AWS_REGION:XXX66029XXXX:apps/XXXXdn0mprXXXX/accesslogs/*"
    }
  ]
}
```
  • Replace AWS_REGION with the AWS region of your Amplify App (e.g.,us-west-2).
  • Replace XXX66029XXXX with your AWS Account ID.
  • Replace XXXXdn0mprXXXX with the AWS Amplify App ID.
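If you prefer to generate the policy document with your values substituted, a minimal sketch (the helper name is ours, and `amplify:GenerateAccessLogs` is assumed here as the permission the function needs to fetch access logs):

```python
import json

# Sketch: build the inline policy with your region, account ID, and Amplify
# app ID filled in. amplify:GenerateAccessLogs is assumed as the action the
# function needs to fetch access logs.

def build_access_log_policy(region, account_id, app_id):
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["amplify:GenerateAccessLogs"],
            "Resource": f"arn:aws:amplify:{region}:{account_id}:apps/{app_id}/accesslogs/*",
        }],
    }, indent=2)
```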

Check for your logs

Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.

If you still don't see your logs, see log shipping troubleshooting.