This integration is currently released as a beta version.
AWS Control Tower is a tool that provides a top-level summary of the policies applied to your AWS environment. This integration sends logs from the S3 buckets that AWS Control Tower automatically creates in your AWS environment.
Configuration
Deploy an S3 Hook Lambda function
The stacks must be deployed in the same region as the S3 buckets.
This stack sends logs as they get added to the bucket.
Create new stack
To deploy this project, click the button that matches the region you wish to deploy your stack to:
Specify stack details
Specify the stack details as per the table below, check the checkboxes and select Create stack.
Parameter | Description | Required/Default |
---|---|---|
bucketName | Name of the bucket you wish to fetch logs from. This will be used for the IAM policy. | Required |
logzioListener | The Logz.io listener URL for your region. For more details, see the regions page. | Required |
logzioToken | Your Logz.io log shipping token. | Required |
logLevel | Log level for the Lambda function. Can be one of: `debug`, `info`, `warn`, `error`, `fatal`, `panic`. | Default: `info` |
logType | The log type you’ll use with this Lambda. This is shown in your logs under the `type` field in OpenSearch Dashboards. Logz.io applies parsing based on the log type. | Default: `s3_hook` |
pathsRegexes | Comma-separated list of regexes that match the paths you’d like to pull logs from. | - |
pathToFields | Fields from the path to your logs directory that you want to add to the logs. For example, `org-id/aws-type/account-id` will add the fields `org-id`, `aws-type`, and `account-id` to the logs fetched from the directory that this path refers to. See Advanced settings for more on this. | - |
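If you prefer to script the deployment instead of using the console button, a minimal boto3 sketch might look like the following. The stack name, template URL, and parameter values are placeholders, and the parameter keys follow the table above; adjust everything to match the template you actually deploy:

```python
import boto3

# The stack must be created in the same region as the S3 bucket.
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

cloudformation.create_stack(
    StackName="logzio-s3-hook",                      # placeholder stack name
    TemplateURL="https://example.com/s3-hook.yaml",  # placeholder: use the template behind the region button
    Parameters=[
        {"ParameterKey": "logzioListener", "ParameterValue": "https://listener.logz.io:8071"},
        {"ParameterKey": "logzioToken", "ParameterValue": "<<LOG-SHIPPING-TOKEN>>"},
        {"ParameterKey": "bucketName", "ParameterValue": "my-control-tower-logs-bucket"},
        {"ParameterKey": "logType", "ParameterValue": "s3_hook"},
    ],
    # The stack creates IAM resources, so the relevant capabilities must be acknowledged.
    # This corresponds to the "check the checkboxes" step in the console.
    Capabilities=["CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"],
)
```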
Add trigger
Give the stack a few minutes to be deployed.
Once your Lambda function is ready, you’ll need to manually add a trigger. This is due to CloudFormation limitations.
Go to the function’s page, and click on Add trigger.
Then, choose S3 as a trigger, and fill in:
- Bucket: your bucket name.
- Event type: choose All object create events.
- Prefix and Suffix: leave both empty.

Confirm the checkbox, and click Add.
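The same trigger can also be created with the AWS APIs instead of the console. The sketch below shows the general idea with boto3; the function ARN, bucket name, and statement id are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:s3-hook"  # placeholder ARN
bucket = "my-control-tower-logs-bucket"                                  # placeholder bucket

# Allow S3 to invoke the S3 Hook function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="s3-hook-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{bucket}",
)

# Invoke the function on every object-created event, with no prefix or suffix filter.
# Note: this call replaces any existing notification configuration on the bucket.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": function_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```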
Send logs
That’s it. Your function is configured. Once you upload new files to your bucket, it will trigger the function, and the logs will be sent to your Logz.io account.
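To verify the setup end to end, you can upload a test object and then look for it in your Logz.io account; the bucket name and key below are placeholders:

```python
import boto3

s3 = boto3.client("s3")
# Uploading any new object fires the "All object create events" trigger configured above.
s3.put_object(
    Bucket="my-control-tower-logs-bucket",
    Key="test/sample.log",
    Body=b"hello from the s3 hook test\n",
)
```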
Deploy the Control Tower stack
This stack creates a Lambda function, an EventBridge rule, and IAM roles to automatically add triggers to the S3 Hook Lambda function as Control Tower creates new buckets.
The stacks must be deployed in the same region as the S3 buckets.
To deploy this project, click the button that matches the region you wish to deploy your stack to:
Specify stack details
Specify the stack details as per the table below, check the checkboxes, and select Create stack.
Parameter | Description | Required/Default |
---|---|---|
logLevel | Log level for the Lambda function. Can be one of: `debug`, `info`, `warn`, `error`, `fatal`, `panic`. | Default: `info` |
s3HookArn | The ARN of your S3 Hook Lambda function. | Required |
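You can copy the ARN from the Lambda console, or look it up programmatically. A quick sketch, where the function name is a placeholder for the function created by the S3 Hook stack:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")
# "s3-hook" is a placeholder; use the name of your S3 Hook Lambda function.
response = lambda_client.get_function(FunctionName="s3-hook")
print(response["Configuration"]["FunctionArn"])  # value to use for the s3HookArn parameter
```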
It can take a few minutes after stack creation for the EventBridge rule to be triggered.
If you want to delete the S3 Hook stack, you’ll need to detach the “LambdaAccessBuckets” policy first.
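A sketch of that cleanup with boto3; the policy ARN depends on how the stack named it in your account, so treat the values below as placeholders:

```python
import boto3

iam = boto3.client("iam")

policy_arn = "arn:aws:iam::123456789012:policy/LambdaAccessBuckets"  # placeholder account id and policy name
# Detach the policy from every role it is attached to before deleting the stack.
for role in iam.list_entities_for_policy(PolicyArn=policy_arn)["PolicyRoles"]:
    iam.detach_role_policy(RoleName=role["RoleName"], PolicyArn=policy_arn)
```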
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.
If you still don’t see your logs, see log shipping troubleshooting.
Advanced settings
Automatic parsing
S3 Hook will automatically parse logs in the following cases:
- The object’s path contains the phrase `cloudtrail` (case insensitive).
Filtering files
If there are specific paths within the bucket that you want to pull logs from, you can use the `pathsRegexes` variable. This variable should hold a comma-separated list of regexes that match the paths you wish to extract logs from.
Note: This will still trigger your Lambda function every time a new object is added to your bucket. However, if the key does not match the regexes, the function will quit and won’t send the logs.
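Conceptually, the filtering behaves like the sketch below. This is an illustration with made-up regex values, not the integration’s actual code:

```python
import re

# Hypothetical value for the pathsRegexes parameter.
paths_regexes = r"AWSLogs/.*/CloudTrail/,AWSLogs/.*/elasticloadbalancing/"
patterns = [re.compile(p) for p in paths_regexes.split(",")]

def should_ship(object_key: str) -> bool:
    """Return True if the S3 object key matches at least one configured regex."""
    return any(p.search(object_key) for p in patterns)

print(should_ship("AWSLogs/123456789012/CloudTrail/us-east-1/2024/01/01/file.json.gz"))  # True
print(should_ship("AWSLogs/123456789012/Config/us-east-1/file.json.gz"))                 # False
```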
Adding object path as logs field
If you want to use your objects’ path as extra fields in your logs, you can do so by using `pathToFields`.
For example, say your objects are under the path `oi-3rfEFA4/AWSLogs/2378194514/file.log`, where `oi-3rfEFA4` is the org id, `AWSLogs` is the aws type, and `2378194514` is the account id. Setting `pathToFields` to the value `org-id/aws-type/account-id` will add the following fields to the logs: `org-id: oi-3rfEFA4`, `aws-type: AWSLogs`, `account-id: 2378194514`.
If you use `pathToFields`, you need to add a value for each subfolder in the path. Otherwise there will be a mismatch, and the logs will be sent without these fields.
This will override a field with the same key, if it exists.
In order for this feature to work, you need to set `pathToFields` from the root of the bucket.
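The mapping itself is straightforward; the sketch below illustrates the idea using the example path from above (it is an illustration, not the integration’s actual code):

```python
def fields_from_path(path_to_fields: str, object_key: str) -> dict:
    """Pair each field name from pathToFields with the corresponding folder in the object key."""
    field_names = path_to_fields.split("/")
    # Drop the file name; keep only the folders leading up to it.
    folders = object_key.split("/")[:-1]
    if len(field_names) != len(folders):
        # Mismatch: the logs would be sent without these fields.
        return {}
    return dict(zip(field_names, folders))

print(fields_from_path("org-id/aws-type/account-id", "oi-3rfEFA4/AWSLogs/2378194514/file.log"))
# {'org-id': 'oi-3rfEFA4', 'aws-type': 'AWSLogs', 'account-id': '2378194514'}
```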