With this integration, you can collect logs from an S3 bucket and forward them to Logz.io.
### Create new stack
To deploy this project, click the button that matches the region you wish to deploy your stack to:
### Specify template
Keep the default setting in the Create stack screen and select Next.
### Specify stack details
Specify the stack details as per the table below and select Next.
| Parameter | Description | Required/Default |
|---|---|---|
| bucketName | Name of the bucket you wish to fetch logs from. Will be used for the IAM policy. | Required |
| logzioListener | The Logz.io listener URL for your region. (For more details, see the regions page.) | Required |
| logzioToken | Your Logz.io log shipping token. | Required |
| logLevel | Log level for the Lambda function. Can be one of: `debug`, `info`, `warn`, `error`, `fatal`, `panic`. | Default: `info` |
| logType | The log type you’ll use with this Lambda. This is shown in your logs under the `type` field in Kibana. Logz.io applies parsing based on `type`. | Default: `s3_hook` |
### Configure stack options
Specify the Key and Value parameters for the Tags (optional) and select Next.
### Review
Confirm that you acknowledge that AWS CloudFormation might create IAM resources and select Create stack.
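If you prefer to automate this flow, the same stack can be created with the AWS SDK instead of the console. Below is a minimal sketch using Python (boto3); the stack name, region, template URL, and parameter values are placeholders, but the parameter keys match the table above, and `Capabilities` stands in for the IAM acknowledgment checkbox.

```python
# Minimal sketch: create the stack with boto3 instead of the console.
# The TemplateURL is a placeholder -- use the template behind the deploy
# button for your region.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

cloudformation.create_stack(
    StackName="logzio-s3-hook",  # any stack name you like
    TemplateURL="https://example.com/s3-hook/template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "bucketName", "ParameterValue": "my-log-bucket"},
        {"ParameterKey": "logzioListener", "ParameterValue": "https://listener.logz.io:8071"},
        {"ParameterKey": "logzioToken", "ParameterValue": "<<LOG-SHIPPING-TOKEN>>"},
        {"ParameterKey": "logLevel", "ParameterValue": "info"},
        {"ParameterKey": "logType", "ParameterValue": "s3_hook"},
    ],
    # Equivalent to the console checkbox acknowledging IAM resource creation.
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)
```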
### Add trigger
Give the stack a few minutes to be deployed.
Once your Lambda function is ready, you’ll need to add a trigger manually. This is due to CloudFormation limitations.
Go to the function’s page, and click on Add trigger.
Choose S3 as a trigger, and fill in:
- Bucket: your bucket name.
- Event type: choose All object create events.
- Prefix and Suffix: leave both empty.
Confirm the checkbox, and click Add.
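If you’d rather script this step too, the sketch below shows the equivalent wiring with boto3. The function name (`s3-hook`) and bucket name are assumptions; check your stack’s resources for the actual function name.

```python
# Sketch: add the S3 trigger with boto3 instead of the console. The
# function name ("s3-hook") and bucket are assumptions -- check your
# stack's resources for the real function name.
import boto3

aws_lambda = boto3.client("lambda", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

bucket = "my-log-bucket"
function_name = "s3-hook"
function_arn = aws_lambda.get_function(FunctionName=function_name)["Configuration"]["FunctionArn"]

# Let S3 invoke the function (the console's Add trigger does this for you).
aws_lambda.add_permission(
    FunctionName=function_name,
    StatementId="logzio-s3-hook-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{bucket}",
)

# "All object create events", no Prefix/Suffix filter. Note: this call
# replaces the bucket's existing notification configuration.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": function_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```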
### Send logs
That’s it. Your function is configured. Once you upload new files to your bucket, they will trigger the function, and the logs will be sent to your Logz.io account.
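To verify the pipeline end to end, you can upload a test file yourself. A minimal sketch with boto3, where the file and bucket names are placeholders:

```python
# Sketch: upload a test file to confirm the trigger fires. File and
# bucket names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file("app.log", "my-log-bucket", "test/app.log")
```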
### Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open Kibana.
If you still don’t see your logs, see log shipping troubleshooting.