Some AWS services can be configured to ship their logs to an S3 bucket, where Logz.io can fetch those logs directly.
The S3 API does not allow retrieval of object timestamps, so Logz.io must collect logs in alphabetical (lexicographic) order of their object keys. Keep the following notes in mind when configuring logging.
Make the prefix as specific as possible
The prefix is the part of your log path that remains constant across all logs. This can include folder structure and the beginning of the filename.
The log path after the prefix must sort in alphabetical order
We recommend starting the object name (after the prefix) with the Unix epoch time. The Unix epoch time is always increasing, ensuring we can always fetch your incoming logs.
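The naming scheme above can be sketched as follows. This is a minimal illustration, assuming a hypothetical helper `make_log_key` and an example prefix `my-app/logs/`; your actual prefix and file extension will differ.

```python
import time

def make_log_key(prefix: str) -> str:
    """Build an S3 object key whose name (after the constant prefix)
    starts with the Unix epoch time, so keys sort in upload order."""
    # 'prefix' is the constant part of the log path, e.g. folder structure.
    epoch = int(time.time())
    return f"{prefix}{epoch}.log"

key_a = make_log_key("my-app/logs/")
time.sleep(1)
key_b = make_log_key("my-app/logs/")
# A log written later sorts after an earlier one, as required.
assert key_a < key_b
```

Because epoch seconds are fixed-width decimal strings for the foreseeable future, their lexicographic order matches their chronological order.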
The size of each log file should not exceed 50 MB
To guarantee a successful file upload, make sure that each log file does not exceed 50 MB.
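A simple pre-upload check along these lines can enforce the limit. This is a sketch with a hypothetical helper name (`within_upload_limit`), not part of any Logz.io tooling.

```python
import os
import tempfile

MAX_BYTES = 50 * 1024 * 1024  # 50 MB upload limit

def within_upload_limit(path: str) -> bool:
    """Return True if the log file is small enough to upload."""
    return os.path.getsize(path) <= MAX_BYTES

# Demo with a tiny temporary file standing in for a log file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"example log line\n")
assert within_upload_limit(f.name)
os.remove(f.name)
```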
You can add your buckets directly from Logz.io by providing your S3 credentials and configuration.
Configure Logz.io to fetch logs from an S3 bucket
Before you begin, you’ll need:
s3:GetObject permissions for the required S3 bucket
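A minimal IAM policy granting that permission might look like the following. The bucket name is a placeholder; only `s3:GetObject` is stated as required above, so nothing else is included here.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```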
Add a new S3 bucket using the dedicated Logz.io configuration wizard
Log in to the Logz.io app to use the dedicated configuration wizard and add a new S3 bucket.
- Click + Add a bucket
- Select your preferred method of authentication: an IAM role or access keys. The configuration wizard opens.
- Select the hosting region from the dropdown list.
- Provide the S3 bucket name.
- (Optional) Add a prefix.
- Save your information.
Logz.io fetches only logs generated after the S3 bucket is configured; it cannot fetch older logs retroactively.
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open Kibana.
If you still don’t see your logs, see log shipping troubleshooting.