AWS EC2 Auto Scaling
Amazon EC2 Auto Scaling integration with Logz.io allows you to monitor the reliability, availability, and performance of all your EC2 instances in one place.
Logs
This service integration is specifically designed to work with the destination bucket to which the service writes its logs.
It is based on the service's naming convention and path structure.
If you're looking to ship the service's logs from a different bucket, please use the S3 Bucket shipping method instead.
Before you begin:
- If you plan on using an access key to authenticate your connection, you'll need to set the s3:ListBucket and s3:GetObject permissions for the required S3 bucket.
- If you plan on using an IAM role to authenticate your connection, you can get the role policy by filling out the bucket information and clicking the Get the role policy button.
- Make sure your file names are in ascending alphanumeric order. This is important because the S3 fetcher's offset is determined by the name of the last file fetched. We recommend using standard AWS naming conventions to determine the file name ordering and to avoid log duplication.
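If you are authenticating with an access key, a minimal bucket-read policy covering the two permissions above might look like the following sketch. The bucket name is the example used later in this guide; substitute your own. Note that s3:ListBucket applies to the bucket ARN, while s3:GetObject applies to the object ARNs under it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::aws-cloudtrail-logs-486140753397-9f0d7dbd"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::aws-cloudtrail-logs-486140753397-9f0d7dbd/*"
    }
  ]
}
```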
Send your logs to an S3 bucket
Logz.io fetches your CloudTrail logs from an S3 bucket.
For help with setting up a new trail, see Overview for Creating a Trail from AWS.
Verify bucket definition on AWS
Navigate to the location of your trail logs on AWS:
And verify the definition of the bucket is under the CloudTrail path:
Region data must be created under the CloudTrail path BEFORE the S3 bucket is defined on Logz.io. Otherwise, you won't be able to proceed with sending CloudTrail data to Logz.io.
Next, note the bucket's name and the way the prefix is constructed, for example:

- Bucket name: aws-cloudtrail-logs-486140753397-9f0d7dbd
- Prefix name: AWSLogs/486140753397/CloudTrail/

You'll need these values when adding your S3 bucket information.
Add your S3 bucket information
To use the S3 fetcher, log into your Logz.io account, and go to the CloudTrail log shipping page.
- Click + Add a bucket
- Select your preferred method of authentication - an IAM role or access keys.
The configuration wizard will open.
- Provide the S3 bucket name.
- Provide your Prefix, which is your CloudTrail path. See further details below.
- There is no Region selection box; Logz.io pulls data from all AWS regions for the specified bucket and account.
- Choose whether you want to include the source file path. This saves the path of the file as a field in your log.
- Save your information.
Getting the information from your CloudTrail AWS path
You may need to fill in two parameters when creating the bucket: {BUCKET_NAME} and {PREFIX}. You can find them in your CloudTrail AWS path. The AWS path structure for CloudTrail looks like the example below:
{BUCKET_NAME}/{PREFIX_IF_EXISTS}/cloudtrail/AWSLogs/{AWS_ACCOUNT_ID}/CloudTrail/
{BUCKET_NAME} is your S3 bucket name.
{PREFIX} is your CloudTrail path. The prefix is generated by default and represents the complete path inside the bucket up to the regions section. It should look like this:
AWSLogs/{AWS_ACCOUNT_ID}/CloudTrail/
Logz.io fetches logs that are generated after configuring an S3 bucket. Logz.io cannot fetch past logs retroactively.
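As a sanity check, the expected prefix can be derived from the path structure above. A minimal sketch (the account ID is the example used in this guide, and `cloudtrail_prefix` is a hypothetical helper, not part of any Logz.io or AWS SDK):

```python
def cloudtrail_prefix(account_id: str, custom_prefix: str = "") -> str:
    """Build the expected CloudTrail prefix inside the bucket.

    CloudTrail writes logs under AWSLogs/{AWS_ACCOUNT_ID}/CloudTrail/,
    optionally nested under a custom prefix configured on the trail.
    """
    base = f"AWSLogs/{account_id}/CloudTrail/"
    return f"{custom_prefix.rstrip('/')}/{base}" if custom_prefix else base

print(cloudtrail_prefix("486140753397"))
# AWSLogs/486140753397/CloudTrail/
```

The returned string is what goes into the Prefix field in the Logz.io configuration wizard.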
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.
If you still don't see your logs, see log shipping troubleshooting.
Troubleshooting
Problem: Failed to save bucket configuration
The following error appears when you're trying to create a bucket:
AWS failed to create cloudtrail bucket. Exception AWS bucket is empty: 403.
Possible cause
The bucket's location is incorrect or might be missing the correct prefix.
Suggested remedy
- Head to the CloudTrail console on AWS and check the relevant trail:
- Verify that the location of the trail is correct:
And verify that the prefix contains all parts:
In this case, the cause of the error is that the location is empty or that the prefix is wrong. The bucket should be aws-cloudtrail-logs-486140753397-9f0d7dbd, and the prefix should be AWSLogs/486140753397/CloudTrail/. You can click on the prefix to verify whether it is empty.
Once you fix these issues, you can return to Logz.io to create the CloudTrail bucket.
Metrics
For a much easier and more efficient way to collect and send metrics, consider using the Logz.io telemetry collector.
Deploy this integration to send your Amazon EC2 Auto Scaling metrics to Logz.io.
This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon EC2 Auto Scaling metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
Before you begin, you'll need:
- An active Logz.io account
Configure AWS to forward metrics to Logz.io
1. Set the required minimum IAM permissions
Make sure you've configured the minimum required IAM permissions as follows:
- Amazon S3:
s3:CreateBucket
s3:DeleteBucket
s3:PutObject
s3:GetObject
s3:DeleteObject
s3:ListBucket
s3:AbortMultipartUpload
s3:GetBucketLocation
- AWS Lambda:
lambda:CreateFunction
lambda:DeleteFunction
lambda:InvokeFunction
lambda:GetFunction
lambda:UpdateFunctionCode
lambda:UpdateFunctionConfiguration
lambda:AddPermission
lambda:RemovePermission
lambda:ListFunctions
- Amazon CloudWatch:
cloudwatch:PutMetricData
cloudwatch:PutMetricStream
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
logs:DeleteLogGroup
logs:DeleteLogStream
- AWS Kinesis Firehose:
firehose:CreateDeliveryStream
firehose:DeleteDeliveryStream
firehose:PutRecord
firehose:PutRecordBatch
- IAM:
iam:PassRole
iam:CreateRole
iam:DeleteRole
iam:AttachRolePolicy
iam:DetachRolePolicy
iam:GetRole
iam:CreatePolicy
iam:DeletePolicy
iam:GetPolicy
- Amazon CloudFormation:
cloudformation:CreateStack
cloudformation:DeleteStack
cloudformation:UpdateStack
cloudformation:DescribeStacks
cloudformation:DescribeStackEvents
cloudformation:ListStackResources
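If you prefer to manage these permissions as a single policy document, they can be grouped into one statement per service. A sketch showing the pattern for two of the services above (extend the Action lists analogously for the others; Resource is left broad here and should be scoped down for your deployment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3Access",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket", "s3:DeleteBucket", "s3:PutObject",
        "s3:GetObject", "s3:DeleteObject", "s3:ListBucket",
        "s3:AbortMultipartUpload", "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "FirehoseAccess",
      "Effect": "Allow",
      "Action": [
        "firehose:CreateDeliveryStream", "firehose:DeleteDeliveryStream",
        "firehose:PutRecord", "firehose:PutRecordBatch"
      ],
      "Resource": "*"
    }
  ]
}
```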
2. Create stack in the relevant region
To deploy this project, click the button that matches the region you wish to deploy your stack to:
3. Specify stack details
Specify the stack details as per the table below, check the checkboxes and select Create stack.
Parameter | Description | Required/Default |
---|---|---|
logzioListener | Logz.io listener URL for your region (for more details, see the regions page), e.g., https://listener.logz.io:8053. | Required |
logzioToken | Your Logz.io metrics shipping token. | Required |
awsNamespaces | Comma-separated list of AWS namespaces to monitor. See this list of namespaces. Use value all-namespaces to automatically add all namespaces. | At least one of awsNamespaces or customNamespace is required |
customNamespace | A custom namespace for CloudWatch metrics. Used to specify a namespace unique to your setup, separate from the standard AWS namespaces. | At least one of awsNamespaces or customNamespace is required |
logzioDestination | Your Logz.io destination URL. Choose the relevant endpoint from the drop down list based on your Logz.io account region. | Required |
httpEndpointDestinationIntervalInSeconds | Buffer time in seconds before Kinesis Data Firehose delivers data. | 60 |
httpEndpointDestinationSizeInMBs | Buffer size in MBs before Kinesis Data Firehose delivers data. | 5 |
debugMode | Enable debug mode for detailed logging (true/false). | false |
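The stack can also be created programmatically. A hedged sketch using boto3: the parameter keys match the table above, while the stack name, template URL, and parameter values are illustrative placeholders, and `stack_parameters`/`create_metric_stream_stack` are hypothetical helpers:

```python
def stack_parameters(listener_url, shipping_token, namespaces, debug=False):
    """Build the CloudFormation Parameters payload matching the table above."""
    return [
        {"ParameterKey": "logzioListener", "ParameterValue": listener_url},
        {"ParameterKey": "logzioToken", "ParameterValue": shipping_token},
        {"ParameterKey": "awsNamespaces", "ParameterValue": ",".join(namespaces)},
        {"ParameterKey": "debugMode", "ParameterValue": str(debug).lower()},
    ]

def create_metric_stream_stack(template_url, parameters, region="us-east-1"):
    """Launch the stack. Requires AWS credentials with the IAM permissions listed above."""
    import boto3  # imported lazily so the payload builder runs without the SDK installed
    cloudformation = boto3.client("cloudformation", region_name=region)
    return cloudformation.create_stack(
        StackName="logzio-metric-stream",       # illustrative stack name
        TemplateURL=template_url,               # placeholder; use the region-specific template from step 2
        Parameters=parameters,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles
    )

params = stack_parameters(
    "https://listener.logz.io:8053",
    "<<METRICS-SHIPPING-TOKEN>>",
    ["AWS/AutoScaling", "AWS/EC2"],
)
```

Using the console wizard from step 3 is equivalent; this only shows how the table's parameters map onto a `create_stack` call.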
4. View your metrics
Allow some time for data ingestion, then log in to your Logz.io Metrics account and open the Logz.io Metrics tab to view the metrics on the main dashboard.