AWS RDS
Logs
Deploying logzio-mysql-logs directly via Docker
Before you begin, you'll need:
- A MySQL database hosted on Amazon RDS
- An active Logz.io account
Pull Docker image
docker pull logzio/mysql-logs
Run the container
docker run -d --name logzio-mysql-logs \
-e LOGZIO_TOKEN=<<LOG-SHIPPING-TOKEN>> \
[-e LOGZIO_LISTENER=<<LISTENER-HOST>>] \
-e RDS_IDENTIFIER=<<YOUR_DB_IDENTIFIER>> \
[-e AWS_ACCESS_KEY=<<YOUR_ACCESS_KEY>>] \
[-e AWS_SECRET_KEY=<<YOUR_SECRET_KEY>>] \
[-e AWS_REGION=<<YOUR_REGION>>] \
[-e RDS_ERROR_LOG_FILE=<<PATH-TO-ERROR-LOG-FILE>>] \
[-e RDS_SLOW_LOG_FILE=<<PATH-TO-SLOW-LOG-FILE>>] \
[-e RDS_LOG_FILE=<<PATH-TO-LOG-FILE>>] \
-v path_to_directory:/var/log/logzio \
-v path_to_directory:/var/log/mysql \
logzio/mysql-logs:latest
For example:
docker run -d --name logzio-mysql-logs \
-e LOGZIO_TOKEN="<<LOG-SHIPPING-TOKEN>>" \
-e LOGZIO_LISTENER_HOST="<<LISTENER-HOST>>" \
-e RDS_IDENTIFIER="<<YOUR_DB_IDENTIFIER>>" \
-v /var/log/logzio:/var/log/logzio \
-v /var/log/mysql:/var/log/mysql \
logzio/mysql-logs:latest
Parameters
Parameter | Description | Required/Default |
---|---|---|
<<LOG-SHIPPING-TOKEN>> | Your Logz.io account token. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
<<LISTENER-HOST>> | Your Logz.io account listener URL. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether you use HTTP or HTTPS: HTTP = 8070, HTTPS = 8071. | Required. Default: listener.logz.io |
<<YOUR_DB_IDENTIFIER>> | The RDS identifier of the host from which you want to read logs. | Required |
<<YOUR_ACCESS_KEY>> | An AWS IAM access key with permissions to read RDS logs (the download-db-log-file-portion and describe-db-log-files actions are needed). | Optional |
<<YOUR_SECRET_KEY>> | An AWS IAM secret key with permissions to read RDS logs (the download-db-log-file-portion and describe-db-log-files actions are needed). | Optional |
<<YOUR_REGION>> | Your AWS region. | Optional. Default: us-east-1 |
<<PATH-TO-ERROR-LOG-FILE>> | The path to the RDS error log file. | Optional. Default: error/mysql-error.log |
<<PATH-TO-SLOW-LOG-FILE>> | The path to the RDS slow query log file. | Optional. Default: slowquery/mysql-slowquery.log |
<<PATH-TO-LOG-FILE>> | The path to the RDS general log file. | Optional. Default: general/mysql-general.log |
Below is an example configuration for running the Docker container:
docker run -d \
--name logzio-mysql-logs \
-e LOGZIO_TOKEN=<<LOG-SHIPPING-TOKEN>> \
-e AWS_ACCESS_KEY=<<YOUR_ACCESS_KEY>> \
-e AWS_SECRET_KEY=<<YOUR_SECRET_KEY>> \
-e AWS_REGION=<<YOUR_REGION>> \
-e RDS_IDENTIFIER=<<YOUR_DB_IDENTIFIER>> \
-e RDS_ERROR_LOG_FILE=error/mysql-error.log \
-e RDS_SLOW_LOG_FILE=slowquery/mysql-slowquery.log \
-e RDS_LOG_FILE=general/mysql-general.log \
-v /var/log/logzio:/var/log/logzio \
-v /var/log/mysql:/var/log/mysql \
logzio/mysql-logs:latest
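To confirm the shipper started correctly, you can inspect the container with standard Docker commands (the output will vary with your setup):

```shell
# Check that the container is up and has not restarted repeatedly
docker ps --filter name=logzio-mysql-logs

# Tail the shipper's own output for authentication or connectivity errors
docker logs --tail 50 logzio-mysql-logs
```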
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.
If you still don't see your logs, see log shipping troubleshooting.
Deploying logzio-mysql-logs directly via Kubernetes
Before you begin, you'll need:
- A MySQL database hosted on Amazon RDS
- Destination port 5015 open on your firewall for outgoing traffic.
- An active Logz.io account
This is a basic deployment. If you need to apply advanced configurations, adjust and edit the deployment accordingly.
Create monitoring namespace
If you don't already have a monitoring namespace in your cluster, create one using the following command:
kubectl create namespace monitoring
The logzio-mysql-logs deployment will be created under this namespace.
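You can verify the namespace exists before continuing:

```shell
kubectl get namespace monitoring
```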
Store your credentials
Save your Logz.io shipping credentials as a Kubernetes secret using the following command:
kubectl create secret generic logzio-logs-secret -n monitoring \
--from-literal=logzio-logs-shipping-token='<<LOG-SHIPPING-TOKEN>>' \
--from-literal=logzio-logs-listener='<<LISTENER-HOST>>' \
--from-literal=rds-identifier='<<RDS-IDENTIFIER>>'
# Add any of the following lines to the command above if you need them:
#--from-literal=aws-access-key='<<AWS-ACCESS-KEY>>'
#--from-literal=aws-secret-key='<<AWS-SECRET-KEY>>'
#--from-literal=rds-error-log-file='<<RDS-ERROR-LOG-FILE-PATH>>'
#--from-literal=rds-slow-log-file='<<RDS-SLOW-LOG-FILE-PATH>>'
#--from-literal=rds-log-file='<<RDS-LOG-FILE-PATH>>'
If you're deploying to an EKS cluster that has the appropriate IAM role permissions, you don't need to specify your AWS keys.
Replace the placeholders (indicated by double angle brackets, << >>) with your values:
Parameter | Description | Required/Default |
---|---|---|
logzio-logs-shipping-token | Your Logz.io account token. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
logzio-logs-listener | Listener URL. Replace <<LISTENER-HOST>> with the host for your region. | Required. Default: listener.logz.io |
rds-identifier | The RDS identifier of the host from which you want to read logs. | Required |
aws-access-key | An AWS IAM access key with permissions to read RDS logs (the download-db-log-file-portion and describe-db-log-files actions are needed). | Optional |
aws-secret-key | An AWS IAM secret key with permissions to read RDS logs (the download-db-log-file-portion and describe-db-log-files actions are needed). | Optional |
rds-error-log-file | The path to the RDS error log file. | Optional. Default: error/mysql-error.log |
rds-slow-log-file | The path to the RDS slow query log file. | Optional. Default: slowquery/mysql-slowquery.log |
rds-log-file | The path to the RDS general log file. | Optional. Default: general/mysql-general.log |
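After creating the secret, you can confirm it holds the expected keys without printing the secret values (standard kubectl commands):

```shell
# Lists the secret's keys and the byte size of each value, but not the values themselves
kubectl describe secret logzio-logs-secret -n monitoring
```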
Deploy
Run the following command:
kubectl apply -f https://raw.githubusercontent.com/logzio/logzio-mysql-logs/master/k8s/logzio-deployment.yaml
If you chose to use any of the optional parameters in the previous step, download the deployment file, uncomment the environment variables you wish to use, and apply your edited copy instead.
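Once applied, you can confirm the pod is running and check its output for shipping errors (this assumes the deployment is named logzio-mysql-logs, as in the manifest above):

```shell
# The pod should reach the Running state
kubectl get pods -n monitoring

# Check recent output for authentication or connectivity errors
kubectl logs -n monitoring deployment/logzio-mysql-logs --tail=50
```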
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.
If you still don’t see your logs, see log shipping troubleshooting.
For a much easier and more efficient way to collect and send metrics, consider using the Logz.io telemetry collector.
Metrics
Deploy this integration to send your Amazon RDS metrics to Logz.io.
This integration creates a Kinesis Data Firehose delivery stream that links to your Amazon RDS metrics stream and then sends the metrics to your Logz.io account. It also creates a Lambda function that adds AWS namespaces to the metric stream, and a Lambda function that collects and ships the resources' tags.
Install the pre-built dashboard to enhance the observability of your metrics.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.
Before you begin, you'll need:
- An active Logz.io account
Configure AWS to forward metrics to Logz.io
1. Set the required minimum IAM permissions
Configure the minimum required IAM permissions as follows:
- Amazon S3:
s3:CreateBucket
s3:DeleteBucket
s3:PutObject
s3:GetObject
s3:DeleteObject
s3:ListBucket
s3:AbortMultipartUpload
s3:GetBucketLocation
- AWS Lambda:
lambda:CreateFunction
lambda:DeleteFunction
lambda:InvokeFunction
lambda:GetFunction
lambda:UpdateFunctionCode
lambda:UpdateFunctionConfiguration
lambda:AddPermission
lambda:RemovePermission
lambda:ListFunctions
- Amazon CloudWatch:
cloudwatch:PutMetricData
cloudwatch:PutMetricStream
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
logs:DeleteLogGroup
logs:DeleteLogStream
- AWS Kinesis Firehose:
firehose:CreateDeliveryStream
firehose:DeleteDeliveryStream
firehose:PutRecord
firehose:PutRecordBatch
- IAM:
iam:PassRole
iam:CreateRole
iam:DeleteRole
iam:AttachRolePolicy
iam:DetachRolePolicy
iam:GetRole
iam:CreatePolicy
iam:DeletePolicy
iam:GetPolicy
- Amazon CloudFormation:
cloudformation:CreateStack
cloudformation:DeleteStack
cloudformation:UpdateStack
cloudformation:DescribeStacks
cloudformation:DescribeStackEvents
cloudformation:ListStackResources
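These permissions can be granted up front as a customer-managed policy. Below is a sketch for the Amazon S3 block above, using the AWS CLI (the policy name is illustrative; repeat the same pattern for the other services listed, and scope Resource more tightly if your security policy requires it):

```shell
aws iam create-policy \
  --policy-name logzio-rds-metrics-s3 \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket", "s3:DeleteBucket", "s3:PutObject", "s3:GetObject",
        "s3:DeleteObject", "s3:ListBucket", "s3:AbortMultipartUpload",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }]
  }'
```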
2. Create stack in the relevant region
To deploy this project, click the button that matches the region you wish to deploy your stack to:
3. Specify stack details
Specify the stack details as per the table below, select the acknowledgement checkboxes, and select Create stack.
Parameter | Description | Required/Default |
---|---|---|
logzioListener | Logz.io listener URL for your region (for more details, see the regions page), e.g., https://listener.logz.io:8053. | Required |
logzioToken | Your Logz.io metrics shipping token. | Required |
awsNamespaces | Comma-separated list of AWS namespaces to monitor. See this list of namespaces. Use value all-namespaces to automatically add all namespaces. | At least one of awsNamespaces or customNamespace is required |
customNamespace | A custom namespace for CloudWatch metrics. Used to specify a namespace unique to your setup, separate from the standard AWS namespaces. | At least one of awsNamespaces or customNamespace is required |
logzioDestination | Your Logz.io destination URL. Choose the relevant endpoint from the drop down list based on your Logz.io account region. | Required |
httpEndpointDestinationIntervalInSeconds | Buffer time in seconds before Kinesis Data Firehose delivers data. | 60 |
httpEndpointDestinationSizeInMBs | Buffer size in MBs before Kinesis Data Firehose delivers data. | 5 |
debugMode | Enable debug mode for detailed logging (true/false). | false |
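After the stack finishes creating, you can confirm the provisioned resources exist with the AWS CLI (configured for the same region; the stream names depend on your stack):

```shell
# The stack should report CREATE_COMPLETE
aws cloudformation describe-stacks --query 'Stacks[].StackStatus'

# The CloudWatch metric stream and Firehose delivery stream created by the stack
aws cloudwatch list-metric-streams
aws firehose list-delivery-streams
```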
4. View your metrics
Allow some time for data ingestion, then open your Logz.io metrics account.