Manual configuration with a Lambda function

Create a new Lambda function

This Lambda function consumes records from a Kinesis data stream and sends the logs to Logz.io in bulk over HTTP.

Open the AWS Lambda Console, and click Create function. Choose Author from scratch, and use this information:

  • Name: We suggest adding the log type to the name, but you can name this function whatever you want.
  • Runtime: Choose Python 3.7
  • Role: Use a role that has AWSLambdaKinesisExecutionRole permissions.

Click Create Function (bottom right corner of the page). After a few moments, you’ll see configuration options for your Lambda function.

You’ll need this page later on, so keep it open.

Download the Kinesis stream shipper

Download the latest Kinesis stream shipper. It is a zip file.
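
If you prefer the command line, you can fetch the zip with curl. The release URL below is an assumption — check the logzio_aws_serverless releases page on GitHub for the actual asset name and version before using it.

```shell
# Hypothetical release URL — confirm the real asset name on the
# logzio_aws_serverless GitHub releases page before running this.
curl -fL -o logzio-kinesis.zip \
  "https://github.com/logzio/logzio_aws_serverless/releases/latest/download/logzio-kinesis.zip"
```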

Upload the zip file and set environment variables

In the Function code section of Lambda, find the Code entry type list. Choose Upload a .ZIP file from this list.

Click Upload, and choose the zip file you downloaded earlier (logzio-kinesis.zip).
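
The console upload above can also be done with the AWS CLI. This is a sketch: `logzio-kinesis-shipper` is a placeholder — substitute the name you gave your function.

```shell
# Upload logzio-kinesis.zip as the function's code.
# "logzio-kinesis-shipper" is a placeholder function name.
aws lambda update-function-code \
  --function-name logzio-kinesis-shipper \
  --zip-file fileb://logzio-kinesis.zip
```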

In the Environment variables section, set your Logz.io account token, URL, and log type, and any other variables that you need to use.

Environment variables

| Parameter | Description | Required/Default |
|---|---|---|
| TOKEN | The token of the account you want to ship to. | Required |
| REGION | Two-letter region code, or blank for US East (Northern Virginia). This determines your listener URL (where you're shipping the logs to) and API URL. You can find your region code in the Regions and URLs table. | -- |
| URL | (Deprecated) Use REGION instead. Protocol, listener host, and port (for example, `https://<<LISTENER-HOST>>:8071`). Replace `<<LISTENER-HOST>>` with your region's listener host (for example, `listener.logz.io`). For more information on finding your account's region, see Account region. | -- |
| TYPE | The log type you'll use with this Lambda. This can be a built-in log type, or a custom log type. You should create a new Lambda for each log type you use. | Default: `kinesis_lambda` |
| FORMAT | `json` or `text`. If `json`, the Lambda function will attempt to parse the message field as JSON and populate the event data with the parsed fields. | Default: `text` |
| COMPRESS | Set to `true` to compress logs before sending them. Set to `false` to send uncompressed logs. | Default: `false` |
| MESSAGES_ARRAY | (Optional) Name of the field containing a JSON array that will be used to split the log document. This option only works if FORMAT is set to `json`. | -- |

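As a sketch, the same variables can be set from the AWS CLI. The function name, token, and region value below are placeholders — substitute your own.

```shell
# Set the shipper's environment variables in one call.
# Replace the placeholder values with your own account details.
aws lambda update-function-configuration \
  --function-name logzio-kinesis-shipper \
  --environment "Variables={TOKEN=<<SHIPPING-TOKEN>>,REGION=us,TYPE=kinesis_lambda,FORMAT=text,COMPRESS=false}"
```
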
Configure the function’s basic settings

In Basic settings, we recommend starting with these settings:

  • Memory: 512 MB
  • Timeout: 1 min 0 sec

These default settings are just a starting point. Check your Lambda usage regularly, and adjust these values if you need to.
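
If you manage the function from the CLI instead of the console, the same starting values can be applied like this (the function name is a placeholder):

```shell
# 512 MB memory and a 60-second timeout — the suggested starting point.
aws lambda update-function-configuration \
  --function-name logzio-kinesis-shipper \
  --memory-size 512 \
  --timeout 60
```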

Set the Kinesis event trigger

Find the Add triggers list (left side of the Designer panel). Choose Kinesis from this list.

Below the Designer, you’ll see the Configure triggers panel. Choose the Kinesis stream that the Lambda function will watch.

Click Add, and then click Save at the top of the page.
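
The same trigger can be created from the AWS CLI by adding an event source mapping. The stream ARN and function name below are placeholders:

```shell
# Connect the Kinesis stream to the Lambda function.
# Replace the ARN placeholders with your region, account ID, and stream name.
aws lambda create-event-source-mapping \
  --function-name logzio-kinesis-shipper \
  --event-source-arn arn:aws:kinesis:<<REGION>>:<<ACCOUNT-ID>>:stream/<<KINESIS-STREAM-NAME>> \
  --starting-position LATEST
```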

Check Logz.io for your logs

Give your logs some time to get from your system to ours, and then open Kibana.

If you still don’t see your logs, see log shipping troubleshooting.

Automated CloudFormation deployment

Before you begin, you’ll need:

  • AWS CLI
  • An S3 bucket to store the CloudFormation package

Download the Kinesis stream shipper

Download the latest Kinesis stream shipper. It is a zip file.

Create the CloudFormation package and upload to S3

Create the CloudFormation package using the AWS CLI. Replace <<YOUR-S3-BUCKET>> with the S3 bucket name where you’ll be uploading this package.

```shell
curl -o sam-template.yaml https://raw.githubusercontent.com/logzio/logzio_aws_serverless/master/python3/kinesis/sam-template.yaml
aws cloudformation package \
  --template sam-template.yaml \
  --output-template-file kinesis-template.output.yaml \
  --s3-bucket <<YOUR-S3-BUCKET>>
```

Deploy the CloudFormation package

Deploy the CloudFormation package using the AWS CLI.

For a complete list of options, see the configuration parameters below the code block. 👇

```shell
aws cloudformation deploy \
  --template-file $(pwd)/kinesis-template.output.yaml \
  --stack-name logzio-kinesis-logs-lambda-stack \
  --parameter-overrides \
    LogzioTOKEN='<<SHIPPING-TOKEN>>' \
    KinesisStream='<<KINESIS-STREAM-NAME>>' \
  --capabilities "CAPABILITY_IAM"
```

Parameters

| Parameter | Description | Required/Default |
|---|---|---|
| LogzioTOKEN | Replace `<<SHIPPING-TOKEN>>` with the token of the account you want to ship to. | Required |
| KinesisStream | The name of the Kinesis stream where this function will listen for updates. | Required |
| LogzioREGION | Two-letter region code, or blank for US East (Northern Virginia). This determines your listener URL (where you're shipping the logs to) and API URL. You can find your region code in the Regions and URLs table. | -- |
| LogzioURL | (Deprecated) Use LogzioREGION instead. Protocol, listener host, and port (for example, `https://<<LISTENER-HOST>>:8071`). Replace `<<LISTENER-HOST>>` with your region's listener host. | -- |
| LogzioTYPE | The log type you'll use with this Lambda. This can be a built-in log type, or a custom log type. You should create a new Lambda for each log type you use. | Default: `kinesis_lambda` |
| LogzioFORMAT | `json` or `text`. If `json`, the Lambda function will attempt to parse the message field as JSON and populate the event data with the parsed fields. | Default: `text` |
| LogzioCOMPRESS | Set to `true` to compress logs before sending them. Set to `false` to send uncompressed logs. | Default: `false` |
| KinesisStreamBatchSize | The largest number of records to read from your stream at one time. | Default: `100` |
| KinesisStreamStartingPosition | The position in the stream to start reading from. For more information, see ShardIteratorType in the Amazon Kinesis API Reference. | Default: `LATEST` |

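
To confirm the stack deployed successfully, you can inspect its status with the AWS CLI. The stack name matches the one used in the deploy command above:

```shell
# Expect "CREATE_COMPLETE" (or "UPDATE_COMPLETE") when the stack is ready.
aws cloudformation describe-stacks \
  --stack-name logzio-kinesis-logs-lambda-stack \
  --query "Stacks[0].StackStatus"
```
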
Check Logz.io for your logs

Give your logs some time to get from your system to ours, and then open Kibana.

If you still don’t see your logs, see log shipping troubleshooting.