HTTP
- Bulk uploads over HTTP/HTTPS
- Bulk uploads over TCP
- Protobuf via OpenTelemetry
To ship logs directly to the Logz.io listener, send them as minified JSON files over an HTTP/HTTPS connection.
Request path and header
For HTTPS (recommended):
https://<<LISTENER-HOST>>:8071?token=<<LOG-SHIPPING-TOKEN>>&type=<<MY-TYPE>>
For HTTP:
http://<<LISTENER-HOST>>:8070?token=<<LOG-SHIPPING-TOKEN>>&type=<<MY-TYPE>>
Replace the placeholders (indicated by the double angle brackets << >>) to match your specifics:
- Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.
- Replace <<LISTENER-HOST>> with the listener host for your region.
- Replace <<MY-TYPE>> with your log type, declared for parsing purposes. Logz.io applies default parsing pipelines to its built-in log types; if you declare another type, contact support for assistance with custom parsing. The type can't contain spaces. If you don't declare a type, the default is http-bulk.
Request body
The request body is a list of logs in minified JSON format, with each log separated by a newline (\n).
Example:
{"message": "Hello there", "counter": 1}
{"message": "Hello again", "counter": 2}
Limitations
- Max body size: 10 MB (10,485,760 bytes).
- Max log line size: 500,000 bytes.
- A type field in the log overrides the type in the request header.
For example:
echo $'{"message":"hello there", "counter": 1}\n{"message":"hello again", "counter": 2}' \
| curl -X POST "http://<<LISTENER-HOST>>:8070?token=<<LOG-SHIPPING-TOKEN>>&type=test_http_bulk" \
-H "user-agent:logzio-json-logs" \
-v --data-binary @-
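If you prefer to ship from code rather than curl, the following is a minimal Python sketch of the same bulk upload. It assumes the third-party requests library and uses the same placeholder listener host, token, and an illustrative type:
import json

import requests

# Placeholders -- replace with your region's listener host, your
# log shipping token, and your log type.
LISTENER_HOST = "<<LISTENER-HOST>>"
LOG_SHIPPING_TOKEN = "<<LOG-SHIPPING-TOKEN>>"
LOG_TYPE = "test_http_bulk"

logs = [
    {"message": "hello there", "counter": 1},
    {"message": "hello again", "counter": 2},
]

# One minified JSON object per line, separated by newlines.
body = "\n".join(json.dumps(log, separators=(",", ":")) for log in logs)

response = requests.post(
    f"https://{LISTENER_HOST}:8071",
    params={"token": LOG_SHIPPING_TOKEN, "type": LOG_TYPE},
    headers={"user-agent": "logzio-json-logs"},
    data=body.encode("utf-8"),
)

print(response.status_code)  # 200 means all log lines were accepted
if response.status_code == 400:
    print(response.json())  # per-line counts, as described under Possible responses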
Possible responses
200 OK
All logs received and validated. Allow some time for data ingestion, then open your Logz.io Log Management account.
The response body is empty.
400 BAD REQUEST
Invalid input. Response example:
{
"malformedLines": 2, #The number of log lines that aren't valid JSON
"successfulLines": 10, #The number of valid JSON log lines received
"oversizedLines": 3, #The number of log lines that exceeded the line length limit
"emptyLogLines": 4 #The number of empty log lines
}
401 UNAUTHORIZED
Missing or invalid token query string parameter. Ensure you're using the correct account token.
Response: "Logging token is missing" or "Logging token is not valid".
413 REQUEST ENTITY TOO LARGE
Request body size exceeds 10 MB.
TCP
To ship logs directly to the Logz.io listener, send them as minified JSON files over a TCP connection.
JSON log structure
Follow these practices when shipping JSON logs over TCP:
- Each log must be a single-line JSON object.
- Each log line must be 500,000 bytes or less.
- Each log line must be followed by a \n (even the last log).
- Include your account token as a top-level property: { ... "token": "<<LOG-SHIPPING-TOKEN>>", ... }.
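For example, a complete log line (with the token placeholder still in place) might look like this:
{"token": "<<LOG-SHIPPING-TOKEN>>", "message": "Hello there", "counter": 1}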
Send TLS/SSL streams over TCP
Download the Logz.io public certificate
For TLS shipping, download the Logz.io public certificate to your certificate authority folder.
sudo curl https://raw.githubusercontent.com/logzio/public-certificates/master/AAACertificateServices.crt --create-dirs -o /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt
Send the logs
Using the downloaded certificate, send logs to TCP port 5052 on your region's listener host. For details on finding your account's region, refer to the Account region section.
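As a concrete illustration, here is a minimal Python sketch, assuming the standard socket and ssl modules, the certificate path used in the download command above, and placeholder values for the listener host and token:
import json
import socket
import ssl

LISTENER_HOST = "<<LISTENER-HOST>>"  # placeholder: your region's listener host
CA_FILE = "/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt"

# The account token is a top-level property of each log line.
log = {"token": "<<LOG-SHIPPING-TOKEN>>", "message": "Hello over TCP", "counter": 1}

# One minified, single-line JSON object, followed by \n (even the last log).
line = json.dumps(log, separators=(",", ":")) + "\n"

context = ssl.create_default_context(cafile=CA_FILE)
with socket.create_connection((LISTENER_HOST, 5052)) as sock:
    with context.wrap_socket(sock, server_hostname=LISTENER_HOST) as tls_sock:
        tls_sock.sendall(line.encode("utf-8"))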
Check Logz.io for your logs
Allow some time for data ingestion, then open OpenSearch Dashboards.
Encounter an issue? See our log shipping troubleshooting guide.
Protobuf via OpenTelemetry
This guide provides step-by-step instructions for Logz.io users on how to send logs in Protobuf format using the OTLP listener. Follow these steps to set up your environment and send logs via the OTLP protocol using the protocurl tool.
Download protocurl
protocurl is a tool based on curl and Protobuf, designed for working with Protobuf-encoded requests over HTTP. Follow the instructions on the protocurl GitHub page to download and install the tool on your machine.
Once installed, verify the installation with:
protocurl --version
Download OpenTelemetry Protobuf Definitions
Download the OpenTelemetry Protobuf definitions from the OpenTelemetry-proto GitHub repository.
You need the .proto files to compile Protobuf messages and send logs. Download the repository to a local folder, for example:
git clone https://github.com/open-telemetry/opentelemetry-proto.git ~/Downloads/proto/opentelemetry-proto
Prepare the Command
Once protocurl is installed and the OpenTelemetry Protobuf files are downloaded, you can start sending logs.
Here is the basic structure of the command:
protocurl -v \
-I ~/Downloads/proto/opentelemetry-proto \
-i opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest \
-o opentelemetry.proto.collector.logs.v1.ExportLogsServiceResponse \
-u 'https://otlp-listener.logz.io/v1/logs' \
-H 'Authorization: Bearer <Logzio-Token-Logs>' \
-H 'user-agent: logzio-protobuf-logs' \
-d @export_logs_request.json
Breakdown:
- -I: Points to the location of the OpenTelemetry Protobuf definitions.
- -i: Specifies the Protobuf request type (ExportLogsServiceRequest).
- -o: Specifies the Protobuf response type (ExportLogsServiceResponse).
- -u: URL of the Logz.io OTLP listener endpoint. Adjust the URL for your region using Logz.io region settings.
- -H: Includes headers like the Authorization token and user-agent.
- -d: Specifies the JSON file containing the log data.
Prepare the JSON Data
You need to create the export_logs_request.json file, which contains the structure of the log data to be sent to the OTLP listener. The required fields in the log request are as follows:
{
  "resourceLogs": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": { "stringValue": "example-service" }
          }
        ]
      },
      "scopeLogs": [
        {
          "scope": {
            "name": "example-scope"
          },
          "logRecords": [
            {
              "timeUnixNano": "<timestamp>", // e.g., 1727270794000000000
              "severityNumber": "SEVERITY_NUMBER_INFO",
              "severityText": "INFO",
              "body": { "stringValue": "Log message here" }
            }
          ]
        }
      ]
    }
  ]
}
Key fields:
- timeUnixNano: A required field representing the timestamp in nanoseconds since the Unix epoch (e.g., 1727270794000000000). It must be set manually; a small sketch for filling it in follows this list.
- severityNumber: The log severity level (e.g., SEVERITY_NUMBER_INFO).
- body: The log message content.
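Because timeUnixNano must be set by hand, a small Python sketch like the following can stamp the current time into the request file before you run protocurl. It assumes export_logs_request.json still contains the literal <timestamp> placeholder shown above:
import time
from pathlib import Path

# Current time in nanoseconds since the Unix epoch, e.g. 1727270794000000000.
now_ns = time.time_ns()

path = Path("export_logs_request.json")

# Replace the <timestamp> placeholder from the template with the real value.
path.write_text(path.read_text().replace("<timestamp>", str(now_ns)))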
Sample Output in Console
When you run the command, you should see an output like this in your console:
=========================== POST Request JSON =========================== >>>
{
"resource_logs": [
{
"resource": {
"attributes": [
{
"key": "service.name",
"value": {"string_value": "example-service"}
}
]
},
"scope_logs": [
{
"scope": {
"name": "example-scope"
},
"log_records": [
{
"time_unix_nano": "1727065212000000000",
"severity_number": "SEVERITY_NUMBER_INFO",
"severity_text": "INFO",
"body": {"string_value": "Log message here"}
}
]
}
]
}
]
}
=========================== POST Request Binary =========================== >>>
00000000 0a 5c 0a 23 0a 21 0a 0c 73 65 72 76 69 63 65 2e |.\.#.!..service.|
00000010 6e 61 6d 65 12 11 0a 0f 65 78 61 6d 70 6c 65 2d |name....example-|
...
=========================== POST Response Headers =========================== <<<
HTTP/1.1 200 OK
If everything is set correctly, you should see an HTTP status code 200 OK, indicating the logs were successfully sent.