
MySQL

Logs

Default configuration

Before you begin, you'll need:

Configure MySQL to write general query logs

In the MySQL configuration file (/etc/mysql/my.cnf), paste these lines:

general_log_file = /var/log/mysql/mysql.log
general_log = 1
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes = 1
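
If you're running MySQL 5.6 or later, note that the log_slow_queries directive has been removed; the slow query log is enabled with slow_query_log and slow_query_log_file instead. An equivalent configuration for those versions (same paths as above) would be:

general_log_file = /var/log/mysql/mysql.log
general_log = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1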

Restart MySQL:

sudo /etc/init.d/mysql restart
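
To confirm that logging is enabled after the restart, you can query the server for the relevant variables (adjust the user and authentication to your setup):

sudo mysql -e "SHOW VARIABLES LIKE 'general_log%';"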

Download the Logz.io public certificate

For HTTPS shipping, download the Logz.io public certificate to your certificate authority folder.

sudo curl https://raw.githubusercontent.com/logzio/public-certificates/master/AAACertificateServices.crt --create-dirs -o /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt
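
Optionally, you can verify that the certificate was written to the expected path before referencing it in Filebeat (the path matches the one used in the ssl settings below):

sudo openssl x509 -in /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt -noout -subject -enddate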

Add MySQL as an input in your Filebeat configuration

In the Filebeat configuration file (/etc/filebeat/filebeat.yml), add MySQL to the filebeat.inputs section.

Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to.

note

Filebeat requires a file extension specified for the log input.

# ...
filebeat.inputs:
- type: filestream
  paths:
    - /var/log/mysql/mysql.log
  fields:
    logzio_codec: plain
    # You can manage your tokens at
    # https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping
    token: <<LOG-SHIPPING-TOKEN>>
    type: mysql
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h

- type: filestream
  paths:
    - /var/log/mysql/mysql-slow.log
  fields:
    logzio_codec: plain
    # You can manage your tokens at
    # https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping
    token: <<LOG-SHIPPING-TOKEN>>
    type: mysql_slow_query
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h
  # filestream inputs configure multiline handling through parsers
  parsers:
    - multiline:
        type: pattern
        pattern: '^# Time:'
        negate: true
        match: after

- type: filestream
  paths:
    - /var/log/mysql/error.log
  fields:
    logzio_codec: plain
    # You can manage your tokens at
    # https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping
    token: <<LOG-SHIPPING-TOKEN>>
    type: mysql_error
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h

If you're running Filebeat 7 to 8.1, paste the code block below instead:

# ...
filebeat.inputs:
- type: log
  paths:
    - /var/log/mysql/mysql.log
  fields:
    logzio_codec: plain
    # You can manage your tokens at
    # https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping
    token: <<LOG-SHIPPING-TOKEN>>
    type: mysql
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h

- type: log
  paths:
    - /var/log/mysql/mysql-slow.log
  fields:
    logzio_codec: plain
    # You can manage your tokens at
    # https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping
    token: <<LOG-SHIPPING-TOKEN>>
    type: mysql_slow_query
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h
  multiline:
    pattern: '^# Time:'
    negate: true
    match: after

- type: log
  paths:
    - /var/log/mysql/error.log
  fields:
    logzio_codec: plain
    # You can manage your tokens at
    # https://app.logz.io/#/dashboard/settings/manage-tokens/log-shipping
    token: <<LOG-SHIPPING-TOKEN>>
    type: mysql_error
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h
Preconfigured log types

| Log | Log type | Default log location |
|---|---|---|
| General query log | mysql | /var/log/mysql/mysql.log |
| Slow query log | mysql_slow_query | /var/log/mysql/mysql-slow.log |
| Error log | mysql_error | /var/log/mysql/error.log |

The log type is used to apply the appropriate Logz.io preconfigured parsing pipeline so that your logs will be automatically parsed.

Set Logz.io as the output

If Logz.io is not an output, add it now. Remove all other outputs.

Replace <<LISTENER-HOST>> with the host for your region.

# ...
output.logstash:
  hosts: ["<<LISTENER-HOST>>:5015"]
  ssl:
    certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']
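
Before starting Filebeat, you can validate the configuration and connectivity to the listener with Filebeat's built-in test subcommands:

sudo filebeat test config
sudo filebeat test output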

Start Filebeat

Start or restart Filebeat for the changes to take effect.
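
The exact command depends on how Filebeat was installed; as a sketch for a standard Linux package install:

# systemd installations
sudo systemctl restart filebeat

# SysV init installations
sudo service filebeat restart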

Check Logz.io for your logs

Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.

If you still don't see your logs, see Filebeat troubleshooting.

Set up a Docker sidecar for MySQL

MySQL is an open-source relational database management system. A Docker sidecar is a container that runs alongside your application container (for example, in the same Kubernetes Pod). This integration uses a Docker sidecar to send your MySQL logs to your Logz.io account.

Pull the Docker image

Download the logzio/mysql-logs image:

docker pull logzio/mysql-logs

Run the Docker image

For a complete list of options, see the parameters below the code block.👇

docker run -d --name logzio-mysql-logs \
  -e LOGZIO_TOKEN="<<LOG-SHIPPING-TOKEN>>" \
  -e LOGZIO_LISTENER_HOST="<<LISTENER-HOST>>" \
  -v /var/log/logzio:/var/log/logzio \
  -v /var/log/mysql:/var/log/mysql \
  logzio/mysql-logs:latest
Parameters

| Parameter | Description | Required/Default |
|---|---|---|
| LOGZIO_TOKEN | Your Logz.io log shipping token directs the data securely to your Logz.io Log Management account. The default token is auto-populated in the examples when you're logged into the Logz.io app as an Admin. Manage your tokens. | Required |
| LOGZIO_LISTENER_HOST | Listener URL. Replace <<LISTENER-HOST>> with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | listener.logz.io |
| MYSQL_ERROR_LOG_FILE | The path to the MySQL error log. | Optional. /var/log/mysql/error.log |
| MYSQL_SLOW_LOG_FILE | The path to the MySQL slow query log. | Optional. /var/log/mysql/mysql-slow.log |
| MYSQL_LOG_FILE | The path to the MySQL general log. | Optional. /var/log/mysql/mysql.log |

Below is an example configuration for running the Docker container:

docker run -d \
--name logzio-mysql-logs \
-e LOGZIO_TOKEN="<<LOG-SHIPPING-TOKEN>>" \
-v /path/to/directory/logzio:/var/log/logzio \
-v /path/to/directory/mysql:/var/log/mysql \
--restart=always \
logzio/mysql-logs:latest
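
Once the container is up, you can confirm that it keeps running and is reading the mounted log files with standard Docker commands (the container name matches the examples above):

docker ps --filter name=logzio-mysql-logs
docker logs -f logzio-mysql-logs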

Check Logz.io for your logs

Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards.

If you still don't see your logs, see Filebeat troubleshooting.

Metrics

To send your Prometheus-format MySQL metrics to Logz.io, you need to add the inputs.mysql and outputs.http plug-ins to your Telegraf configuration file.

Configure Telegraf to send your metrics data to Logz.io

Set up Telegraf v1.17 or higher

For Windows

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.27.3_windows_amd64.zip

After downloading the archive, extract its content into C:\Program Files\Logzio\telegraf\.

The configuration file (telegraf.conf) is located in C:\Program Files\Logzio\telegraf\.

For MacOS

brew install telegraf

The configuration file is located at /usr/local/etc/telegraf.conf.

For Linux

Ubuntu & Debian

sudo apt-get update && sudo apt-get install telegraf

The configuration file is located at /etc/telegraf/telegraf.conf.

RedHat and CentOS

sudo yum install telegraf

The configuration file is located at /etc/telegraf/telegraf.conf.

SLES & openSUSE

# add go repository
zypper ar -f obs://devel:languages:go/ go
# install latest telegraf
zypper in telegraf

The configuration file is located at /etc/telegraf/telegraf.conf.

FreeBSD/PC-BSD

sudo pkg install telegraf

The configuration file is located at /etc/telegraf/telegraf.conf.

Add the inputs.mysql plug-in

First you need to configure the input plug-in to enable Telegraf to scrape the MySQL data from your hosts. To do this, add the following code to the configuration file:

[[inputs.mysql]]
servers = ["<<USER-NAME>>:<<PASSWORD>>@<<PROTOCOL>>(<<ADDRESS>>)/?tls=false"]
## e.g.
## servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
## servers = ["user@tcp(127.0.0.1:3306)/?tls=false"]
metric_version = 2
# gather metrics from INFORMATION_SCHEMA.TABLES for the databases on the servers listed above
gather_table_schema = true

# gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST
gather_process_list = true

# gather user statistics from INFORMATION_SCHEMA.USER_STATISTICS
gather_user_statistics = true

# gather auto_increment columns and max values from information schema
gather_info_schema_auto_inc = true

# gather metrics from INFORMATION_SCHEMA.INNODB_METRICS
gather_innodb_metrics = true

# gather metrics from SHOW SLAVE STATUS command output
gather_slave_status = true

# gather metrics from SHOW BINARY LOGS command output
gather_binary_logs = true

# gather metrics from SHOW GLOBAL VARIABLES command output
gather_global_variables = true

# gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE
gather_table_io_waits = true

# gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
gather_table_lock_waits = true

# gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE
gather_index_io_waits = true

# gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
gather_event_waits = true

# gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME
gather_file_events_stats = true

# gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST
gather_perf_events_statements = true

# gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_ACCOUNT_BY_EVENT_NAME
gather_perf_sum_per_acc_per_event = true
  • Replace <<USER-NAME>> with the user name for your MySQL database.
  • Replace <<PASSWORD>> with the password for your MySQL database.
  • Replace <<PROTOCOL>> with the name of your shipping protocol (tcp protocol recommended).
  • Replace <<ADDRESS>> with the address of your MySQL database host. This is localhost if installed locally.
note

The full list of data scraping and configuration options can be found in the Telegraf documentation for the inputs.mysql plug-in.
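
The inputs.mysql plug-in needs a MySQL account that can read the process list, replication status, and the information and performance schemas. As a sketch (the telegraf user name and password here are placeholders, and the exact privileges depend on which gather_* options you enable), you could create a dedicated monitoring user like this:

mysql -u root -p <<'SQL'
-- monitoring user for Telegraf; restrict the host part to where Telegraf runs
CREATE USER 'telegraf'@'localhost' IDENTIFIED BY '<<PASSWORD>>';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'telegraf'@'localhost';
FLUSH PRIVILEGES;
SQL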

Add the outputs.http plug-in

After you create the configuration file, configure the output plug-in to enable Telegraf to send your data to Logz.io in Prometheus format. To do this, add the following code to the configuration file:

[[outputs.http]]
  url = "https://<<LISTENER-HOST>>:8053"
  data_format = "prometheusremotewrite"

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
    Authorization = "Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>"

Replace the placeholders (indicated by double angle brackets << >>) to match your specifics:

  • Replace <<LISTENER-HOST>> with the Logz.io Listener URL for your region, configured to use port 8052 for http traffic, or port 8053 for https traffic.
  • Replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with a token for the Metrics account you want to ship to. Look up your Metrics token.
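
Before starting the Telegraf service, you can do a dry run that prints the gathered MySQL metrics to stdout without sending anything to the output (adjust the configuration file path to your platform):

telegraf --config /etc/telegraf/telegraf.conf --test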

Start Telegraf

On Windows:

telegraf.exe --service start

On MacOS:

telegraf --config telegraf.conf

On Linux:

Linux (sysvinit and upstart installations)

sudo service telegraf start

Linux (systemd installations)

systemctl start telegraf
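
On systemd installations you can confirm that Telegraf started cleanly and inspect its recent log output (assumes the default unit name from the package install):

systemctl status telegraf
sudo journalctl -u telegraf -n 50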

Check Logz.io for your metrics

Install the pre-built dashboard to enhance the observability of your metrics.

To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.