# MongoDB

## Logs
Fluentd is an open source data collector and a great option because of its flexibility. This integration lets you send logs from your MongoDB instances to your Logz.io account using Fluentd.
Before you begin, you'll need:
- MongoDB installed on your host
- Ruby 2.4+ and Ruby-dev
### Configure MongoDB to write logs to a file
In the configuration file of your MongoDB instance, set the database to write logs to a file. You can skip this step if this has already been configured on your MongoDB host.
To do this, add the following to your MongoDB configuration file:
```yaml
systemLog:
  destination: file
  path: "<<MONGODB-FILE-PATH>>"
  logAppend: true
```
- Replace `<<MONGODB-FILE-PATH>>` with the path to the log file, for example, `/var/log/mongodb/mongod.log`.
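MongoDB 4.4 and later write structured logs as one JSON document per line, which is what the JSON parser in the Fluentd pipeline below relies on. As a quick sanity check, a log line should parse cleanly as JSON (the sample line here is illustrative, not captured from a real server):

```python
import json

# Illustrative mongod 4.4+ structured log line (hypothetical sample)
line = (
    '{"t":{"$date":"2023-05-01T12:00:00.000+00:00"},"s":"I","c":"NETWORK",'
    '"id":22943,"ctx":"listener","msg":"Connection accepted"}'
)

record = json.loads(line)
print(record["msg"])          # the human-readable message
print(record["t"]["$date"])   # the timestamp MongoDB attaches
```

If `json.loads` raises an error on lines from your file, the instance is likely writing the pre-4.4 plain-text format.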
Make sure Fluentd can read from the MongoDB log file. You can set this as follows:
**On macOS and Linux**

```shell
sudo chmod 604 <<MONGODB-FILE-PATH>>
```

- Replace `<<MONGODB-FILE-PATH>>` with the path to the MongoDB log file.
**On Windows**

Enable read access in the file properties.
### Install Fluentd

```shell
gem install fluentd
fluentd --setup ./fluent
```

The `--setup` command creates a directory called `fluent`, where we will create the configuration file and Gemfile.
### Create a Gemfile for Fluentd

In your preferred directory, create a Gemfile with the following content:

```ruby
source "https://rubygems.org"

# You can use fixed versions of Fluentd and its plugins
# Add plugins you want to use
gem "fluent-plugin-logzio", "0.0.21"
gem "fluent-plugin-record-modifier"
```
### Configure Fluentd with Logz.io output

Add this code block to your Fluentd configuration file (`fluent.conf` by default). See the configuration parameters below the code block. 👇
```conf
# To ignore fluentd logs
<label @FLUENT_LOG>
  <match fluent.*>
    @type null
  </match>
</label>

# Tailing the MongoDB logs
<source>
  @type tail
  @id mongodb_logs
  path <<MONGODB-FILE-PATH>>
  # If you're running on Windows, change pos_file to a Windows path
  pos_file /var/log/fluentd-mongodb.log.pos
  tag logzio.mongodb.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>

# Parsing the logs
<filter logzio.mongodb.**>
  @type record_modifier
  <record>
    type mongodb-fluentd
    message ${record["msg"]}
    mongodb_timestamp ${record["t"]["$date"]}
    log_id ${record["id"].to_s}
  </record>
  remove_keys msg,t,id
</filter>

# Sending logs to Logz.io
<match logzio.mongodb.**>
  @type logzio_buffered
  endpoint_url https://<<LISTENER-HOST>>:8071?token=<<LOGZIO-SHIPPING-TOKEN>>
  output_include_time true
  output_include_tags true
  http_idle_timeout 10
  <buffer>
    @type memory
    flush_thread_count 4
    flush_interval 3s
    chunk_limit_size 16m # Logz.io bulk limit is decoupled from chunk_limit_size. Set whatever you want.
    queue_limit_length 4096
  </buffer>
</match>
```
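To make the filter's effect concrete, here is a Python sketch of the transformation the `record_modifier` block performs on each structured log record. The field names (`msg`, `t`, `id`) follow MongoDB's JSON log format; the sample record is illustrative:

```python
import json

def modify_record(record):
    """Mimic the record_modifier filter: rename msg/t/id, tag the log type."""
    out = dict(record)
    out["type"] = "mongodb-fluentd"
    out["message"] = record["msg"]
    out["mongodb_timestamp"] = record["t"]["$date"]
    out["log_id"] = str(record["id"])
    for key in ("msg", "t", "id"):
        out.pop(key, None)  # remove_keys msg,t,id
    return out

# Illustrative mongod structured log record (hypothetical sample)
sample = json.loads(
    '{"t":{"$date":"2023-05-01T12:00:00.000+00:00"},"s":"I","c":"NETWORK",'
    '"id":22943,"ctx":"listener","msg":"Connection accepted"}'
)
print(modify_record(sample))
```

The resulting record carries `message`, `mongodb_timestamp`, and `log_id` instead of the raw MongoDB field names, plus the `type` field that you can later search on in Logz.io.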
### Parameters

| Parameter | Description |
|---|---|
| `<<LOGZIO-SHIPPING-TOKEN>>` | Your Logz.io log shipping token directs the data securely to your Logz.io Log Management account. The default token is auto-populated in the examples when you're logged into the Logz.io app as an Admin. Manage your tokens. |
| `<<LISTENER-HOST>>` | Replace `<<LISTENER-HOST>>` with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. |
| `<<MONGODB-FILE-PATH>>` | Path to the log file of your MongoDB instance. |
| `endpoint_url` | A URL composed of your Logz.io region's listener URL, account token, and log type. Replace `<<LISTENER-HOST>>` with the host for your region. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. |
| `output_include_time` | To add a timestamp to your logs when they're processed, `true` (recommended). Otherwise, `false`. |
| `output_include_tags` | To add the Fluentd tag to logs, `true`. Otherwise, `false`. If `true`, use in combination with `output_tags_fieldname`. |
| `output_tags_fieldname` | If `output_include_tags` is `true`, sets the output tag's field name. The default is `fluentd_tag`. |
| `http_idle_timeout` | Time, in seconds, that the HTTP connection stays open without traffic before timing out. |
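As a sketch, the `endpoint_url` value can be assembled like this. The helper name and the sample listener host are illustrative; the scheme/port pairing follows the table above:

```python
def build_endpoint_url(listener_host, token, use_https=True):
    """Compose the Logz.io endpoint_url: HTTPS pairs with 8071, HTTP with 8070."""
    scheme, port = ("https", 8071) if use_https else ("http", 8070)
    return f"{scheme}://{listener_host}:{port}?token={token}"

# Placeholder values for illustration only
print(build_endpoint_url("listener.logz.io", "MY-SHIPPING-TOKEN"))
```

A mismatched pairing (for example, `https` with port 8070) is a common reason logs fail to arrive.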
### Run Fluentd

```shell
fluentd -c ./fluent.conf --gemfile ./Gemfile
```
### Check Logz.io for your logs

Give your logs some time to get from your system to ours, and then open OpenSearch Dashboards. You can search for `type:mongodb-fluentd` to filter for your MongoDB logs. Your logs should already be parsed, thanks to the Logz.io preconfigured parsing pipeline.

If you still don't see your logs, see log shipping troubleshooting.
## Metrics
To send your Prometheus-format MongoDB metrics to Logz.io, you need to add the inputs.mongodb and outputs.http plug-ins to your Telegraf configuration file.
### Configure Telegraf to send your metrics data to Logz.io

#### Set up Telegraf v1.17 or higher
**For Windows**

```shell
wget https://dl.influxdata.com/telegraf/releases/telegraf-1.27.3_windows_amd64.zip
```

After downloading the archive, extract its content into `C:\Program Files\Logzio\telegraf\`.

The configuration file is located at `C:\Program Files\Logzio\telegraf\`.
**For macOS**

```shell
brew install telegraf
```

The configuration file is located at `/usr/local/etc/telegraf.conf`.
**For Linux**

Ubuntu & Debian:

```shell
sudo apt-get update && sudo apt-get install telegraf
```

The configuration file is located at `/etc/telegraf/telegraf.conf`.

RedHat and CentOS:

```shell
sudo yum install telegraf
```

The configuration file is located at `/etc/telegraf/telegraf.conf`.

SLES & openSUSE:

```shell
# add go repository
zypper ar -f obs://devel:languages:go/ go
# install latest telegraf
zypper in telegraf
```

The configuration file is located at `/etc/telegraf/telegraf.conf`.

FreeBSD/PC-BSD:

```shell
sudo pkg install telegraf
```

The configuration file is located at `/etc/telegraf/telegraf.conf`.
#### Add the inputs.mongodb plug-in

First, you need to configure the input plug-in to enable Telegraf to scrape the MongoDB data from your hosts. To do this, add the following code to the configuration file:

```toml
[[inputs.mongodb]]
  servers = ["mongodb://<<USER-NAME>>:<<PASSWORD>>@<<ADDRESS>>:<<PORT>>"]
  ## An array of URLs of the form:
  ##   "mongodb://" [user ":" pass "@"] host [ ":" port]
  ## For example:
  ##   mongodb://user:auth_key@10.10.3.30:27017,
  ##   mongodb://10.10.3.33:18832,
  ##   servers = ["mongodb://127.0.0.1:27017,10.10.3.33:18832,10.10.5.55:6565"]
  gather_cluster_status = true
  gather_perdb_stats = true
  gather_col_stats = true
```
- Replace `<<USER-NAME>>` with the user name for your MongoDB database.
- Replace `<<PASSWORD>>` with the password for your MongoDB database.
- Replace `<<ADDRESS>>` with the address of your MongoDB database host. This is `localhost` if installed locally.
- Replace `<<PORT>>` with the port allocated to the MongoDB database on your host.
The full list of data scraping and configuration options can be found here.
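If your MongoDB credentials contain characters such as `@` or `:`, they must be percent-encoded inside the connection URI, or the driver will misparse the host portion. A sketch (the helper name and sample values are illustrative):

```python
from urllib.parse import quote_plus

def build_mongodb_uri(user, password, address, port=27017):
    """Build the servers entry for inputs.mongodb, percent-encoding credentials."""
    return f"mongodb://{quote_plus(user)}:{quote_plus(password)}@{address}:{port}"

# Placeholder credentials with reserved characters, for illustration
print(build_mongodb_uri("telegraf", "p@ss:word", "localhost"))
```

Here `p@ss:word` becomes `p%40ss%3Aword`, so the `@` separating the credentials from the host remains unambiguous.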
#### Add the outputs.http plug-in

After you create the configuration file, configure the output plug-in to enable Telegraf to send your data to Logz.io in Prometheus format. To do this, add the following code to the configuration file:

```toml
[[outputs.http]]
  url = "https://<<LISTENER-HOST>>:8053"
  data_format = "prometheusremotewrite"

  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
    Authorization = "Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>"
```

Replace the placeholders (indicated by the double angle brackets `<< >>`) to match your specifics:
- Replace `<<LISTENER-HOST>>` with the Logz.io Listener URL for your region, configured to use port 8052 for HTTP traffic, or port 8053 for HTTPS traffic.
- Replace `<<PROMETHEUS-METRICS-SHIPPING-TOKEN>>` with a token for the Metrics account you want to ship to. Look up your Metrics token.
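The header block above amounts to the following set of HTTP headers, sketched here in Python to make the `Bearer` token format explicit (the helper name and sample token are illustrative):

```python
def logzio_remote_write_headers(token):
    """Headers the outputs.http plug-in sends for Prometheus remote write."""
    return {
        "Content-Type": "application/x-protobuf",
        "Content-Encoding": "snappy",
        "X-Prometheus-Remote-Write-Version": "0.1.0",
        "Authorization": f"Bearer {token}",  # note the "Bearer " prefix
    }

# Placeholder token for illustration
print(logzio_remote_write_headers("MY-METRICS-TOKEN")["Authorization"])
```

A missing `Bearer ` prefix in the `Authorization` header is a common cause of rejected metric shipments.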
#### Start Telegraf

**On Windows:**

```shell
telegraf.exe --service start
```

**On macOS:**

```shell
telegraf --config telegraf.conf
```

**On Linux** (sysvinit and upstart installations):

```shell
sudo service telegraf start
```

**On Linux** (systemd installations):

```shell
systemctl start telegraf
```
### Check Logz.io for your metrics
Install the pre-built dashboard to enhance the observability of your metrics.
To view the metrics on the main dashboard, log in to your Logz.io Metrics account, and open the Logz.io Metrics tab.