Prometheus Alerts Migrator

This Helm chart deploys the Prometheus Alerts Migrator as a Kubernetes controller. The controller watches for annotated ConfigMaps containing Prometheus alert rules and alert manager configuration and migrates them to Logz.io's alert and notification format, simplifying alert management in a Logz.io-integrated environment.

Prerequisites

  • Helm 3+
  • Kubernetes 1.19+
  • Logz.io account with API access

Installing the Chart

To install the chart with the release name logzio-prometheus-alerts-migrator:

helm install \
--set config.rulesConfigMapAnnotation="prometheus.io/kube-rules" \
--set config.alertManagerConfigMapAnnotation="prometheus.io/kube-alertmanager" \
--set config.logzioAPIToken="your-logzio-api-token" \
--set config.logzioAPIURL="https://api.logz.io/" \
--set config.rulesDS="<<logzio_metrics_data_source_name>>" \
--set config.env_id="<<env_id>>" \
--set config.workerCount=2 \
logzio-prometheus-alerts-migrator logzio-helm/prometheus-alerts-migrator

Configuration

The following table lists the configurable parameters of the Prometheus Alerts Migrator chart and their default values.

Parameter | Description | Default
----------|-------------|--------
replicaCount | Number of controller replicas | 1
image.repository | Container image repository | logzio/prometheus-alerts-migrator
image.pullPolicy | Container image pull policy | IfNotPresent
image.tag | Container image tag | v1.0.0
serviceAccount.create | Specifies whether a service account should be created | true
serviceAccount.name | The name of the service account to use | ""
config.rulesConfigMapAnnotation | ConfigMap annotation for rules | prometheus.io/kube-rules
config.alertManagerConfigMapAnnotation | ConfigMap annotation for alert manager configuration | prometheus.io/kube-alertmanager
config.logzioAPIToken | Logz.io API token | ""
config.logzioAPIURL | Logz.io API URL | https://api.logz.io/
config.rulesDS | Data source for rules | IntegrationsTeamTesting_metrics
config.env_id | Environment ID | my-env-yotam
config.workerCount | Number of workers | 2
config.ignoreSlackText | Ignore the Slack contact points text field | false
config.ignoreSlackTitle | Ignore the Slack contact points title field | false
rbac.rules | Custom rules for the Kubernetes cluster role | [{apiGroups: [""], resources: ["configmaps"], verbs: ["get", "list", "watch"]}]
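
For anything beyond a quick test, it is often cleaner to collect these overrides in a values file and pass it with -f instead of repeating --set flags. Below is a minimal sketch of such a file; the parameter names mirror the table above, the file name my-values.yml is arbitrary, and the token, data source name, and environment ID are placeholders to replace with your own values:

# my-values.yml -- illustrative overrides for this chart
replicaCount: 1
config:
  rulesConfigMapAnnotation: "prometheus.io/kube-rules"
  alertManagerConfigMapAnnotation: "prometheus.io/kube-alertmanager"
  logzioAPIToken: "your-logzio-api-token"            # placeholder
  logzioAPIURL: "https://api.logz.io/"
  rulesDS: "<<logzio_metrics_data_source_name>>"     # placeholder
  env_id: "<<env_id>>"                               # placeholder
  workerCount: 2

You would then install with helm install logzio-prometheus-alerts-migrator logzio-helm/prometheus-alerts-migrator -f my-values.yml.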

Secret Management

The chart can optionally create a Kubernetes Secret to store sensitive information like the Logz.io API token. You can control the creation and naming of this Secret through the following configurations in the values.yaml file.

Parameter | Description | Default
----------|-------------|--------
secret.create | Determines whether a Secret should be created by the Helm chart | true
secret.name | Specifies the name of the Secret to be used | logzio-api-token

Using an Existing Secret

By default, the chart will create a Secret named logzio-api-token. You can change the name by setting secret.name to your preferred name. If you enable Secret creation, make sure to provide the actual token value in the values.yaml or via the --set flag.

helm install \
--set config.logzioAPIToken=your-logzio-api-token \
logzio-prometheus-alerts-migrator logzio-helm/prometheus-alerts-migrator

If you prefer to manage the Secret outside of the Helm chart (e.g., for security reasons or to use an existing Secret), set secret.create to false and provide the name of your existing Secret in secret.name.

Example of disabling Secret creation and using an existing Secret:

helm install \
--set secret.create=false \
--set secret.name=my-existing-secret \
logzio-prometheus-alerts-migrator logzio-helm/prometheus-alerts-migrator

In this case, ensure that your existing Secret my-existing-secret contains the key token with your Logz.io API token as its value.
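
As a sketch of what such a pre-existing Secret might look like (the namespace below is an assumption; use the namespace you install the chart into):

apiVersion: v1
kind: Secret
metadata:
  name: my-existing-secret
  namespace: monitoring   # assumption: the chart's release namespace
type: Opaque
stringData:
  token: your-logzio-api-token   # key the chart expects; value is a placeholder

Apply it with kubectl apply -f before installing the chart. Using stringData lets you write the token in plain text; Kubernetes stores it base64-encoded.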

ConfigMap Format

The controller is designed to process ConfigMaps containing Prometheus alert rules and Prometheus alert manager configuration. These ConfigMaps must be annotated with a specific key that matches the value of the RULES_CONFIGMAP_ANNOTATION or ALERTMANAGER_CONFIGMAP_ANNOTATION environment variables for the controller to process them.

Example rules ConfigMap

Below is an example of how a rules ConfigMap should be structured:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logzio-rules
  namespace: monitoring
  annotations:
    prometheus.io/kube-rules: "true"
data:
  all_instances_down_otel_collector: |
    alert: Opentelemetry_Collector_Down
    expr: sum(up{app="opentelemetry-collector", job="kubernetes-pods"}) == 0
    for: 5m
    labels:
      team: sre
      severity: major
    annotations:
      description: "The OpenTelemetry collector has been down for more than 5 minutes."
      summary: "Instance down"
  • Replace prometheus.io/kube-rules with the actual annotation you use to identify relevant ConfigMaps. The data section should contain your Prometheus alert rules in YAML format.
  • Deploy the ConfigMap to your cluster using kubectl apply -f <configmap-file>.yml.

Example alert manager ConfigMap

Below is an example of how an alert manager ConfigMap should be structured:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logzio-alertmanager
  namespace: monitoring
  annotations:
    prometheus.io/kube-alertmanager: "true"
data:
  alertmanager.yml: |
    global:
      # Global configurations, adjust these to your SMTP server details
      smtp_smarthost: 'smtp.example.com:587'
      smtp_from: 'alertmanager@example.com'
      smtp_auth_username: 'alertmanager'
      smtp_auth_password: 'password'
    # The root route on which each incoming alert enters.
    route:
      receiver: 'default-receiver'
      group_by: ['alertname', 'env']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 1h
      # Child routes
      routes:
        - match:
            env: production
          receiver: 'slack-production'
          continue: true
        - match:
            env: staging
          receiver: 'slack-staging'
          continue: true
    # Receivers define ways to send notifications about alerts.
    receivers:
      - name: 'default-receiver'
        email_configs:
          - to: 'alerts@example.com'
      - name: 'slack-production'
        slack_configs:
          - api_url: 'https://hooks.slack.com/services/T00000000/B00000000/'
            channel: '#prod-alerts'
      - name: 'slack-staging'
        slack_configs:
          - api_url: 'https://hooks.slack.com/services/T00000000/B11111111/'
            channel: '#staging-alerts'
  • Replace prometheus.io/kube-alertmanager with the actual annotation you use to identify relevant ConfigMaps. The data section should contain your Prometheus alert manager configuration in YAML format.
  • Deploy the ConfigMap to your cluster using kubectl apply -f <configmap-file>.yml.