docs/customizations: Move customization examples to dedicated folder
With the objective of improving our README, customization examples are being moved to a dedicated folder under `docs/`.

Signed-off-by: ArthurSens <arthursens2005@gmail.com>
docs/customizations/alertmanager-configuration.md (new file)

@@ -0,0 +1,40 @@
### Alertmanager configuration

The Alertmanager configuration is located in the `values.alertmanager.config` configuration field. In order to set a custom Alertmanager configuration, simply set this field.

```jsonnet mdox-exec="cat examples/alertmanager-config.jsonnet"
((import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    alertmanager+: {
      config: |||
        global:
          resolve_timeout: 10m
        route:
          group_by: ['job']
          group_wait: 30s
          group_interval: 5m
          repeat_interval: 12h
          receiver: 'null'
          routes:
          - match:
              alertname: Watchdog
            receiver: 'null'
        receivers:
        - name: 'null'
      |||,
    },
  },
}).alertmanager.secret
```

In the above example the configuration has been inlined, but it can just as well be an external file imported in jsonnet via the `importstr` function.

```jsonnet mdox-exec="cat examples/alertmanager-config-external.jsonnet"
((import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    alertmanager+: {
      config: importstr 'alertmanager-config.yaml',
    },
  },
}).alertmanager.secret
```

docs/customizations/components-name-namespace-overrides.md (new file)

@@ -0,0 +1,56 @@
### Components' name and namespace overrides

It is possible to override the namespace where kube-prometheus is deployed, as in the example below:

```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };
```

If preferred, the namespace can also be changed individually per component. It is also possible to change the names of the Prometheus and Alertmanager custom resources, as shown below:

```jsonnet mdox-exec="cat examples/name-namespace-overrides.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },

      prometheus+: {
        namespace: 'foo',
        name: 'bar',
      },

      alertmanager+: {
        namespace: 'bar',
        name: 'foo',
      },
    },
  };

{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
// Add the restricted psp to setup
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
```

@@ -0,0 +1,558 @@
---
weight: 650
toc: true
title: Prometheus Rules and Grafana Dashboards
menu:
  docs:
    parent: kube
lead: Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus
images: []
draft: false
description: Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus
date: "2021-03-08T23:04:32+01:00"
---

`kube-prometheus` ships with a set of default [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [Grafana](http://grafana.com/) dashboards. At some point one might like to extend them; the purpose of this document is to explain how to do this.

All manifests of kube-prometheus are generated using [jsonnet](https://jsonnet.org/), and Prometheus rules and Grafana dashboards in particular follow the [Prometheus Monitoring Mixins proposal](https://docs.google.com/document/d/1A9xvzwqnFVSOZ5fD3blKODXfsat5fg6ZhnKu9LK3lB4/).

For both the Prometheus rules and the Grafana dashboards, Kubernetes `ConfigMap`s are generated within kube-prometheus. In order to add additional rules and dashboards, simply merge them onto the existing json objects. This document illustrates examples for rules as well as dashboards.

All examples in this guide build on the base example of the kube-prometheus [readme](../../README.md):

```jsonnet mdox-exec="cat example.jsonnet"
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  // (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
```

## Prometheus rules

### Alerting rules

According to the [Prometheus Monitoring Mixins proposal](https://docs.google.com/document/d/1A9xvzwqnFVSOZ5fD3blKODXfsat5fg6ZhnKu9LK3lB4/), Prometheus alerting rules live under the key `prometheusAlerts` in the top level object, so in order to add an additional alerting rule, we can simply merge an extra rule into the existing object.

The format is exactly the Prometheus format, so there should be no changes necessary should you have existing rules that you want to include.

> Note that alerts can just as well be included in this file, using the jsonnet `import` function. In this example they are inlined in order to demonstrate their use in a single file.

```jsonnet mdox-exec="cat examples/prometheus-additional-alert-rule-example.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
  },
  exampleApplication: {
    prometheusRuleExample: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'PrometheusRule',
      metadata: {
        name: 'my-prometheus-rule',
        namespace: $.values.common.namespace,
      },
      spec: {
        groups: [
          {
            name: 'example-group',
            rules: [
              {
                alert: 'ExampleAlert',
                expr: 'vector(1)',
                labels: {
                  severity: 'warning',
                },
                annotations: {
                  description: 'This is an example alert.',
                },
              },
            ],
          },
        ],
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```

### Recording rules

In order to add a recording rule, simply do the same with the `prometheusRules` field.

> Note that rules can just as well be included in this file, using the jsonnet `import` function. In this example they are inlined in order to demonstrate their use in a single file.

```jsonnet mdox-exec="cat examples/prometheus-additional-recording-rule-example.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
  },
  exampleApplication: {
    prometheusRuleExample: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'PrometheusRule',
      metadata: {
        name: 'my-prometheus-rule',
        namespace: $.values.common.namespace,
      },
      spec: {
        groups: [
          {
            name: 'example-group',
            rules: [
              {
                record: 'some_recording_rule_name',
                expr: 'vector(1)',
              },
            ],
          },
        ],
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```

### Pre-rendered rules

We acknowledge that users may need to migrate existing rules, and therefore allow adding pre-rendered rules as well. Luckily the yaml and json formats are very close, so the yaml rules just need to be converted to json without any manual interaction. All that is needed is a tool to convert yaml to json:

```
go get -u -v github.com/brancz/gojsontoyaml
```

And convert the existing rule file:

```
cat existingrule.yaml | gojsontoyaml -yamltojson > existingrule.json
```

Then import it in jsonnet:

```jsonnet mdox-exec="cat examples/prometheus-additional-rendered-rule-example.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
  },
  exampleApplication: {
    prometheusRuleExample: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'PrometheusRule',
      metadata: {
        name: 'my-prometheus-rule',
        namespace: $.values.common.namespace,
      },
      spec: {
        groups: (import 'existingrule.json').groups,
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```

### Changing default rules

Along with adding additional rules, we give the user the option to filter or adjust the existing rules imported by `kube-prometheus/main.libsonnet`. The recording rules can be found in [kube-prometheus/components/mixin/rules](../../jsonnet/kube-prometheus/components/mixin/rules) and [kubernetes-mixin/rules](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/rules), while the alerting rules can be found in [kube-prometheus/components/mixin/alerts](../../jsonnet/kube-prometheus/components/mixin/alerts) and [kubernetes-mixin/alerts](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/alerts).

Knowing which rules to change, the user can now use functions from the [Jsonnet standard library](https://jsonnet.org/ref/stdlib.html) to make these changes. Below are examples of both a filter and an adjustment being made to the default rules. These changes can be assigned to a local variable and then added to the `local kp` object as seen in the examples above.

#### Filter

Here the alert `KubeStatefulSetReplicasMismatch` is being filtered out of the group `kubernetes-apps`. The default rule can be seen [here](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/alerts/apps_alerts.libsonnet). You first need to find out in which component the rule is defined (here it is `kubernetesControlPlane`).

```jsonnet
local filter = {
  kubernetesControlPlane+: {
    prometheusRule+: {
      spec+: {
        groups: std.map(
          function(group)
            if group.name == 'kubernetes-apps' then
              group {
                rules: std.filter(
                  function(rule)
                    rule.alert != 'KubeStatefulSetReplicasMismatch',
                  group.rules
                ),
              }
            else
              group,
          super.groups
        ),
      },
    },
  },
};
```

#### Adjustment

Here the expression of another alert in the same component is updated from its previous value. The default rule can be seen [here](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/alerts/apps_alerts.libsonnet).

```jsonnet
local update = {
  kubernetesControlPlane+: {
    prometheusRule+: {
      spec+: {
        groups: std.map(
          function(group)
            if group.name == 'kubernetes-apps' then
              group {
                rules: std.map(
                  function(rule)
                    if rule.alert == 'KubePodCrashLooping' then
                      rule {
                        expr: 'rate(kube_pod_container_status_restarts_total{namespace="kube-system",job="kube-state-metrics"}[10m]) * 60 * 5 > 0',
                      }
                    else
                      rule,
                  group.rules
                ),
              }
            else
              group,
          super.groups
        ),
      },
    },
  },
};
```

Using the example from above about adding pre-rendered rules, the new local variables can be added as follows:

```jsonnet
local add = {
  exampleApplication:: {
    prometheusRule+: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'PrometheusRule',
      metadata: {
        name: 'example-application-rules',
        namespace: $.values.common.namespace,
      },
      spec: (import 'existingrule.json'),
    },
  },
};

local kp = (import 'kube-prometheus/main.libsonnet') +
  filter +
  update +
  add + {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['exampleApplication-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```

## Dashboards

Dashboards can be added either using jsonnet or as pre-rendered json dashboards.

### Jsonnet dashboard

We recommend using the [grafonnet](https://github.com/grafana/grafonnet-lib/) library for jsonnet, which gives you a simple DSL to generate Grafana dashboards. Following the [Prometheus Monitoring Mixins proposal](https://docs.google.com/document/d/1A9xvzwqnFVSOZ5fD3blKODXfsat5fg6ZhnKu9LK3lB4/), additional dashboards are added to the `grafanaDashboards` key, located in the top level object. To add a new jsonnet dashboard, simply merge one in.

> Note that dashboards can just as well be included in this file, using the jsonnet `import` function. In this example they are inlined in order to demonstrate their use in a single file.

```jsonnet mdox-exec="cat examples/grafana-additional-jsonnet-dashboard-example.jsonnet"
local grafana = import 'grafonnet/grafana.libsonnet';
local dashboard = grafana.dashboard;
local row = grafana.row;
local prometheus = grafana.prometheus;
local template = grafana.template;
local graphPanel = grafana.graphPanel;

local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+:: {
      namespace: 'monitoring',
    },
    grafana+: {
      dashboards+:: {
        'my-dashboard.json':
          dashboard.new('My Dashboard')
          .addTemplate(
            {
              current: {
                text: 'Prometheus',
                value: 'Prometheus',
              },
              hide: 0,
              label: null,
              name: 'datasource',
              options: [],
              query: 'prometheus',
              refresh: 1,
              regex: '',
              type: 'datasource',
            },
          )
          .addRow(
            row.new()
            .addPanel(graphPanel.new('My Panel', span=6, datasource='$datasource')
                      .addTarget(prometheus.target('vector(1)')))
          ),
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

### Pre-rendered Grafana dashboards

As jsonnet is a superset of json, the jsonnet `import` function can be used to include Grafana dashboard json blobs. In this example we are importing a [provided example dashboard](../../examples/example-grafana-dashboard.json).

```jsonnet mdox-exec="cat examples/grafana-additional-rendered-dashboard-example.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+:: {
      namespace: 'monitoring',
    },
    grafana+: {
      dashboards+:: {  // use this method to import your dashboards to Grafana
        'my-dashboard.json': (import 'example-grafana-dashboard.json'),
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

In case you have lots of json dashboards exported from the Grafana UI, the above approach can take a lot of time. To improve performance, we can use the `rawDashboards` field and provide its value as a json string by using `importstr`:

```jsonnet mdox-exec="cat examples/grafana-additional-rendered-dashboard-example-2.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+:: {
      namespace: 'monitoring',
    },
    grafana+: {
      rawDashboards+:: {
        'my-dashboard.json': (importstr 'example-grafana-dashboard.json'),
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

### Mixins

Kube-prometheus ships with a couple of default mixins, such as the Kubernetes mixin and the node-exporter mixin, but there [are many more mixins](https://monitoring.mixins.dev/). To use other mixins, kube-prometheus has a jsonnet library for creating a Kubernetes `PrometheusRule` CRD and Grafana dashboards from a mixin. Below is an example of creating a mixin object that has Prometheus rules and Grafana dashboards:

```jsonnet
// Import the library function for adding mixins
local addMixin = (import 'kube-prometheus/lib/mixin.libsonnet');

// Create your mixin
local myMixin = addMixin({
  name: 'myMixin',
  mixin: import 'my-mixin/mixin.libsonnet',
});
```

The `myMixin` object will have two fields, `prometheusRules` and `grafanaDashboards`. The `grafanaDashboards` object needs to be added to the `dashboards` field, as in the example below:

```jsonnet
values+:: {
  grafana+:: {
    dashboards+:: myMixin.grafanaDashboards,
  },
},
```

The `prometheusRules` object is a `PrometheusRule` Kubernetes CRD and should be rendered as its own jsonnet object. If you define multiple mixins in a single jsonnet object, there is a possibility that they will overwrite each other's configuration and there will be unintended effects. Therefore, use the `prometheusRules` object as its own jsonnet object:

```jsonnet
...
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ 'external-mixins/my-mixin-prometheus-rules': myMixin.prometheusRules }  // one object for each mixin
```

As mentioned above, each mixin is configurable, and you would configure the mixin as in the example below:

```jsonnet
local myMixin = addMixin({
  name: 'myMixin',
  mixin: (import 'my-mixin/mixin.libsonnet') + {
    _config+:: {
      myMixinSelector: 'my-selector',
      interval: '30d',  // example
    },
  },
});
```

The library also has two optional parameters: the namespace for the `PrometheusRule` CRD and the dashboard folder for the Grafana dashboards. The example below shows how to use both:

```jsonnet
local myMixin = addMixin({
  name: 'myMixin',
  namespace: 'prometheus',  // default is monitoring
  dashboardFolder: 'Observability',
  mixin: (import 'my-mixin/mixin.libsonnet') + {
    _config+:: {
      myMixinSelector: 'my-selector',
      interval: '30d',  // example
    },
  },
});
```

The created `prometheusRules` object will have the metadata field `namespace` added, and its usage remains the same. The `grafanaDashboards`, however, will be added to the `folderDashboards` field instead of the `dashboards` field, as shown in the example below:

```jsonnet
values+:: {
  grafana+:: {
    folderDashboards+:: {
      Kubernetes: {
        ...
      },
      Misc: {
        'grafana-home.json': import 'dashboards/misc/grafana-home.json',
      },
    } + myMixin.grafanaDashboards,
  },
},
```

A full example of including the etcd mixin using the method described above:

```jsonnet mdox-exec="cat examples/mixin-inclusion.jsonnet"
local addMixin = (import 'kube-prometheus/lib/mixin.libsonnet');
local etcdMixin = addMixin({
  name: 'etcd',
  mixin: (import 'github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet') + {
    _config+: {},  // mixin configuration object
  },
});

local kp = (import 'kube-prometheus/main.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
      grafana+: {
        // Adding new dashboard to grafana. This will modify the grafana configMap with dashboards
        dashboards+: etcdMixin.grafanaDashboards,
      },
    },
  };

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
// Rendering the prometheusRules object. This is an object compatible with the prometheus-operator CRD definition for prometheusRule
{ 'external-mixins/etcd-mixin-prometheus-rules': etcdMixin.prometheusRules }
```

@@ -0,0 +1,122 @@
---
weight: 500
toc: true
title: Expose via Ingress
menu:
  docs:
    parent: kube
lead: How to set up a Kubernetes Ingress to expose Prometheus, Alertmanager and Grafana.
images: []
draft: false
description: How to set up a Kubernetes Ingress to expose Prometheus, Alertmanager and Grafana.
date: "2021-03-08T23:04:32+01:00"
---

In order to access the web interfaces via the Internet, [Kubernetes Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a popular option. This guide explains how a Kubernetes Ingress can be set up in order to expose the Prometheus, Alertmanager and Grafana UIs included in the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project.

Note: before continuing, it is recommended to first get familiar with the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) stack by itself.

## Prerequisites

Apart from a running Kubernetes cluster with a running [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) stack, a Kubernetes Ingress controller must be installed and functional. This guide was tested with the [nginx-ingress-controller](https://github.com/kubernetes/ingress-nginx). If you wish to reproduce the exact result depicted in this guide, we recommend using the nginx-ingress-controller.

## Setting up Ingress

The setup of Ingress objects is the same for Prometheus, Alertmanager and Grafana. Therefore this guide demonstrates it in detail for Prometheus; it can easily be adapted for the other applications.

As monitoring data may contain sensitive data, this guide describes how to set up Ingress with basic auth as an example of minimal security. Of course this should be adapted to the preferred authentication means of any particular organization, but we feel it is important to at least provide an example with a minimum of security.

In order to set up basic auth, a secret with the `htpasswd`-formatted file needs to be created. To do this, first install the [`htpasswd`](https://httpd.apache.org/docs/2.4/programs/htpasswd.html) tool.

To create the `htpasswd` formatted file called `auth` run:

```
htpasswd -c auth <username>
```

In order to use this, a secret needs to be created containing the `htpasswd`-formatted file, and basic auth can then be configured with annotations on the Ingress object.
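
For illustration, such a secret could also be created imperatively with kubectl. This is a minimal sketch, assuming the stack runs in the `monitoring` namespace and the secret name `basic-auth` matches the Ingress annotations used below (the jsonnet example further down generates the same secret declaratively instead):

```
kubectl create secret generic basic-auth --from-file=auth --namespace monitoring
```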

Also, the applications provide external links to themselves in alerts and various places. When an ingress is used in front of the applications, these links need to be based on the external URLs. This can be configured for each application in jsonnet.

```jsonnet
local k = import 'ksonnet/ksonnet.beta.4/k.libsonnet';  // assumed source of `k.core.v1.list` used below; adjust to your vendored path
local kp =
  (import 'kube-prometheus/kube-prometheus.libsonnet') +
  {
    _config+:: {
      namespace: 'monitoring',
    },
    prometheus+:: {
      prometheus+: {
        spec+: {
          externalUrl: 'http://prometheus.example.com',
        },
      },
    },
    ingress+:: {
      'prometheus-k8s': {
        apiVersion: 'networking.k8s.io/v1',
        kind: 'Ingress',
        metadata: {
          name: $.prometheus.prometheus.metadata.name,
          namespace: $.prometheus.prometheus.metadata.namespace,
          annotations: {
            'nginx.ingress.kubernetes.io/auth-type': 'basic',
            'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
            'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
          },
        },
        spec: {
          rules: [{
            host: 'prometheus.example.com',
            http: {
              paths: [{
                backend: {
                  service: {
                    name: $.prometheus.service.metadata.name,
                    port: { name: 'web' },
                  },
                },
              }],
            },
          }],
        },
      },
    },
  } + {
    ingress+:: {
      'basic-auth-secret': {
        apiVersion: 'v1',
        kind: 'Secret',
        metadata: {
          name: 'basic-auth',
          namespace: $._config.namespace,
        },
        data: { auth: std.base64(importstr 'auth') },
        type: 'Opaque',
      },
    },
  };

k.core.v1.list.new([
  kp.ingress['prometheus-k8s'],
  kp.ingress['basic-auth-secret'],
])
```

In order to expose Alertmanager and Grafana, simply create additional fields containing an ingress object, pointing at the `alertmanager` or `grafana` Service instead of the `prometheus-k8s` one. Make sure to also use the correct port respectively: for Alertmanager it is also `web`, for Grafana it is `http`. Be sure to also specify the appropriate external URL. Note that the external URL for Grafana is set in a different way than the external URL for Prometheus or Alertmanager; see [ingress.jsonnet](../../examples/ingress.jsonnet) for how to set the Grafana external URL. An Alertmanager entry is sketched below.
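
As an illustration, an Alertmanager ingress could look like the following sketch. The host name is hypothetical, and the `$.alertmanager.*` references assume the same object layout the Prometheus example above relies on:

```jsonnet
ingress+:: {
  alertmanager: {
    apiVersion: 'networking.k8s.io/v1',
    kind: 'Ingress',
    metadata: {
      name: $.alertmanager.service.metadata.name,
      namespace: $.alertmanager.service.metadata.namespace,
      // basic-auth annotations as in the Prometheus example above
    },
    spec: {
      rules: [{
        host: 'alertmanager.example.com',  // hypothetical external host
        http: {
          paths: [{
            backend: {
              service: {
                name: $.alertmanager.service.metadata.name,
                port: { name: 'web' },  // Alertmanager also serves on the 'web' port
              },
            },
          }],
        },
      }],
    },
  },
},
```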

In order to render the ingress objects along with the other objects, use the pattern demonstrated in the [main readme](../../README.md):

```
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['ingress-' + name]: kp.ingress[name] for name in std.objectFields(kp.ingress) }
```

Note that in comparison only the last line was added; the rest is identical to the original.

See [ingress.jsonnet](../../examples/ingress.jsonnet) for an example implementation.

docs/customizations/monitoring-additional-namespaces.md (new file)

@@ -0,0 +1,81 @@
### Monitoring additional namespaces

In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from those namespaces. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$.values.namespace`. This is specified in `$.values.prometheus.namespaces`; to add new namespaces to monitor, simply append them:

```jsonnet mdox-exec="cat examples/additional-namespaces.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },

    prometheus+: {
      namespaces+: ['my-namespace', 'my-second-namespace'],
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

#### Defining the ServiceMonitor for each additional Namespace

For Prometheus to be able to discover and scrape services inside the additional namespaces specified in the previous step, you need to define a ServiceMonitor resource.

> Typically it is up to the users of a namespace to provision the ServiceMonitor resource, but in case you want to generate it with the same tooling as the rest of the cluster monitoring infrastructure, this is a guide on how to achieve this.

You can define ServiceMonitor resources in your `jsonnet` spec. See the snippet below:

```jsonnet mdox-exec="cat examples/additional-namespaces-servicemonitor.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
    prometheus+:: {
      namespaces+: ['my-namespace', 'my-second-namespace'],
    },
  },
  exampleApplication: {
    serviceMonitorMyNamespace: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'ServiceMonitor',
      metadata: {
        name: 'my-servicemonitor',
        namespace: 'my-namespace',
      },
      spec: {
        jobLabel: 'app',
        endpoints: [
          {
            port: 'http-metrics',
          },
        ],
        selector: {
          matchLabels: {
            'app.kubernetes.io/name': 'myapp',
          },
        },
      },
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```

> NOTE: Make sure your service resources have the right labels applied (e.g. `'app.kubernetes.io/name': 'myapp'`, matching the selector above). Prometheus uses Kubernetes labels to discover resources inside the namespaces.

docs/customizations/monitoring-all-namespaces.md (new file)

@@ -0,0 +1,29 @@
### Monitoring all namespaces

In case you want to monitor all namespaces in a cluster, you can add the following mixin. Also, make sure to empty the namespaces defined in prometheus so that RoleBindings are not created against them.

```jsonnet mdox-exec="cat examples/all-namespaces.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/all-namespaces.libsonnet') + {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
      prometheus+: {
        namespaces: [],
      },
    },
  };

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

> NOTE: This configuration can potentially make your cluster insecure, especially in a multi-tenant cluster. This is because it gives Prometheus visibility over the whole cluster, which might not be expected in a scenario where certain namespaces are locked down for security reasons.

Proceed with [creating ServiceMonitors for the services in the namespaces](monitoring-additional-namespaces.md#defining-the-servicemonitor-for-each-additional-namespace) you actually want to monitor.

docs/customizations/node-ports.md (new file)

@@ -0,0 +1,8 @@
### NodePorts

Another mixin that may be useful when exploring the stack is one that exposes the UIs of Prometheus, Alertmanager and Grafana on NodePorts:

```jsonnet mdox-exec="cat examples/jsonnet-snippets/node-ports.jsonnet"
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/node-ports.libsonnet')
```

docs/customizations/platform-specific.md (new file)

@@ -0,0 +1,25 @@
### Running kube-prometheus on specific platforms

Not all Kubernetes clusters are created exactly the same way, meaning the configuration to monitor them may be slightly different. For the following clusters there are mixins available to easily configure them:

* aws
* bootkube
* eks
* gke
* kops
* kops_coredns
* kubeadm
* kubespray

These mixins are selectable via the `platform` field of kubePrometheus, as shown in the snippet below (a concrete example follows):

```jsonnet mdox-exec="cat examples/jsonnet-snippets/platform.jsonnet"
(import 'kube-prometheus/main.libsonnet') +
{
  values+:: {
    common+: {
      platform: 'example-platform',
    },
  },
}
```
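
For example, to select the kubeadm patches (one of the platforms listed above), the same snippet with a concrete value would read:

```jsonnet
(import 'kube-prometheus/main.libsonnet') +
{
  values+:: {
    common+: {
      platform: 'kubeadm',
    },
  },
}
```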

docs/customizations/pod-anti-affinity.md (new file)

@@ -0,0 +1,23 @@
### Pod Anti-Affinity

To prevent `Prometheus` and `Alertmanager` instances from being deployed onto the same node when possible, one can include the [anti-affinity.libsonnet](../../jsonnet/kube-prometheus/addons/anti-affinity.libsonnet) mixin:

```jsonnet mdox-exec="cat examples/anti-affinity.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/anti-affinity.libsonnet') + {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

docs/customizations/static-etcd-configuration.md (new file)

@@ -0,0 +1,66 @@
### Static etcd configuration

In order to configure a static etcd cluster to scrape, there is a simple [static-etcd.libsonnet](../../jsonnet/kube-prometheus/addons/static-etcd.libsonnet) mixin prepared.

An example of how to use it can be seen below:

```jsonnet mdox-exec="cat examples/etcd.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/static-etcd.libsonnet') + {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },

      etcd+: {
        // Configure this to be the IP(s) to scrape - i.e. your etcd node(s) (use commas to separate multiple values).
        ips: ['127.0.0.1'],

        // Reference info:
        //  * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec (has endpoints)
        //  * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint (has tlsConfig)
        //  * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#tlsconfig (has: caFile, certFile, keyFile, serverName, & insecureSkipVerify)

        // Set these three variables to the fully qualified directory path on your work machine to the certificate files that are valid to scrape etcd metrics with (check the apiserver container).
        // Most likely these certificates are generated somewhere in an infrastructure repository, so using the jsonnet `importstr` function can
        // be useful here. (Kube-aws stores these three files inside the credential folder.)
        // All the sensitive information on the certificates will end up in a Kubernetes Secret.
        clientCA: importstr 'etcd-client-ca.crt',
        clientKey: importstr 'etcd-client.key',
        clientCert: importstr 'etcd-client.crt',

        // Note that you should specify a value EITHER for 'serverName' OR for 'insecureSkipVerify'. (Don't specify a value for both of them, and don't specify a value for neither of them.)
        //  * Specifying serverName: Ideally you should provide a valid value for serverName (and then insecureSkipVerify should be left as false - so that serverName gets used).
        //  * Specifying insecureSkipVerify: insecureSkipVerify is only to be used (i.e. set to true) if you cannot (based on how your etcd certificates were created) use a Subject Alternative Name.
        //  * If you specify a value:
        //    ** for both of these variables: When 'insecureSkipVerify: true' is specified, then also specifying a value for serverName won't hurt anything but it will be ignored.
        //    ** for neither of these variables: then you'll get authentication errors on the prom '/targets' page with your etcd targets.

        // A valid name (DNS or Subject Alternative Name) that the client (i.e. prometheus) will use to verify the etcd TLS certificate.
        //  * Note that doing `nslookup etcd.kube-system.svc.cluster.local` (on a pod in a K8s cluster where kube-prometheus has been installed) shows that kube-prometheus sets up this hostname.
        //  * `openssl x509 -noout -text -in etcd-client.pem` will print the Subject Alternative Names.
        serverName: 'etcd.kube-system.svc.cluster.local',

        // When insecureSkipVerify isn't specified, the default value is "false".
        //insecureSkipVerify: true,

        // In case you have generated the etcd certificate with kube-aws:
        //  * If you only have one etcd node, you can use the value from 'etcd.internalDomainName' (specified in your kube-aws cluster.yaml) as the value for 'serverName'.
        //  * But if you have multiple etcd nodes, you will need to use 'insecureSkipVerify: true' (if using default certificate generators method), as the valid certificate domain
        //    will be different for each etcd node. (kube-aws default certificates are not valid against the IP - they were created for the DNS.)
      },
    },
  };

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

If you'd like to monitor an etcd instance that lives outside the cluster, see [Monitoring external etcd](../monitoring-external-etcd.md) for more information.

> Note that monitoring etcd in minikube is currently not possible because of how etcd is set up. (minikube's etcd binds to 127.0.0.1:2379 only, and within the host networking namespace.)

docs/customizations/strip-limits.md (new file)

@@ -0,0 +1,23 @@
### Stripping container resource limits

Sometimes, in small clusters, the CPU/memory limits can get high enough for alerts to be fired continuously. To prevent this, one can strip off the predefined limits by importing the following mixin:

```jsonnet mdox-exec="cat examples/strip-limits.jsonnet"
local kp = (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/strip-limits.libsonnet') + {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

docs/customizations/using-custom-container-registry.md (new file)

@@ -0,0 +1,39 @@
### Internal Registry

Some Kubernetes installations source all their images from an internal registry. kube-prometheus supports this use case, helping the user synchronize every image it uses to the internal registry and generating manifests pointing at the internal registry.

To produce the `docker pull/tag/push` commands that will synchronize upstream images to `internal-registry.com/organization` (after having run the `jb` command to populate the vendor directory):

```shell
$ jsonnet -J vendor -S --tla-str repository=internal-registry.com/organization sync-to-internal-registry.jsonnet
$ docker pull k8s.gcr.io/addon-resizer:1.8.4
$ docker tag k8s.gcr.io/addon-resizer:1.8.4 internal-registry.com/organization/addon-resizer:1.8.4
$ docker push internal-registry.com/organization/addon-resizer:1.8.4
$ docker pull quay.io/prometheus/alertmanager:v0.16.2
$ docker tag quay.io/prometheus/alertmanager:v0.16.2 internal-registry.com/organization/alertmanager:v0.16.2
$ docker push internal-registry.com/organization/alertmanager:v0.16.2
...
```

The output of this command can be piped to a shell to be executed by appending `| sh`.
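
For example, combining the command above with the pipe just described:

```shell
$ jsonnet -J vendor -S --tla-str repository=internal-registry.com/organization sync-to-internal-registry.jsonnet | sh
```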
Then, to generate manifests using `internal-registry.com/organization`, use the `withImageRepository` mixin:

```jsonnet mdox-exec="cat examples/internal-registry.jsonnet"
local mixin = import 'kube-prometheus/addons/config-mixins.libsonnet';
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
  },
} + mixin.withImageRepository('internal-registry.com/organization');

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```