Compare commits


35 Commits

Author SHA1 Message Date
Paweł Krupa
7f94cfff2e Merge pull request #1494 from PhilipGough/revert-1406-dropped-cadvisor-metrics-6 2021-11-24 13:13:18 +01:00
Philip Gough
6783b6df04 Revert "Adjust dropped metrics from cAdvisor" 2021-11-10 10:30:04 +00:00
Paweł Krupa
9d5c3cece3 Merge pull request #1440 from prometheus-operator/automated-updates-release-0.6 2021-10-18 10:42:18 +02:00
dgrisonnet
a0999ff8d3 [bot] [release-0.6] Automated version update 2021-10-18 07:39:25 +00:00
Damien Grisonnet
5d07b5d659 Merge pull request #1429 from prometheus-operator/automated-updates-release-0.6
[bot] [release-0.6] Automated version update
2021-10-12 09:20:11 +02:00
dgrisonnet
8e257a945f [bot] [release-0.6] Automated version update 2021-10-11 07:39:26 +00:00
Damien Grisonnet
0cbec5b320 Merge pull request #1406 from PhilipGough/dropped-cadvisor-metrics-6
This change drops pod-centric metrics without a non-empty 'container'…
2021-09-28 12:02:28 +02:00
Philip Gough
d714141304 This change drops pod-centric metrics without a non-empty 'container' label.
Previously we dropped pod-centric metrics without a (pod, namespace) label set;
however, these can be critical for debugging.

Keep 'container_fs_.*' metrics from cAdvisor
2021-09-28 10:53:20 +01:00
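A hedged jsonnet sketch of the kind of patch this commit describes; the `serviceMonitorKubelet` object name follows the usual release-0.6 layout, and the metric-name list is illustrative rather than the verbatim change:

```jsonnet
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  prometheus+: {
    serviceMonitorKubelet+: {
      spec+: {
        endpoints: std.map(
          function(e)
            if std.objectHas(e, 'path') && e.path == '/metrics/cadvisor'
            then e {
              metricRelabelings+: [{
                // '__name__' and 'container' are joined with ';', so the
                // trailing ';' matches series whose 'container' label is
                // empty. container_fs_.* is deliberately not listed, which
                // keeps the filesystem metrics mentioned above.
                sourceLabels: ['__name__', 'container'],
                action: 'drop',
                regex: '(container_cpu_.*|container_memory_.*|container_network_.*);',
              }],
            }
            else e,
          super.endpoints
        ),
      },
    },
  },
};
kp.prometheus.serviceMonitorKubelet
```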
Arthur Silva Sens
ccdb3781ca Merge pull request #1355 from PhilipGough/bz-199074
jsonnet: Drop cAdvisor metrics with no (pod, namespace) labels while …
2021-09-02 17:17:13 -03:00
Philip Gough
47e55a460e jsonnet: Drop cAdvisor metrics with no (pod, namespace) labels while preserving ability to monitor system services resource usage
The following provides a description and cardinality estimation based on the tests in a local cluster:

container_blkio_device_usage_total - useful for containers, but not for system services (nodes*disks*services*operations*2)
container_fs_.*                    - add filesystem read/write data (nodes*disks*services*4)
container_file_descriptors         - file descriptor limits and global numbers are exposed via (nodes*services)
container_threads_max              - max number of threads in cgroup. Usually for system services it is not limited (nodes*services)
container_threads                  - used threads in cgroup. Usually not important for system services (nodes*services)
container_sockets                  - used sockets in cgroup. Usually not important for system services (nodes*services)
container_start_time_seconds       - container start. Possibly not needed for system services (nodes*services)
container_last_seen                - Not needed as system services are always running (nodes*services)
container_spec_.*                  - Everything related to cgroup specification and thus static data (nodes*services*5)
2021-08-30 12:45:20 +01:00
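The corresponding relabeling entry plausibly looks like the following sketch (metric list abridged from the table above; not the verbatim rule from this commit):

```jsonnet
// The source labels '__name__', 'pod', and 'namespace' are joined with ';',
// so the trailing ';;' matches series where both 'pod' and 'namespace' are
// empty, i.e. the low-value system-service cgroup series called out above.
{
  sourceLabels: ['__name__', 'pod', 'namespace'],
  action: 'drop',
  regex: '(' + std.join('|', [
    'container_blkio_device_usage_total',
    'container_file_descriptors',
    'container_sockets',
    'container_threads.*',
    'container_start_time_seconds',
    'container_last_seen',
    'container_spec_.*',
  ]) + ');;',
}
```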
Damien Grisonnet
dcc97b6f38 Merge pull request #1329 from prometheus-operator/automated-updates-release-0.6
[bot] [release-0.6] Automated version update
2021-08-17 10:19:16 +02:00
dgrisonnet
684dbc0bd5 [bot] [release-0.6] Automated version update 2021-08-17 08:05:35 +00:00
Paweł Krupa
80a830422d Merge pull request #1311 from dgrisonnet/ci-release-0.6 2021-08-16 10:16:32 +02:00
Philip Gough
3c8b17cb0b ci: Harden action to wait for kind cluster readiness 2021-08-09 18:23:03 +02:00
Damien Grisonnet
24e0682b05 ci: replace travis CI by github actions
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-09 18:22:01 +02:00
Paweł Krupa
8cb3f62e67 Merge pull request #1296 from dgrisonnet/1245-release-0.6
release-0.6: *: add "update" target to makefile and use it in automatic updater
2021-08-02 17:56:50 +02:00
paulfantom
e364778771 *: add "update" target to makefile and use it in automatic updater
Signed-off-by: paulfantom <pawel@krupa.net.pl>
2021-08-02 12:41:58 +02:00
Paweł Krupa
95ba62c107 Merge pull request #976 from vshn/0.6/pin-dependencies
[release-0.6] Pin Jsonnet dependencies
2021-02-24 15:21:42 +01:00
Simon Rüegg
a09aff9709 [release-0.6] Pin Jsonnet dependencies
Pin all Jsonnet dependencies to current commit SHA.

Signed-off-by: Simon Rüegg <simon@rueggs.ch>
2021-02-24 15:01:02 +01:00
Paweł Krupa
b2b90f25b8 Merge pull request #935 from underrun/fix-etcd-mixin-move
Pin etcd-mixin to last working version for release 0.6
2021-02-11 23:09:25 +01:00
Derek Wilson
d9465ce7a3 pin etcd-mixin to last working version for release
etcd refactored their repo, moving and renaming etcd-mixin. The
jsonnetfile depended on "master" even though the lock was for an older
version. Checking out from the last commit before the move works.
2021-02-11 18:08:25 +00:00
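For illustration, pinning a jsonnet-bundler dependency to a commit looks roughly like this `jsonnetfile.json`-style fragment (JSON, hence also valid jsonnet); the remote, subdir, and SHA are placeholders, not the actual pin from this commit:

```jsonnet
{
  version: 1,
  dependencies: [
    {
      source: {
        git: {
          remote: 'https://github.com/coreos/etcd',
          subdir: 'Documentation/etcd-mixin',
        },
      },
      // Pin an explicit commit instead of a branch name, so upstream
      // refactors (like the etcd-mixin move) cannot break `jb install`.
      version: 'deadbeefdeadbeefdeadbeefdeadbeefdeadbeef',
    },
  ],
}
```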
Lili Cosic
f69ff3d63d Merge pull request #687 from lilic/bump-to-patch-prom-operator
[release-0.6]: Bump to prometheus-operator 0.42.1
2020-09-23 13:55:22 +02:00
Lili Cosic
5550829c60 manifests: Regenerate 2020-09-23 11:45:08 +02:00
Lili Cosic
8b07a38917 jsonnetfile.lock.json: jb update 2020-09-23 11:42:25 +02:00
Sergiusz Urbaniak
3dfe4ee112 Merge pull request #669 from lilic/test-against-1.19
Test against 1.18 and 1.19
2020-09-22 10:10:38 +02:00
Sergiusz Urbaniak
09199d875c Merge pull request #684 from s-urbaniak/release-0.6
jsonnet: bump to prometheus-operator 0.42
2020-09-21 11:27:51 +02:00
Sergiusz Urbaniak
05b7a932ab jsonnet: bump to prometheus-operator 0.42 2020-09-21 10:51:24 +02:00
Lili Cosic
3bb92838bf Test against 1.18 and 1.19 2020-09-09 18:09:31 +02:00
Sergiusz Urbaniak
82fe2a3b06 Merge pull request #665 from s-urbaniak/node-exporter-max-unavailable-0.6
Backport: node-exporter: set maxUnavailable to 10%
2020-09-03 10:48:05 +02:00
Scott Dodson
87fabbc077 node-exporter: set maxUnavailable to 10%
This daemonset doesn't affect workload availability so allow its rollout to
be parallelized.
2020-09-03 10:01:53 +02:00
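A sketch of what such a patch looks like in this project's jsonnet, assuming the release-0.6 object layout with a top-level `nodeExporter.daemonset`; the commit itself may have changed the default instead of patching it:

```jsonnet
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  nodeExporter+: {
    daemonset+: {
      spec+: {
        updateStrategy+: {
          type: 'RollingUpdate',
          // Roll up to 10% of the nodes at a time: node-exporter does not
          // affect workload availability, so a parallel rollout is safe.
          rollingUpdate: { maxUnavailable: '10%' },
        },
      },
    },
  },
};
kp.nodeExporter.daemonset
```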
Frederic Branczyk
f1d92c8a80 Merge pull request #635 from lilic/cherry-pick-alerts
[release-0.6] jsonnet/prometheus-operator.libsonnet: Adjust alerts range
2020-08-06 15:40:04 +02:00
Lili Cosic
09a305bc0e manifests/prometheus-rules.yaml: Regenerate 2020-08-06 15:08:37 +02:00
Lili Cosic
f8b4c681a6 jsonnet/prometheus-operator.libsonnet: Adjust alerts range 2020-08-06 15:06:53 +02:00
Lili Cosic
85e22303f0 Merge pull request #627 from lilic/pin-mixin
Pin kube-mixin project to latest release
2020-08-03 13:33:17 +02:00
Lili Cosic
c0ff2c9f2d Pin kube-mixin project to latest release 2020-08-03 13:24:32 +02:00
236 changed files with 16005 additions and 17111 deletions

.github/workflows/ci.yaml

@@ -4,7 +4,7 @@ on:
- pull_request
env:
golang-version: '1.15'
kind-version: 'v0.9.0'
kind-version: 'v0.11.1'
jobs:
generate:
runs-on: ${{ matrix.os }}
@@ -16,35 +16,15 @@ jobs:
name: Generate
steps:
- uses: actions/checkout@v2
with:
persist-credentials: false
- uses: actions/setup-go@v2
with:
go-version: ${{ env.golang-version }}
- run: make --always-make generate validate && git diff --exit-code
lint:
runs-on: ubuntu-latest
name: Jsonnet linter
steps:
- uses: actions/checkout@v2
with:
persist-credentials: false
- run: make --always-make lint
fmt:
runs-on: ubuntu-latest
name: Jsonnet formatter
steps:
- uses: actions/checkout@v2
with:
persist-credentials: false
- run: make --always-make fmt && git diff --exit-code
- run: make --always-make generate && git diff --exit-code
unit-tests:
runs-on: ubuntu-latest
name: Unit tests
steps:
- uses: actions/checkout@v2
with:
persist-credentials: false
- run: make --always-make test
e2e-tests:
name: E2E tests
@@ -52,24 +32,17 @@ jobs:
strategy:
matrix:
kind-image:
- 'kindest/node:v1.20.0'
# - 'kindest/node:v1.21.0' #TODO(paulfantom): enable as soon as image is available
- 'kindest/node:v1.19.11'
steps:
- uses: actions/checkout@v2
with:
persist-credentials: false
- name: Start KinD
uses: engineerd/setup-kind@v0.5.0
with:
version: ${{ env.kind-version }}
image: ${{ matrix.kind-image }}
wait: 300s
- name: Wait for cluster to finish bootstraping
run: |
until [ "$(kubectl get pods --all-namespaces --no-headers | grep -cEv '([0-9]+)/\1')" -eq 0 ]; do
sleep 5s
done
kubectl cluster-info
kubectl get pods -A
run: kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=300s
- name: Create kube-prometheus stack
run: |
kubectl create -f manifests/setup

.gitignore (1 change)

@@ -3,4 +3,3 @@ minikube-manifests/
vendor/
./auth
.swp
crdschemas/

.gitpod.yml

@@ -1,26 +0,0 @@
tasks:
- init: |
make --always-make
export PATH="$(pwd)/tmp/bin:${PATH}"
cat > ${PWD}/.git/hooks/pre-commit <<EOF
#!/bin/bash
echo "Checking jsonnet fmt"
make fmt > /dev/null 2>&1
echo "Checking if manifests are correct"
make generate > /dev/null 2>&1
git diff --exit-code
if [[ \$? == 1 ]]; then
echo "
This commit is being rejected because the YAML manifests are incorrect or jsonnet needs to be formatted."
echo "Please commit your changes again!"
exit 1
fi
EOF
chmod +x ${PWD}/.git/hooks/pre-commit
vscode:
extensions:
- heptio.jsonnet@0.1.0:woEDU5N62LRdgdz0g/I6sQ==

Makefile

@@ -1,15 +1,15 @@
SHELL=/bin/bash -o pipefail
export GO111MODULE=on
BIN_DIR?=$(shell pwd)/tmp/bin
EMBEDMD_BIN=$(BIN_DIR)/embedmd
JB_BIN=$(BIN_DIR)/jb
GOJSONTOYAML_BIN=$(BIN_DIR)/gojsontoyaml
JSONNET_BIN=$(BIN_DIR)/jsonnet
JSONNETLINT_BIN=$(BIN_DIR)/jsonnet-lint
JSONNETFMT_BIN=$(BIN_DIR)/jsonnetfmt
KUBECONFORM_BIN=$(BIN_DIR)/kubeconform
TOOLING=$(EMBEDMD_BIN) $(JB_BIN) $(GOJSONTOYAML_BIN) $(JSONNET_BIN) $(JSONNETLINT_BIN) $(JSONNETFMT_BIN) $(KUBECONFORM_BIN)
TOOLING=$(EMBEDMD_BIN) $(JB_BIN) $(GOJSONTOYAML_BIN) $(JSONNET_BIN) $(JSONNETFMT_BIN)
JSONNETFMT_ARGS=-n 2 --max-blank-lines 2 --string-style s --comment-style s
@@ -33,24 +33,15 @@ vendor: $(JB_BIN) jsonnetfile.json jsonnetfile.lock.json
rm -rf vendor
$(JB_BIN) install
crdschemas: vendor
./scripts/generate-schemas.sh
.PHONY: validate
validate: crdschemas manifests $(KUBECONFORM_BIN)
# Follow-up on https://github.com/instrumenta/kubernetes-json-schema/issues/26 if validations start failing
$(KUBECONFORM_BIN) -schema-location 'https://kubernetesjsonschema.dev' -schema-location 'crdschemas/{{ .ResourceKind }}.json' -skip CustomResourceDefinition manifests/
.PHONY: update
update: $(JB_BIN)
$(JB_BIN) update
.PHONY: fmt
fmt: $(JSONNETFMT_BIN)
find . -name 'vendor' -prune -o -name '*.libsonnet' -print -o -name '*.jsonnet' -print | \
find . -name 'vendor' -prune -o -name '*.libsonnet' -o -name '*.jsonnet' -print | \
xargs -n 1 -- $(JSONNETFMT_BIN) $(JSONNETFMT_ARGS) -i
.PHONY: lint
lint: $(JSONNETLINT_BIN) vendor
find jsonnet/ -name 'vendor' -prune -o -name '*.libsonnet' -print -o -name '*.jsonnet' -print | \
xargs -n 1 -- $(JSONNETLINT_BIN) -J vendor
.PHONY: test
test: $(JB_BIN)
$(JB_BIN) install
@@ -65,4 +56,4 @@ $(BIN_DIR):
$(TOOLING): $(BIN_DIR)
@echo Installing tools from scripts/tools.go
@cd scripts && cat tools.go | grep _ | awk -F'"' '{print $$2}' | xargs -tI % go build -modfile=go.mod -o $(BIN_DIR) %
@cat scripts/tools.go | grep _ | awk -F'"' '{print $$2}' | GOBIN=$(BIN_DIR) xargs -tI % go install %

OWNERS (new file, 14 additions)

@@ -0,0 +1,14 @@
reviewers:
- brancz
- metalmatze
- mxinden
- s-urbaniak
- squat
- paulfantom
approvers:
- brancz
- metalmatze
- mxinden
- s-urbaniak
- squat
- paulfantom

README.md (414 changes)

@@ -1,9 +1,5 @@
# kube-prometheus
[![Build Status](https://github.com/prometheus-operator/kube-prometheus/workflows/ci/badge.svg)](https://github.com/prometheus-operator/kube-prometheus/actions)
[![Slack](https://img.shields.io/badge/join%20slack-%23prometheus--operator-brightgreen.svg)](http://slack.k8s.io/)
[![Gitpod ready-to-code](https://img.shields.io/badge/Gitpod-ready--to--code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/prometheus-operator/kube-prometheus)
> Note that everything is experimental and may change significantly at any time.
This repository collects Kubernetes manifests, [Grafana](http://grafana.com/) dashboards, and [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with [Prometheus](https://prometheus.io/) using the Prometheus Operator.
@@ -12,7 +8,7 @@ The content of this project is written in [jsonnet](http://jsonnet.org/). This p
Components included in this package:
* The [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)
* The [Prometheus Operator](https://github.com/coreos/prometheus-operator)
* Highly available [Prometheus](https://prometheus.io/)
* Highly available [Alertmanager](https://github.com/prometheus/alertmanager)
* [Prometheus node-exporter](https://github.com/prometheus/node_exporter)
@@ -22,14 +18,9 @@ Components included in this package:
This stack is meant for cluster monitoring, so it is pre-configured to collect metrics from all Kubernetes components. In addition to that it delivers a default set of dashboards and alerting rules. Many of the useful dashboards and alerts come from the [kubernetes-mixin project](https://github.com/kubernetes-monitoring/kubernetes-mixin), which, similar to this project, provides composable jsonnet as a library for users to customize to their needs.
## Warning
If you are migrating from `release-0.7` branch or earlier please read [what changed and how to migrate in our guide](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/migration-guide.md).
## Table of contents
- [kube-prometheus](#kube-prometheus)
- [Warning](#warning)
- [Table of contents](#table-of-contents)
- [Prerequisites](#prerequisites)
- [minikube](#minikube)
@@ -56,22 +47,17 @@ If you are migrating from `release-0.7` branch or earlier please read [what chan
- [Alertmanager configuration](#alertmanager-configuration)
- [Adding additional namespaces to monitor](#adding-additional-namespaces-to-monitor)
- [Defining the ServiceMonitor for each additional Namespace](#defining-the-servicemonitor-for-each-additional-namespace)
- [Monitoring all namespaces](#monitoring-all-namespaces)
- [Static etcd configuration](#static-etcd-configuration)
- [Pod Anti-Affinity](#pod-anti-affinity)
- [Stripping container resource limits](#stripping-container-resource-limits)
- [Customizing Prometheus alerting/recording rules and Grafana dashboards](#customizing-prometheus-alertingrecording-rules-and-grafana-dashboards)
- [Exposing Prometheus/Alermanager/Grafana via Ingress](#exposing-prometheusalermanagergrafana-via-ingress)
- [Setting up a blackbox exporter](#setting-up-a-blackbox-exporter)
- [Minikube Example](#minikube-example)
- [Continuous Delivery](#continuous-delivery)
- [Troubleshooting](#troubleshooting)
- [Error retrieving kubelet metrics](#error-retrieving-kubelet-metrics)
- [Authentication problem](#authentication-problem)
- [Authorization problem](#authorization-problem)
- [kube-state-metrics resource usage](#kube-state-metrics-resource-usage)
- [Contributing](#contributing)
- [License](#license)
## Prerequisites
@@ -80,7 +66,7 @@ You will need a Kubernetes cluster, that's it! By default it is assumed, that th
This means the kubelet configuration must contain these flags:
* `--authentication-token-webhook=true` This flag enables, that a `ServiceAccount` token can be used to authenticate against the kubelet(s). This can also be enabled by setting the kubelet configuration value `authentication.webhook.enabled` to `true`.
* `--authorization-mode=Webhook` This flag enables, that the kubelet will perform an RBAC request with the API to determine, whether the requesting entity (Prometheus in this case) is allowed to access a resource, in specific for this project the `/metrics` endpoint. This can also be enabled by setting the kubelet configuration value `authorization.mode` to `Webhook`.
* `--authorization-mode=Webhook` This flag enables, that the kubelet will perform an RBAC request with the API to determine, whether the requesting entity (Prometheus in this case) is allow to access a resource, in specific for this project the `/metrics` endpoint. This can also be enabled by setting the kubelet configuration value `authorization.mode` to `Webhook`.
This stack provides [resource metrics](https://github.com/kubernetes/metrics#resource-metrics-api) by deploying the [Prometheus Adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter/).
This adapter is an Extension API Server and Kubernetes needs to have this feature enabled, otherwise the adapter has no effect, but is still deployed.
@@ -90,7 +76,7 @@ This adapter is an Extension API Server and Kubernetes needs to be have this fea
To try out this stack, start [minikube](https://github.com/kubernetes/minikube) with the following command:
```shell
$ minikube delete && minikube start --kubernetes-version=v1.20.0 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
$ minikube delete && minikube start --kubernetes-version=v1.18.1 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
```
The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:
@@ -105,17 +91,19 @@ $ minikube addons disable metrics-server
The following versions are supported and work as we test against these versions in their respective branches. But note that other versions might work!
| kube-prometheus stack | Kubernetes 1.18 | Kubernetes 1.19 | Kubernetes 1.20 | Kubernetes 1.21 |
|-----------------------|-----------------|-----------------|-----------------|-----------------|
| `release-0.5` | ✔ | ✗ | ✗ | ✗ |
| `release-0.6` | ✗ | ✔ | ✗ | ✗ |
| `release-0.7` | ✗ | ✔ | ✔ | ✗ |
| `release-0.8` | ✗ | ✗ | ✔ | ✔ |
| `HEAD` | ✗ | ✗ | ✔ | ✔ |
| kube-prometheus stack | Kubernetes 1.14 | Kubernetes 1.15 | Kubernetes 1.16 | Kubernetes 1.17 | Kubernetes 1.18 | Kubernetes 1.19 |
|-----------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| `release-0.3` | ✔ | ✔ | ✔ | ✔ | ✗ | ✗ |
| `release-0.4` | ✗ | ✗ | ✔ (v1.16.5+) | ✔ | ✗ | ✗ |
| `release-0.5` | ✗ | ✗ | ✗ | ✗ | ✔ | ✗ |
| `release-0.6` | ✗ | ✗ | ✗ | ✗ | ✔ | ✔ |
| `HEAD` | ✗ | ✗ | ✗ | ✗ | ✔ | ✗ |
Note: Due to [two](https://github.com/kubernetes/kubernetes/issues/83778) [bugs](https://github.com/kubernetes/kubernetes/issues/86359) in Kubernetes v1.16.1, and prior to Kubernetes v1.16.5 the kube-prometheus release-0.4 branch only supports v1.16.5 and higher. The `extension-apiserver-authentication-reader` role in the kube-system namespace can be manually edited to include list and watch permissions in order to workaround the second issue with Kubernetes v1.16.2 through v1.16.4.
## Quickstart
>Note: For versions before Kubernetes v1.20.z refer to the [Kubernetes compatibility matrix](#kubernetes-compatibility-matrix) in order to choose a compatible branch.
>Note: For versions before Kubernetes v1.18.z refer to the [Kubernetes compatibility matrix](#kubernetes-compatibility-matrix) in order to choose a compatible branch.
This project is intended to be used as a library (i.e. the intent is not for you to create your own modified copy of this repository).
@@ -123,7 +111,7 @@ Though for a quickstart a compiled version of the Kubernetes [manifests](manifes
* Create the monitoring stack using the config in the `manifests` directory:
```shell
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
# Create the namespace and CRDs, and then wait for them to be availble before creating the remaining resources
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
@@ -184,15 +172,12 @@ Install this library in your own project with [jsonnet-bundler](https://github.c
$ mkdir my-kube-prometheus; cd my-kube-prometheus
$ jb init # Creates the initial/empty `jsonnetfile.json`
# Install the kube-prometheus dependency
$ jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@release-0.7 # Creates `vendor/` & `jsonnetfile.lock.json`, and fills in `jsonnetfile.json`
$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/release-0.7/example.jsonnet -O example.jsonnet
$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/release-0.7/build.sh -O build.sh
$ jb install github.com/coreos/kube-prometheus/jsonnet/kube-prometheus@release-0.4 # Creates `vendor/` & `jsonnetfile.lock.json`, and fills in `jsonnetfile.json`
```
> `jb` can be installed with `go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb`
> An e.g. of how to install a given version of this library: `jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@release-0.7`
> An e.g. of how to install a given version of this library: `jb install github.com/coreos/kube-prometheus/jsonnet/kube-prometheus@release-0.4`
In order to update the kube-prometheus dependency, simply use the jsonnet-bundler update functionality:
```shell
@@ -203,7 +188,7 @@ $ jb update
e.g. of how to compile the manifests: `./build.sh example.jsonnet`
> before compiling, install `gojsontoyaml` tool with `go get github.com/brancz/gojsontoyaml` and `jsonnet` with `go get github.com/google/go-jsonnet/cmd/jsonnet`
> before compiling, install `gojsontoyaml` tool with `go get github.com/brancz/gojsontoyaml`
Here's [example.jsonnet](example.jsonnet):
@@ -212,39 +197,33 @@ Here's [example.jsonnet](example.jsonnet):
[embedmd]:# (example.jsonnet)
```jsonnet
local kp =
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/kube-prometheus.libsonnet') +
// Uncomment the following imports to enable its patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
// (import 'kube-prometheus/addons/node-ports.libsonnet') +
// (import 'kube-prometheus/addons/static-etcd.libsonnet') +
// (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
// (import 'kube-prometheus/addons/external-metrics.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-thanos-sidecar.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-custom-metrics.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
},
_config+:: {
namespace: 'monitoring',
},
};
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) }
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
And here's the [build.sh](build.sh) script (which uses `vendor/` to render all manifests in a json structure of `{filename: manifest-content}`):
@@ -285,7 +264,7 @@ The previous steps (compilation) has created a bunch of manifest files in the ma
Now simply use `kubectl` to install Prometheus and Grafana as per your configuration:
```shell
# Update the namespace and CRDs, and then wait for them to be available before creating the remaining resources
# Update the namespace and CRDs, and then wait for them to be availble before creating the remaining resources
$ kubectl apply -f manifests/setup
$ kubectl apply -f manifests/
```
@@ -327,22 +306,74 @@ Once updated, just follow the instructions under "Compiling" and "Apply the kube
Jsonnet has the concept of hidden fields. These are fields that are not going to be rendered in a result. This is used to configure the kube-prometheus components in jsonnet. In the example jsonnet code of the above [Customizing Kube-Prometheus section](#customizing-kube-prometheus), you can see an example of this, where the `namespace` is being configured to be `monitoring`. In order to not override the whole object, use the `+::` construct of jsonnet to merge objects; this way you can override individual settings but retain all other settings and defaults.
The available fields and their default values can be seen in [main.libsonnet](jsonnet/kube-prometheus/main.libsonnet). Note that many of the fields get their default values from variables, and for example the version numbers are imported from [versions.json](jsonnet/kube-prometheus/versions.json).
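A tiny standalone illustration of the `+::` merge semantics described above (not kube-prometheus specific):

```jsonnet
local defaults = { config:: { namespace: 'default', replicas: 2 } };
// '+::' merges into the hidden 'config' field instead of replacing it,
// so 'replicas' keeps its default while 'namespace' is overridden.
local patched = defaults + { config+:: { namespace: 'monitoring' } };
{ result: patched.config }  // => { "result": { "namespace": "monitoring", "replicas": 2 } }
```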
Configuration is mainly done in the `values` map. You can see this being used in the `example.jsonnet` to set the namespace to `monitoring`. This is done in the `common` field, which all other components take their default value from. See for example how Alertmanager is configured in `main.libsonnet`:
These are the available fields with their respective default values:
```
alertmanager: {
name: 'main',
// Use the namespace specified under values.common by default.
namespace: $.values.common.namespace,
version: $.values.common.versions.alertmanager,
image: $.values.common.images.alertmanager,
mixin+: { ruleLabels: $.values.common.ruleLabels },
{
_config+:: {
namespace: "default",
versions+:: {
alertmanager: "v0.17.0",
nodeExporter: "v0.18.1",
kubeStateMetrics: "v1.5.0",
kubeRbacProxy: "v0.4.1",
prometheusOperator: "v0.30.0",
prometheus: "v2.10.0",
},
imageRepos+:: {
prometheus: "quay.io/prometheus/prometheus",
alertmanager: "quay.io/prometheus/alertmanager",
kubeStateMetrics: "quay.io/coreos/kube-state-metrics",
kubeRbacProxy: "quay.io/coreos/kube-rbac-proxy",
nodeExporter: "quay.io/prometheus/node-exporter",
prometheusOperator: "quay.io/coreos/prometheus-operator",
},
prometheus+:: {
names: 'k8s',
replicas: 2,
rules: {},
},
alertmanager+:: {
name: 'main',
config: |||
global:
resolve_timeout: 5m
route:
group_by: ['job']
group_wait: 30s
group_interval: 5m
repeat_interval: 12h
receiver: 'null'
routes:
- match:
alertname: Watchdog
receiver: 'null'
receivers:
- name: 'null'
|||,
replicas: 3,
},
kubeStateMetrics+:: {
collectors: '', // empty string gets a default set
scrapeInterval: '30s',
scrapeTimeout: '30s',
baseCPU: '100m',
baseMemory: '150Mi',
},
nodeExporter+:: {
port: 9100,
},
},
}
```
The grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana), but needed configuration can be customized from the same top level `values` field. For example to allow anonymous access to grafana, add the following `values` section:
The grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana), but needed configuration can be customized from the same top level `_config` field. For example to allow anonymous access to grafana, add the following `_config` section:
```
grafana+:: {
config: { // http://docs.grafana.org/installation/configuration/
@@ -359,28 +390,57 @@ Jsonnet is a turing complete language, any logic can be reflected in it. It also
### Cluster Creation Tools
A common example is that not all Kubernetes clusters are created exactly the same way, meaning the configuration to monitor them may be slightly different. For the following clusters there are mixins available to easily configure them:
A common example is that not all Kubernetes clusters are created exactly the same way, meaning the configuration to monitor them may be slightly different. For [kubeadm](examples/jsonnet-snippets/kubeadm.jsonnet), [bootkube](examples/jsonnet-snippets/bootkube.jsonnet), [kops](examples/jsonnet-snippets/kops.jsonnet) and [kubespray](examples/jsonnet-snippets/kubespray.jsonnet) clusters there are mixins available to easily configure these:
* aws
* bootkube
* eks
* gke
* kops-coredns
* kubeadm
* kubespray
kubeadm:
These mixins are selectable via the `platform` field of kubePrometheus:
[embedmd]:# (examples/jsonnet-snippets/platform.jsonnet)
[embedmd]:# (examples/jsonnet-snippets/kubeadm.jsonnet)
```jsonnet
(import 'kube-prometheus/main.libsonnet') +
{
values+:: {
kubePrometheus+: {
platform: 'example-platform',
},
},
}
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kubeadm.libsonnet')
```
bootkube:
[embedmd]:# (examples/jsonnet-snippets/bootkube.jsonnet)
```jsonnet
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-bootkube.libsonnet')
```
kops:
[embedmd]:# (examples/jsonnet-snippets/kops.jsonnet)
```jsonnet
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kops.libsonnet')
```
kops with CoreDNS:
If your kops cluster is using CoreDNS, there is an additional mixin to import.
[embedmd]:# (examples/jsonnet-snippets/kops-coredns.jsonnet)
```jsonnet
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kops.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kops-coredns.libsonnet')
```
kubespray:
[embedmd]:# (examples/jsonnet-snippets/kubespray.jsonnet)
```jsonnet
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kubespray.libsonnet')
```
kube-aws:
[embedmd]:# (examples/jsonnet-snippets/kube-aws.jsonnet)
```jsonnet
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kube-aws.libsonnet')
```
### Internal Registry
@@ -406,12 +466,10 @@ Then to generate manifests with `internal-registry.com/organization`, use the `w
[embedmd]:# (examples/internal-registry.jsonnet)
```jsonnet
local mixin = import 'kube-prometheus/addons/config-mixins.libsonnet';
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local mixin = import 'kube-prometheus/kube-prometheus-config-mixins.libsonnet';
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
} + mixin.withImageRepository('internal-registry.com/organization');
@@ -430,8 +488,8 @@ Another mixin that may be useful for exploring the stack is to expose the UIs of
[embedmd]:# (examples/jsonnet-snippets/node-ports.jsonnet)
```jsonnet
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/node-ports.libsonnet')
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-node-ports.libsonnet')
```
### Prometheus Object Name
@@ -440,7 +498,7 @@ To give another customization example, the name of the `Prometheus` object provi
[embedmd]:# (examples/prometheus-name-override.jsonnet)
```jsonnet
((import 'kube-prometheus/main.libsonnet') + {
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
prometheus+: {
prometheus+: {
metadata+: {
@@ -457,25 +515,25 @@ Standard Kubernetes manifests are all written using [ksonnet-lib](https://github
[embedmd]:# (examples/ksonnet-example.jsonnet)
```jsonnet
((import 'kube-prometheus/main.libsonnet') + {
local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
local daemonset = k.apps.v1beta2.daemonSet;
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
nodeExporter+: {
daemonset+: {
metadata+: {
namespace: 'my-custom-namespace',
},
},
daemonset+:
daemonset.mixin.metadata.withNamespace('my-custom-namespace'),
},
}).nodeExporter.daemonset
```
### Alertmanager configuration
The Alertmanager configuration is located in the `values.alertmanager.config` configuration field. In order to set a custom Alertmanager configuration simply set this field.
The Alertmanager configuration is located in the `_config.alertmanager.config` configuration field. In order to set a custom Alertmanager configuration simply set this field.
[embedmd]:# (examples/alertmanager-config.jsonnet)
```jsonnet
((import 'kube-prometheus/main.libsonnet') + {
values+:: {
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
alertmanager+: {
config: |||
global:
@@ -502,8 +560,8 @@ In the above example the configuration has been inlined, but can just as well be
[embedmd]:# (examples/alertmanager-config-external.jsonnet)
```jsonnet
((import 'kube-prometheus/main.libsonnet') + {
values+:: {
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
alertmanager+: {
config: importstr 'alertmanager-config.yaml',
},
@@ -513,17 +571,15 @@ In the above example the configuration has been inlined, but can just as well be
### Adding additional namespaces to monitor
In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$.values.namespace`. This is specified in `$.values.prometheus.namespaces`, to add new namespaces to monitor, simply append the additional namespaces:
In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$._config.namespace`. This is specified in `$._config.prometheus.namespaces`, to add new namespaces to monitor, simply append the additional namespaces:
[embedmd]:# (examples/additional-namespaces.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
prometheus+: {
prometheus+:: {
namespaces+: ['my-namespace', 'my-second-namespace'],
},
},
@@ -548,16 +604,14 @@ You can define ServiceMonitor resources in your `jsonnet` spec. See the snippet
[embedmd]:# (examples/additional-namespaces-servicemonitor.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
prometheus+:: {
namespaces+: ['my-namespace', 'my-second-namespace'],
},
},
exampleApplication: {
prometheus+:: {
serviceMonitorMyNamespace: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
@@ -583,36 +637,6 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```
> NOTE: make sure your service resources have the right labels (eg. `'app': 'myapp'`) applied. Prometheus uses kubernetes labels to discover resources inside the namespaces.
### Monitoring all namespaces
In case you want to monitor all namespaces in a cluster, you can add the following mixin. Also, make sure to empty the namespaces defined in prometheus so that roleBindings are not created against them.
[embedmd]:# (examples/all-namespaces.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/all-namespaces.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
prometheus+: {
namespaces: [],
},
},
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
@@ -622,9 +646,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
> NOTE: This configuration can potentially make your cluster insecure especially in a multi-tenant cluster. This is because this gives Prometheus visibility over the whole cluster which might not be expected in a scenario when certain namespaces are locked down for security reasons.
Proceed with [creating ServiceMonitors for the services in the namespaces](#defining-the-servicemonitor-for-each-additional-namespace) you actually want to monitor
> NOTE: make sure your service resources has the right labels (eg. `'app': 'myapp'`) applied. Prometheus use kubernetes labels to discovery resources inside the namespaces.
### Static etcd configuration
@@ -635,51 +657,11 @@ In order to configure a static etcd cluster to scrape there is a simple [kube-pr
### Pod Anti-Affinity
To prevent `Prometheus` and `Alertmanager` instances from being deployed onto the same node when
possible, one can include the [kube-prometheus-anti-affinity.libsonnet](jsonnet/kube-prometheus/addons/anti-affinity.libsonnet) mixin:
possible, one can include the [kube-prometheus-anti-affinity.libsonnet](jsonnet/kube-prometheus/kube-prometheus-anti-affinity.libsonnet) mixin:
[embedmd]:# (examples/anti-affinity.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/anti-affinity.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
},
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
### Stripping container resource limits
Sometimes in small clusters, the CPU/memory limits can get high enough for alerts to be fired continuously. To prevent this, one can strip off the predefined limits.
To do that, one can import the following mixin
[embedmd]:# (examples/strip-limits.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/strip-limits.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
},
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet')
```
### Customizing Prometheus alerting/recording rules and Grafana dashboards
@@ -690,48 +672,12 @@ See [developing Prometheus rules and Grafana dashboards](docs/developing-prometh
See [exposing Prometheus/Alertmanager/Grafana](docs/exposing-prometheus-alertmanager-grafana-ingress.md) guide.
### Setting up a blackbox exporter
```jsonnet
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
// ... all necessary mixins ...
{
values+:: {
// ... configuration for other features ...
blackboxExporter+:: {
modules+:: {
tls_connect: {
prober: 'tcp',
tcp: {
tls: true
}
}
}
}
}
};
{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
// ... other rendering blocks ...
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) }
```
Then describe the actual blackbox checks you want to run using `Probe` resources. Specify `blackbox-exporter.<namespace>.svc.cluster.local:9115` as the `spec.prober.url` field of the `Probe` resource.
See the [blackbox exporter guide](docs/blackbox-exporter.md) for the list of configurable options and a complete example.
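A hedged jsonnet rendering of such a `Probe`; the name, namespace, and targets are illustrative, and the referenced `http_2xx` module must exist among the exporter's configured modules:

```jsonnet
{
  apiVersion: 'monitoring.coreos.com/v1',
  kind: 'Probe',
  metadata: { name: 'example-com-website', namespace: 'monitoring' },
  spec: {
    interval: '60s',
    module: 'http_2xx',
    // Port per the prose above; depending on the exporter's TLS setup, the
    // plaintext internal port may differ (see the blackbox exporter guide).
    prober: { url: 'blackbox-exporter.monitoring.svc.cluster.local:9115' },
    targets: { staticConfig: { static: ['http://example.com', 'https://example.com'] } },
  },
}
```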
## Minikube Example
To use an easy to reproduce example, see [minikube.jsonnet](examples/minikube.jsonnet), which uses the minikube setup as demonstrated in [Prerequisites](#prerequisites). Because we would like easy access to our Prometheus, Alertmanager and Grafana UIs, `minikube.jsonnet` exposes the services as NodePort type services.
## Continuous Delivery
Working examples of use with continuous delivery tools are found in examples/continuous-delivery.
## Troubleshooting
See the general [guidelines](docs/community-support.md) for getting support from the community.
### Error retrieving kubelet metrics
Should the Prometheus `/targets` page show kubelet targets, but not be able to successfully scrape the metrics, then most likely it is a problem with the authentication and authorization setup of the kubelets.
@@ -757,7 +703,7 @@ resources. One driver for more resource needs, is a high number of
namespaces. There may be others.
kube-state-metrics resource allocation is managed by
[addon-resizer](https://github.com/kubernetes/autoscaler/tree/main/addon-resizer/nanny)
[addon-resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer/nanny)
You can control its parameters by setting variables in the
config. They default to:
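Presumably these mirror the `kubeStateMetrics` defaults shown earlier in this README excerpt; a hedged sketch:

```jsonnet
{
  _config+:: {
    kubeStateMetrics+:: {
      scrapeInterval: '30s',
      scrapeTimeout: '30s',
      baseCPU: '100m',      // base CPU request handed to addon-resizer
      baseMemory: '150Mi',  // base memory request handed to addon-resizer
    },
  },
}
```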
@@ -782,7 +728,3 @@ the following process:
3. Update the pinned kube-prometheus dependency in `jsonnetfile.lock.json`: `jb update`
3. Generate dependent `*.yaml` files: `make generate`
4. Commit the generated changes.
## License
Apache License 2.0, see [LICENSE](https://github.com/prometheus-operator/kube-prometheus/blob/main/LICENSE).

docs/EKS-cni-with-kube-prometheus.md

@@ -7,31 +7,23 @@ One fatal issue that can occur is that you run out of IP addresses in your eks c
You can monitor the `awscni` using kube-prometheus with:
[embedmd]:# (../examples/eks-cni-example.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
kubePrometheus+: {
platform: 'eks',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-eks.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
kubernetesControlPlane+: {
prometheusRuleEksCNI+: {
spec+: {
groups+: [
prometheusRules+:: {
groups+: [
{
name: 'example-group',
rules: [
{
name: 'example-group',
rules: [
{
record: 'aws_eks_available_ip',
expr: 'sum by(instance) (awscni_total_ip_addresses) - sum by(instance) (awscni_assigned_ip_addresses) < 10',
},
],
record: 'aws_eks_available_ip',
expr: 'sum by(instance) (awscni_total_ip_addresses) - sum by(instance) (awscni_assigned_ip_addresses) < 10',
},
],
},
},
],
},
};

docs/blackbox-exporter.md

@@ -1,97 +0,0 @@
---
title: "Blackbox Exporter"
description: "Generated API docs for the Prometheus Operator"
lead: "This Document documents the types introduced by the Prometheus Operator to be consumed by users."
date: 2021-03-08T08:49:31+00:00
lastmod: 2021-03-08T08:49:31+00:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 630
toc: true
---
# Setting up a blackbox exporter
The `prometheus-operator` defines a `Probe` resource type that can be used to describe blackbox checks. To execute these, a separate component called [`blackbox_exporter`](https://github.com/prometheus/blackbox_exporter) has to be deployed, which can be scraped to retrieve the results of these checks. You can use `kube-prometheus` to set up such a blackbox exporter within your Kubernetes cluster.
## Adding blackbox exporter manifests to an existing `kube-prometheus` configuration
1. Override blackbox-related configuration parameters as needed.
2. Add the following to the list of renderers to render the blackbox exporter manifests:
```
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) }
```
## Configuration parameters influencing the blackbox exporter
* `_config.namespace`: the namespace where the various generated resources (`ConfigMap`, `Deployment`, `Service`, `ServiceAccount` and `ServiceMonitor`) will reside. This does not affect where you can place `Probe` objects; that is determined by the configuration of the `Prometheus` resource. This option is shared with other `kube-prometheus` components; defaults to `default`.
* `_config.imageRepos.blackboxExporter`: the name of the blackbox exporter image to deploy. Defaults to `quay.io/prometheus/blackbox-exporter`.
* `_config.versions.blackboxExporter`: the tag of the blackbox exporter image to deploy. Defaults to the version `kube-prometheus` was tested with.
* `_config.imageRepos.configmapReloader`: the name of the ConfigMap reloader image to deploy. Defaults to `jimmidyson/configmap-reload`.
* `_config.versions.configmapReloader`: the tag of the ConfigMap reloader image to deploy. Defaults to the version `kube-prometheus` was tested with.
* `_config.resources.blackbox-exporter.requests`: the requested resources; this is used for each container. Defaults to `10m` CPU and `20Mi` RAM. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for details.
* `_config.resources.blackbox-exporter.limits`: the resource limits; this is used for each container. Defaults to `20m` CPU and `40Mi` RAM. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for details.
* `_config.blackboxExporter.port`: the exposed HTTPS port of the exporter. This is what Prometheus can scrape for metrics related to the blackbox exporter itself. Defaults to `9115`.
* `_config.blackboxExporter.internalPort`: the internal plaintext port of the exporter. Prometheus scrapes configured via `Probe` objects cannot access the HTTPS port right now, so you have to specify this port in the `url` field. Defaults to `19115`.
* `_config.blackboxExporter.replicas`: the number of exporter replicas to be deployed. Defaults to `1`.
* `_config.blackboxExporter.matchLabels`: map of the labels to be used to select resources belonging to the instance deployed. Defaults to `{ 'app.kubernetes.io/name': 'blackbox-exporter' }`
* `_config.blackboxExporter.assignLabels`: map of the labels applied to components of the instance deployed. Defaults to all the labels included in the `matchLabels` option, and additionally `app.kubernetes.io/version` is set to the version of the blackbox exporter.
* `_config.blackboxExporter.modules`: the modules available in the blackbox exporter installation, i.e. the types of checks it can perform. The default value includes most of the modules defined in the default blackbox exporter configuration: `http_2xx`, `http_post_2xx`, `tcp_connect`, `pop3s_banner`, `ssh_banner`, and `irc_banner`. `icmp` is omitted so the exporter can be run with minimum privileges, but you can add it back if needed - see the example below. See https://github.com/prometheus/blackbox_exporter/blob/master/CONFIGURATION.md for the configuration format, except you have to use JSON instead of YAML here.
* `_config.blackboxExporter.privileged`: whether the `blackbox-exporter` container should be running as non-root (`false`) or root with heavily-restricted capability set (`true`). Defaults to `true` if you have any ICMP modules defined (which need the extra permissions) and `false` otherwise.
## Complete example
```jsonnet
local kp =
(import 'kube-prometheus/kube-prometheus.libsonnet') +
{
_config+:: {
namespace: 'monitoring',
blackboxExporter+:: {
modules+:: {
icmp: {
prober: 'icmp',
},
},
},
},
};
{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
After installing the generated manifests, you can create `Probe` resources, for example:
```yaml
kind: Probe
apiVersion: monitoring.coreos.com/v1
metadata:
name: example-com-website
namespace: monitoring
spec:
interval: 60s
module: http_2xx
prober:
url: blackbox-exporter.monitoring.svc.cluster.local:19115
targets:
staticConfig:
static:
- http://example.com
- https://example.com
```

docs/community-support.md

@@ -1,84 +0,0 @@
# Community support
For bugs, you can use the GitHub [issue tracker](https://github.com/prometheus-operator/kube-prometheus/issues/new/choose).
For questions, you can use the GitHub [discussions forum](https://github.com/prometheus-operator/kube-prometheus/discussions).
Many of the `kube-prometheus` project's contributors and users can also be found on the #prometheus-operator channel of the [Kubernetes Slack][Kubernetes Slack].
`kube-prometheus` is the aggregation of many projects that all have different
channels to reach out for help and support. This community strives to
support all users and you should never be afraid of asking us first. However,
if your request relates specifically to one of the projects listed below, it is
often more efficient to reach out to the project directly. If you are unsure,
please feel free to open an issue in this repository and we will redirect you
if applicable.
## prometheus-operator
For documentation, check the project's [documentation directory](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation).
For questions, use the #prometheus-operator channel on the [Kubernetes Slack][Kubernetes Slack].
For bugs, use the GitHub [issue tracker](https://github.com/prometheus-operator/prometheus-operator/issues/new/choose).
## Prometheus, Alertmanager, node_exporter
For documentation, check the Prometheus [online docs](https://prometheus.io/docs/). There is a
[section](https://prometheus.io/docs/introduction/media/) with links to blog
posts, recorded talks and presentations. This [repository](https://github.com/roaldnefs/awesome-prometheus)
(not affiliated to the Prometheus project) has also a list of curated resources
related to the Prometheus ecosystem.
For questions, see the Prometheus [community page](https://prometheus.io/community/) for the various channels.
There is also a #prometheus channel on the [CNCF Slack][CNCF Slack].
## kube-state-metrics
For documentation, see the project's [docs directory](https://github.com/kubernetes/kube-state-metrics/tree/master/docs).
For questions, use the #kube-state-metrics channel on the [Kubernetes Slack][Kubernetes Slack].
For bugs, use the GitHub [issue tracker](https://github.com/kubernetes/kube-state-metrics/issues/new/choose).
## Kubernetes
For documentation, check the [Kubernetes docs](https://kubernetes.io/docs/home/).
For questions, use the [community forums](https://discuss.kubernetes.io/) and the [Kubernetes Slack][Kubernetes Slack]. Check also the [community page](https://kubernetes.io/community/#discuss).
For bugs, use the GitHub [issue tracker](https://github.com/kubernetes/kubernetes/issues/new/choose).
## Prometheus adapter
For documentation, check the project's [README](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/README.md).
For questions, use the #sig-instrumentation channel on the [Kubernetes Slack][Kubernetes Slack].
For bugs, use the GitHub [issue tracker](https://github.com/DirectXMan12/k8s-prometheus-adapter/issues/new).
## Grafana
For documentation, check the [Grafana docs](https://grafana.com/docs/grafana/latest/).
For questions, use the [community forums](https://community.grafana.com/).
For bugs, use the GitHub [issue tracker](https://github.com/grafana/grafana/issues/new/choose).
## kubernetes-mixin
For documentation, check the project's [README](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/README.md).
For questions, use #monitoring-mixins channel on the [Kubernetes Slack][Kubernetes Slack].
For bugs, use the GitHub [issue tracker](https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/new).
## Jsonnet
For documentation, check the [Jsonnet](https://jsonnet.org/) website.
For questions, use the [mailing list](https://groups.google.com/forum/#!forum/jsonnet).
[Kubernetes Slack]: https://slack.k8s.io/
[CNCF Slack]: https://slack.cncf.io/

docs/deploy-kind.md

@@ -1,19 +0,0 @@
---
title: "Deploy to kind"
description: "Deploy kube-prometheus to Kubernets kind."
lead: "Deploy kube-prometheus to Kubernets kind."
date: 2021-03-08T23:04:32+01:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 500
toc: true
---
---
Time to explain how!
Your chance of [**contributing**](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/deploy-kind.md)!

docs/developing-prometheus-rules-and-grafana-dashboards.md

@@ -1,16 +1,4 @@
---
title: "Prometheus Rules and Grafana Dashboards"
description: "Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus"
lead: "Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus"
date: 2021-03-08T23:04:32+01:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 650
toc: true
---
# Developing Prometheus Rules and Grafana Dashboards
`kube-prometheus` ships with a set of default [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [Grafana](http://grafana.com/) dashboards. At some point one might like to extend them; the purpose of this document is to explain how to do this.
@@ -23,39 +11,33 @@ As a basis, all examples in this guide are based on the base example of the kube
[embedmd]:# (../example.jsonnet)
```jsonnet
local kp =
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/kube-prometheus.libsonnet') +
// Uncomment the following imports to enable their patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
// (import 'kube-prometheus/addons/node-ports.libsonnet') +
// (import 'kube-prometheus/addons/static-etcd.libsonnet') +
// (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
// (import 'kube-prometheus/addons/external-metrics.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-thanos-sidecar.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-custom-metrics.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
},
_config+:: {
namespace: 'monitoring',
},
};
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) }
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
## Prometheus rules
@@ -70,40 +52,28 @@ The format is exactly the Prometheus format, so there should be no changes neces
[embedmd]:# (../examples/prometheus-additional-alert-rule-example.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
exampleApplication: {
prometheusRuleExample: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
name: 'my-prometheus-rule',
namespace: $.values.common.namespace,
},
spec: {
groups: [
prometheusAlerts+:: {
groups+: [
{
name: 'example-group',
rules: [
{
name: 'example-group',
rules: [
{
alert: 'ExampleAlert',
expr: 'vector(1)',
labels: {
severity: 'warning',
},
annotations: {
description: 'This is an example alert.',
},
},
],
alert: 'Watchdog',
expr: 'vector(1)',
labels: {
severity: 'none',
},
annotations: {
description: 'This is a Watchdog meant to ensure that the entire alerting pipeline is functional.',
},
},
],
},
},
],
},
};
@@ -114,8 +84,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
### Recording rules
@@ -126,34 +95,22 @@ In order to add a recording rule, simply do the same with the `prometheusRules`
[embedmd]:# (../examples/prometheus-additional-recording-rule-example.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
exampleApplication: {
prometheusRuleExample: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
name: 'my-prometheus-rule',
namespace: $.values.common.namespace,
},
spec: {
groups: [
prometheusRules+:: {
groups+: [
{
name: 'example-group',
rules: [
{
name: 'example-group',
rules: [
{
record: 'some_recording_rule_name',
expr: 'vector(1)',
},
],
record: 'some_recording_rule_name',
expr: 'vector(1)',
},
],
},
},
],
},
};
@@ -164,8 +121,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
### Pre-rendered rules
@@ -186,24 +142,12 @@ Then import it in jsonnet:
[embedmd]:# (../examples/prometheus-additional-rendered-rule-example.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
exampleApplication: {
prometheusRuleExample: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
name: 'my-prometheus-rule',
namespace: $.values.common.namespace,
},
spec: {
groups: (import 'existingrule.json').groups,
},
},
prometheusAlerts+:: {
groups+: (import 'existingrule.json').groups,
},
};
@@ -214,8 +158,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
### Changing default rules
@@ -305,37 +248,35 @@ local prometheus = grafana.prometheus;
local template = grafana.template;
local graphPanel = grafana.graphPanel;
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+:: {
namespace: 'monitoring',
},
grafana+: {
dashboards+:: {
'my-dashboard.json':
dashboard.new('My Dashboard')
.addTemplate(
{
current: {
text: 'Prometheus',
value: 'Prometheus',
},
hide: 0,
label: null,
name: 'datasource',
options: [],
query: 'prometheus',
refresh: 1,
regex: '',
type: 'datasource',
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
grafana+:: {
dashboards+:: {
'my-dashboard.json':
dashboard.new('My Dashboard')
.addTemplate(
{
current: {
text: 'Prometheus',
value: 'Prometheus',
},
)
.addRow(
row.new()
.addPanel(graphPanel.new('My Panel', span=6, datasource='$datasource')
.addTarget(prometheus.target('vector(1)')))
),
},
hide: 0,
label: null,
name: 'datasource',
options: [],
query: 'prometheus',
refresh: 1,
regex: '',
type: 'datasource',
},
)
.addRow(
row.new()
.addPanel(graphPanel.new('My Panel', span=6, datasource='$datasource')
.addTarget(prometheus.target('vector(1)')))
),
},
},
};
@@ -355,15 +296,16 @@ As jsonnet is a superset of json, the jsonnet `import` function can be used to i
[embedmd]:# (../examples/grafana-additional-rendered-dashboard-example.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+:: {
namespace: 'monitoring',
},
grafana+: {
dashboards+:: { // use this method to import your dashboards to Grafana
'my-dashboard.json': (import 'example-grafana-dashboard.json'),
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
grafanaDashboards+:: { // monitoring-mixin compatibility
'my-dashboard.json': (import 'example-grafana-dashboard.json'),
},
grafana+:: {
dashboards+:: { // use this method to import your dashboards to Grafana
'my-dashboard.json': (import 'example-grafana-dashboard.json'),
},
},
};
@@ -380,15 +322,13 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
In case you have lots of JSON dashboards exported from the Grafana UI, the above approach can take a lot of time. To improve performance, we can use the `rawDashboards` field and provide its value as a JSON string by using `importstr`.
[embedmd]:# (../examples/grafana-additional-rendered-dashboard-example-2.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+:: {
namespace: 'monitoring',
},
grafana+: {
rawDashboards+:: {
'my-dashboard.json': (importstr 'example-grafana-dashboard.json'),
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
grafana+:: {
rawDashboards+:: {
'my-dashboard.json': (importstr 'example-grafana-dashboard.json'),
},
},
};
@@ -401,81 +341,3 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
### Mixins
Kube-prometheus comes with a couple of default mixins, such as the Kubernetes-mixin and the Node-exporter mixin; however, there [are many more mixins](https://monitoring.mixins.dev/). To use other mixins, Kube-prometheus has a jsonnet library for creating a Kubernetes PrometheusRule CRD and Grafana dashboards from a mixin. Below is an example of creating a mixin object that has Prometheus rules and Grafana dashboards:
```jsonnet
// Import the library function for adding mixins
local addMixin = (import 'kube-prometheus/lib/mixin.libsonnet');
// Create your mixin
local myMixin = addMixin({
name: 'myMixin',
mixin: import 'my-mixin/mixin.libsonnet',
});
```
The `myMixin` object will contain two objects: `prometheusRules` and `grafanaDashboards`. The `grafanaDashboards` object needs to be added to the `dashboards` field, as in the example below:
```jsonnet
values+:: {
  grafana+:: {
    dashboards+:: myMixin.grafanaDashboards,
  },
},
```
The `prometheusRules` object is a PrometheusRule Kubernetes CRD and should be defined as its own jsonnet object. If you define multiple mixins in a single jsonnet object, they may overwrite each other's configuration, with unintended effects. Therefore, use the `prometheusRules` object as its own jsonnet object:
```jsonnet
...
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ 'external-mixins/my-mixin-prometheus-rules': myMixin.prometheusRules } // one object for each mixin
```
As mentioned above, each mixin is configurable; you would configure a mixin as in the example below:
```jsonnet
local myMixin = addMixin({
name: 'myMixin',
mixin: (import 'my-mixin/mixin.libsonnet') + {
_config+:: {
myMixinSelector: 'my-selector',
interval: '30d', // example
},
},
});
```
The library also has two optional parameters: the namespace for the `PrometheusRule` CRD and the dashboard folder for the Grafana dashboards. The example below shows how to use both:
```jsonnet
local myMixin = addMixin({
name: 'myMixin',
namespace: 'prometheus', // default is monitoring
dashboardFolder: 'Observability',
mixin: (import 'my-mixin/mixin.libsonnet') + {
_config+:: {
myMixinSelector: 'my-selector',
interval: '30d', // example
},
},
});
```
The created `prometheusRules` object will have the metadata field `namespace` added, and the usage remains the same. However, the `grafanaDashboards` will be added to the `folderDashboards` field instead of the `dashboards` field, as shown in the example below:
```jsonnet
values+:: {
  grafana+:: {
    folderDashboards+:: {
      Kubernetes: {
        ...
      },
      Misc: {
        'grafana-home.json': import 'dashboards/misc/grafana-home.json',
      },
    } + myMixin.grafanaDashboards,
  },
},
```

View File

@@ -1,24 +1,12 @@
---
title: "Expose via Ingress"
description: "How to setup a Kubernetes Ingress to expose the Prometheus, Alertmanager and Grafana."
lead: "How to setup a Kubernetes Ingress to expose the Prometheus, Alertmanager and Grafana."
date: 2021-03-08T23:04:32+01:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 500
toc: true
---
# Exposing Prometheus, Alertmanager and Grafana UIs via Ingress
To access the web interfaces via the Internet, [Kubernetes Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a popular option. This guide explains how a Kubernetes Ingress can be set up to expose the Prometheus, Alertmanager and Grafana UIs that are included in the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project.
To access the web interfaces via the Internet, [Kubernetes Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a popular option. This guide explains how a Kubernetes Ingress can be set up to expose the Prometheus, Alertmanager and Grafana UIs that are included in the [kube-prometheus](https://github.com/coreos/kube-prometheus) project.
Note: before continuing, it is recommended to first get familiar with the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) stack by itself.
Note: before continuing, it is recommended to first get familiar with the [kube-prometheus](https://github.com/coreos/kube-prometheus) stack by itself.
## Prerequisites
Apart from a running Kubernetes cluster with a running [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) stack, a Kubernetes Ingress controller must be installed and functional. This guide was tested with the [nginx-ingress-controller](https://github.com/kubernetes/ingress-nginx). If you wish to reproduce the exact result as depicted in this guide, we recommend using the nginx-ingress-controller.
Apart from a running Kubernetes cluster with a running [kube-prometheus](https://github.com/coreos/kube-prometheus) stack, a Kubernetes Ingress controller must be installed and functional. This guide was tested with the [nginx-ingress-controller](https://github.com/kubernetes/ingress-nginx). If you wish to reproduce the exact result as depicted in this guide, we recommend using the nginx-ingress-controller.
## Setting up Ingress
@@ -39,6 +27,13 @@ In order to use this a secret needs to be created containing the name of the `ht
Also, the applications provide external links to themselves in alerts and various places. When an ingress is used in front of the applications these links need to be based on the external URL's. This can be configured for each application in jsonnet.
```jsonnet
local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
local secret = k.core.v1.secret;
local ingress = k.extensions.v1beta1.ingress;
local ingressTls = ingress.mixin.spec.tlsType;
local ingressRule = ingress.mixin.spec.rulesType;
local httpIngressPath = ingressRule.mixin.http.pathsType;
local kp =
(import 'kube-prometheus/kube-prometheus.libsonnet') +
{
@@ -53,46 +48,30 @@ local kp =
},
},
ingress+:: {
'prometheus-k8s': {
apiVersion: 'networking.k8s.io/v1',
kind: 'Ingress',
metadata: {
name: $.prometheus.prometheus.metadata.name,
namespace: $.prometheus.prometheus.metadata.namespace,
annotations: {
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
},
},
spec: {
rules: [{
host: 'prometheus.example.com',
http: {
paths: [{
backend: {
service: {
name: $.prometheus.service.metadata.name,
port: 'web',
},
},
}],
},
}],
},
'prometheus-k8s':
ingress.new() +
ingress.mixin.metadata.withName($.prometheus.prometheus.metadata.name) +
ingress.mixin.metadata.withNamespace($.prometheus.prometheus.metadata.namespace) +
ingress.mixin.metadata.withAnnotations({
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
}) +
ingress.mixin.spec.withRules(
ingressRule.new() +
ingressRule.withHost('prometheus.example.com') +
ingressRule.mixin.http.withPaths(
httpIngressPath.new() +
httpIngressPath.mixin.backend.withServiceName($.prometheus.service.metadata.name) +
httpIngressPath.mixin.backend.withServicePort('web')
),
),
},
} + {
ingress+:: {
'basic-auth-secret': {
apiVersion: 'v1',
kind: 'Secret',
metadata: {
name: 'basic-auth',
namespace: $._config.namespace,
},
data: { auth: std.base64(importstr 'auth') },
type: 'Opaque',
},
'basic-auth-secret':
secret.new('basic-auth', { auth: std.base64(importstr 'auth') }) +
secret.mixin.metadata.withNamespace($._config.namespace),
},
};

View File

@@ -1,22 +1,15 @@
---
title: "Deploy to kubeadm"
description: "Deploy kube-prometheus to Kubernets kubeadm."
lead: "Deploy kube-prometheus to Kubernets kubeadm."
date: 2021-03-08T23:04:32+01:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 500
toc: true
---
<br>
<div class="alert alert-info" role="alert">
<i class="fa fa-exclamation-triangle"></i><b> Note:</b> Starting with v0.12.0, Prometheus Operator requires use of Kubernetes v1.7.x and up.
</div>
The [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) tool is linked by Kubernetes as the official way to deploy and manage self-hosted clusters. kubeadm does a lot of heavy lifting by automatically configuring your Kubernetes cluster with some common options. This guide is intended to show you how to deploy Prometheus, Prometheus Operator and Kube Prometheus to get you started monitoring your cluster that was deployed with kubeadm.
# Kube Prometheus on Kubeadm
The [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) tool is linked by Kubernetes as the official way to deploy and manage self-hosted clusters. Kubeadm does a lot of heavy lifting by automatically configuring your Kubernetes cluster with some common options. This guide is intended to show you how to deploy Prometheus, Prometheus Operator and Kube Prometheus to get you started monitoring your cluster that was deployed with Kubeadm.
This guide assumes you have a basic understanding of how to use the functionality the Prometheus Operator implements. If you haven't yet, we recommend reading through the [getting started guide](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md) as well as the [alerting guide](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md).
## kubeadm Pre-requisites
## Kubeadm Pre-requisites
This guide assumes you have some familiarity with `kubeadm` or at least have deployed a cluster using `kubeadm`. By default, `kubeadm` does not expose two of the services that we will be monitoring. Therefore, in order to get the most out of the `kube-prometheus` package, we need to make some quick tweaks to the Kubernetes cluster. Since we will be monitoring the `kube-controller-manager` and `kube-scheduler`, we must expose them to the cluster.
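Once those components are reachable, the kubeadm-specific scrape targets can be pulled in with the dedicated library. Below is a minimal sketch (not the full manifest listing), following the import pattern of the minikube example that appears later in this compare view:
```jsonnet
// Minimal sketch: layer the kubeadm-specific ServiceMonitors/Services on top
// of the base stack, and render only the prometheus manifests for brevity.
local kp =
  (import 'kube-prometheus/kube-prometheus.libsonnet') +
  (import 'kube-prometheus/kube-prometheus-kubeadm.libsonnet') + {
    _config+:: {
      namespace: 'monitoring',
    },
  };

{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) }
```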

View File

@@ -1,83 +0,0 @@
# Migration guide from release-0.7 and earlier
## Why?
Thanks to our community, we identified a lot of shortcomings of the previous design, ranging from issues with global state to UX problems. Hoping to fix at least some of those issues, we decided to do a complete refactor of the codebase.
## Overview
### Breaking Changes
- global `_config` object is removed and the new `values` object is a partial replacement
- `imageRepos` field was removed and the project no longer tries to compose image strings. Use `$.values.common.images` to override default images (see the sketch after this list).
- prometheus alerting and recording rules are split into multiple `PrometheusRule` objects
- kubernetes control plane ServiceMonitors and Services are now part of the new `kubernetesControlPlane` top-level object instead of the `prometheus` object
- `jsonnet/kube-prometheus/kube-prometheus.libsonnet` file was renamed to `jsonnet/kube-prometheus/main.libsonnet` and slimmed down to bare minimum
- `jsonnet/kube-prometheus/kube-prometheus-*.libsonnet` files were moved either to `jsonnet/kube-prometheus/addons/` or `jsonnet/kube-prometheus/platforms/`, depending on the feature they provided
- all component libraries are now function- and not object-based
- monitoring-mixins are included inside each component and not globally. `prometheusRules`, `prometheusAlerts`, and `grafanaDashboards` are accessible only per component via `mixin` object (ex. `$.alertmanager.mixin.prometheusAlerts`)
- default repository branch changed from `master` to `main`
- labels on resources have changed; `kubectl apply` will not work correctly because those fields are immutable. Deleting the resource before applying it again is a workaround if you are using the kubectl CLI. (This only applies to `Deployments` and `DaemonSets`.)
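For the `$.values.common.images` override mentioned above, here is a minimal sketch; the `prometheus` key and the registry/tag are illustrative assumptions, not values taken from this changeset:
```jsonnet
// Sketch only: override a default image through the new values API instead of
// the removed imageRepos field.
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    common+: {
      images+: {
        prometheus: 'internal-registry.com/prometheus:v2.29.1',  // hypothetical mirror and tag
      },
    },
  },
};
```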
### New Features
- concept of `addons`, `components`, and `platforms` was introduced
- all main `components` are now represented internally by a function with default values and required parameters (see #Component-configuration for more information)
- `$.values` holds main configuration parameters and should be used to set basic stack configuration.
- common parameters across all `components` are stored now in `$.values.common`
- removed dependency on deprecated ksonnet library
## Details
### Components, Addons, Platforms
Those concepts were already present in the repository, but it wasn't clear which file held what. After refactoring, we categorized the jsonnet code into three buckets and put them into separate directories:
- `components` - main building blocks for kube-prometheus, written as functions responsible for creating multiple objects representing kubernetes manifests. For example, all objects for the node_exporter deployment are bundled in the `components/node_exporter.libsonnet` library
- `addons` - everything that can enhance a kube-prometheus deployment. Those are small snippets of code, each adding a small feature, for example adding anti-affinity to pods via [`addons/anti-affinity.libsonnet`][antiaffinity]. Addons are meant to be used in an object-oriented way, like `local kp = (import 'kube-prometheus/main.libsonnet') + (import 'kube-prometheus/addons/all-namespaces.libsonnet')`
- `platforms` - currently these are `addons` specialized to allow deploying the kube-prometheus project on a specific platform.
### Component configuration
Refactoring the main components to use functions allowed us to define APIs for said components. Each function has a default set of parameters that can be overridden, or that are required to be set by a user. Those default parameters are represented in each component by a `defaults` map at the top of each library file, for example in [`node_exporter.libsonnet`][node_exporter_defaults_example].
This API is meant to ease the use of kube-prometheus, as parameters can be passed from a JSON file and don't need to be in jsonnet format. However, if you need to modify particular parts of the stack, jsonnet allows you to do this, and we are not restricting such access in any way. An example of such modifications can be seen in any of our `addons`, like the [`addons/anti-affinity.libsonnet`][antiaffinity] one.
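An illustrative sketch of that pattern (simplified for this guide, not the actual component code):
```jsonnet
// Sketch: a function-based component with a `defaults` map; `namespace` is a
// required parameter, `port` an overridable default.
local defaults = {
  namespace: error 'must provide namespace',
  port: 9100,
};

local exampleComponent = function(params) {
  local config = defaults + params,
  service: {
    apiVersion: 'v1',
    kind: 'Service',
    metadata: { name: 'example', namespace: config.namespace },
    spec: { ports: [{ name: 'metrics', port: config.port }] },
  },
};

exampleComponent({ namespace: 'monitoring' }).service
```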
### Mixin integration
Previously, the kube-prometheus project joined all mixins on a global level. However, with wider adoption of monitoring mixins this turned out to be a problem, which became especially apparent when two mixins started to use the same configuration field for different purposes. To fix this we moved all mixins into their own respective components:
- alertmanager mixin -> `alertmanager.libsonnet`
- kubernetes mixin -> `k8s-control-plane.libsonnet`
- kube-state-metrics mixin -> `kube-state-metrics.libsonnet`
- node_exporter mixin -> `node_exporter.libsonnet`
- prometheus and thanos sidecar mixins -> `prometheus.libsonnet`
- prometheus-operator mixin -> `prometheus-operator.libsonnet`
- kube-prometheus alerts and rules -> `components/mixin/custom.libsonnet`
> etcd mixin is a special case as we add it inside an `addon` in `addons/static-etcd.libsonnet`
This results in creating multiple `PrometheusRule` objects instead of having one giant object as before. It also means each mixin is configured separately and accessing mixin objects is done via `$.<component>.mixin`.
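For example, a short sketch of rendering one component's mixin output through that accessor:
```jsonnet
// Sketch: the alertmanager mixin's alerts, accessed per component as
// described above.
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: { common+: { namespace: 'monitoring' } },
};

{ 'alertmanager-mixin-alerts': kp.alertmanager.mixin.prometheusAlerts }
```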
## Examples
All examples from the `examples/` directory were adapted to the new codebase. [Please take a look at them for guidance](https://github.com/prometheus-operator/kube-prometheus/tree/main/examples)
## Advanced usage examples
For more advanced usage examples you can take a look at these two publicly available implementations:
- [thaum-xyz/ankhmorpork][thaum] - extending kube-prometheus to adapt to a required environment
- [openshift/cluster-monitoring-operator][openshift] - using kube-prometheus components as standalone libraries to build a custom solution
## Final note
Refactoring was a huge undertaking, and this document possibly doesn't describe the migration to the new stack in enough detail. If that is the case, please reach out to us by using the [GitHub discussions][discussions] feature or directly on the [#prometheus-operator kubernetes slack channel][slack].
[antiaffinity]: https://github.com/prometheus-operator/kube-prometheus/blob/main/jsonnet/kube-prometheus/addons/anti-affinity.libsonnet
[node_exporter_defaults_example]: https://github.com/prometheus-operator/kube-prometheus/blob/1d2a0e275af97948667777739a18b24464480dc8/jsonnet/kube-prometheus/components/node-exporter.libsonnet#L3-L34
[openshift]: https://github.com/openshift/cluster-monitoring-operator/pull/1044
[thaum]: https://github.com/thaum-xyz/ankhmorpork/blob/master/apps/monitoring/jsonnet
[discussions]: https://github.com/prometheus-operator/kube-prometheus/discussions
[slack]: http://slack.k8s.io/

View File

@@ -1,18 +1,5 @@
---
title: "Monitoring external etcd"
description: "This guide will help you monitor an external etcd cluster."
lead: "This guide will help you monitor an external etcd cluster."
date: 2021-03-08T23:04:32+01:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 640
toc: true
---
This applies when the etcd cluster is not hosted inside Kubernetes.
# Monitoring external etcd
This guide will help you monitor an external etcd cluster, i.e. one that is not hosted inside Kubernetes.
This is often the case with Kubernetes setups. This approach has been tested with kube-aws, but the same principles apply to other tools.
Note that [etcd.jsonnet](../examples/etcd.jsonnet) & [kube-prometheus-static-etcd.libsonnet](../jsonnet/kube-prometheus/kube-prometheus-static-etcd.libsonnet) (which are described by a section of the [Readme](../README.md#static-etcd-configuration)) do the following:

View File

@@ -1,17 +1,4 @@
---
title: "Monitoring other Namespaces"
description: "This guide will help you monitor applications in other Namespaces."
lead: "This guide will help you monitor applications in other Namespaces."
date: 2021-03-08T23:04:32+01:00
draft: false
images: []
menu:
docs:
parent: "kube"
weight: 640
toc: true
---
# Monitoring other Kubernetes Namespaces
This guide will help you monitor applications in other namespaces. By default, the RBAC rules are only enabled for the `default` and `kube-system` namespaces during install.
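A minimal sketch of extending that list, using the `_config+::` pattern from the examples later in this compare view:
```jsonnet
// Sketch: create RBAC rules and allow ServiceMonitor discovery for two
// additional namespaces.
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  _config+:: {
    namespace: 'monitoring',
    prometheus+:: {
      namespaces+: ['my-namespace', 'my-second-namespace'],
    },
  },
};
```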
# Setup

View File

@@ -17,42 +17,36 @@ Using kube-prometheus and kubectl you will be able to install the following for mon
[embedmd]:# (../examples/weave-net-example.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/weave-net/weave-net.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-weave-net.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
kubernetesControlPlane+: {
prometheusRuleWeaveNet+: {
spec+: {
groups: std.map(
function(group)
if group.name == 'weave-net' then
group {
rules: std.map(
function(rule)
if rule.alert == 'WeaveNetFastDPFlowsLow' then
rule {
expr: 'sum(weave_flows) < 20000',
}
else if rule.alert == 'WeaveNetIPAMUnreachable' then
rule {
expr: 'weave_ipam_unreachable_percentage > 25',
}
else
rule
,
group.rules
),
}
else
group,
super.groups
),
},
},
prometheusAlerts+:: {
groups: std.map(
function(group)
if group.name == 'weave-net' then
group {
rules: std.map(
function(rule)
if rule.alert == 'WeaveNetFastDPFlowsLow' then
rule {
expr: 'sum(weave_flows) < 20000',
}
else if rule.alert == 'WeaveNetIPAMUnreachable' then
rule {
expr: 'weave_ipam_unreachable_percentage > 25',
}
else
rule
,
group.rules
),
}
else
group,
super.groups
),
},
};

View File

@@ -1,22 +0,0 @@
# Windows
The [Windows addon](../examples/windows.jsonnet) adds the dashboards and rules from [kubernetes-monitoring/kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin#dashboards-for-windows-nodes).
Currently, Windows does not support running [windows_exporter](https://github.com/prometheus-community/windows_exporter) in a pod, so this addon uses [additional scrape configuration](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/additional-scrape-config.md) to set up a static config to scrape the node ports where windows_exporter is configured.
The addon requires you to specify the node IPs and ports where it can find windows_exporter. See the [full example](../examples/windows.jsonnet) for setup.
```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/windows.libsonnet') +
{
values+:: {
windowsScrapeConfig+:: {
static_configs: {
targets: ["10.240.0.65:5000", "10.240.0.63:5000"],
},
},
},
};
```

View File

@@ -1,34 +1,28 @@
local kp =
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/kube-prometheus.libsonnet') +
// Uncomment the following imports to enable their patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
// (import 'kube-prometheus/addons/node-ports.libsonnet') +
// (import 'kube-prometheus/addons/static-etcd.libsonnet') +
// (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
// (import 'kube-prometheus/addons/external-metrics.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-thanos-sidecar.libsonnet') +
// (import 'kube-prometheus/kube-prometheus-custom-metrics.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
},
_config+:: {
namespace: 'monitoring',
},
};
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) }
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

View File

@@ -1,13 +1,11 @@
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
prometheus+:: {
namespaces+: ['my-namespace', 'my-second-namespace'],
},
},
exampleApplication: {
prometheus+:: {
serviceMonitorMyNamespace: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
@@ -39,5 +37,4 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

View File

@@ -1,10 +1,8 @@
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
prometheus+: {
prometheus+:: {
namespaces+: ['my-namespace', 'my-second-namespace'],
},
},

View File

@@ -1,5 +1,5 @@
((import 'kube-prometheus/main.libsonnet') + {
values+:: {
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
alertmanager+: {
config: importstr 'alertmanager-config.yaml',
},

View File

@@ -1,5 +1,5 @@
((import 'kube-prometheus/main.libsonnet') + {
values+:: {
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
alertmanager+: {
config: |||
global:

View File

@@ -1,7 +1,6 @@
# external alertmanager yaml
global:
resolve_timeout: 10m
slack_api_url: url
route:
group_by: ['job']
group_wait: 30s
@@ -14,17 +13,3 @@ route:
receiver: 'null'
receivers:
- name: 'null'
- name: slack
slack_configs:
- channel: '#alertmanager-testing'
send_resolved: true
title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
text: |-
{{ range .Alerts }}
*Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
*Description:* {{ .Annotations.description }}
*Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}

View File

@@ -1,19 +0,0 @@
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/all-namespaces.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
prometheus+: {
namespaces: [],
},
},
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

View File

@@ -1,16 +0,0 @@
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/anti-affinity.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
},
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

View File

@@ -1,9 +0,0 @@
## ArgoCD Example
This is the simplest working example of an ArgoCD app. The JSON object built is now an array of objects, as that is the preferred format for ArgoCD.
Requirements:
**ArgoCD 1.7+**
Follow the vendor generation steps at the root of this repository and generate a `vendored` folder (referenced in `application.yaml`).

View File

@@ -1,25 +0,0 @@
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-prometheus
namespace: argocd
annotations:
recipients.argocd-notifications.argoproj.io: "slack:jenkins"
spec:
destination:
namespace: monitoring
server: https://kubernetes.default.svc
project: monitoring
source:
directory:
jsonnet:
libs:
- vendored
recurse: true
path: examples/continuous-delivery/argocd/kube-prometheus
repoURL: git@github.com:prometheus-operator/kube-prometheus.git
targetRevision: HEAD
syncPolicy:
automated: {}
---

View File

@@ -1,22 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
annotations:
recipients.argocd-notifications.argoproj.io: slack:alerts
generation: 1
name: monitoring
namespace: argocd
spec:
clusterResourceWhitelist:
- group: "*"
kind: "*"
description: "Monitoring Stack deployment"
destinations:
- namespace: kube-system
server: https://kubernetes.default.svc
- namespace: default
server: https://kubernetes.default.svc
- namespace: monitoring
server: https://kubernetes.default.svc
sourceRepos:
- git@github.com:prometheus-operator/kube-prometheus.git

View File

@@ -1,14 +0,0 @@
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
},
};
[kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus)] +
[kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator)] +
[kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter)] +
[kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics)] +
[kp.prometheus[name] for name in std.objectFields(kp.prometheus)] +
[kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter)]

View File

@@ -1,28 +1,20 @@
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
kubePrometheus+: {
platform: 'eks',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-eks.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
kubernetesControlPlane+: {
prometheusRuleEksCNI+: {
spec+: {
groups+: [
prometheusRules+:: {
groups+: [
{
name: 'example-group',
rules: [
{
name: 'example-group',
rules: [
{
record: 'aws_eks_available_ip',
expr: 'sum by(instance) (awscni_total_ip_addresses) - sum by(instance) (awscni_assigned_ip_addresses) < 10',
},
],
record: 'aws_eks_available_ip',
expr: 'sum by(instance) (awscni_total_ip_addresses) - sum by(instance) (awscni_assigned_ip_addresses) < 10',
},
],
},
},
],
},
};

View File

@@ -1,9 +1,8 @@
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/static-etcd.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') + {
_config+:: {
namespace: 'monitoring',
etcd+:: {
ips: ['127.0.0.1'],
clientCA: importstr 'etcd-client-ca.crt',

View File

@@ -1,12 +1,10 @@
local kp = (import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/static-etcd.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') + {
_config+:: {
namespace: 'monitoring',
// Reference info: https://github.com/coreos/kube-prometheus/blob/master/README.md#static-etcd-configuration
etcd+: {
etcd+:: {
// Configure this to be the IP(s) to scrape - i.e. your etcd node(s) (use commas to separate multiple values).
ips: ['127.0.0.1'],

View File

@@ -1 +1 @@
{"groups":[{"name":"example-group","rules":[{"alert":"ExampleAlert","annotations":{"description":"This is an example alert."},"expr":"vector(1)","labels":{"severity":"warning"}}]}]}
{"groups":[{"name":"example-group","rules":[{"alert":"Watchdog","annotations":{"description":"This is a Watchdog meant to ensure that the entire alerting pipeline is functional."},"expr":"vector(1)","labels":{"severity":"none"}}]}]}

View File

@@ -1,9 +1,9 @@
groups:
- name: example-group
rules:
- alert: ExampleAlert
- alert: Watchdog
expr: vector(1)
labels:
severity: "warning"
severity: "none"
annotations:
description: This is an example alert.
description: This is a Watchdog meant to ensure that the entire alerting pipeline is functional.

View File

@@ -5,37 +5,35 @@ local prometheus = grafana.prometheus;
local template = grafana.template;
local graphPanel = grafana.graphPanel;
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+:: {
namespace: 'monitoring',
},
grafana+: {
dashboards+:: {
'my-dashboard.json':
dashboard.new('My Dashboard')
.addTemplate(
{
current: {
text: 'Prometheus',
value: 'Prometheus',
},
hide: 0,
label: null,
name: 'datasource',
options: [],
query: 'prometheus',
refresh: 1,
regex: '',
type: 'datasource',
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
grafana+:: {
dashboards+:: {
'my-dashboard.json':
dashboard.new('My Dashboard')
.addTemplate(
{
current: {
text: 'Prometheus',
value: 'Prometheus',
},
)
.addRow(
row.new()
.addPanel(graphPanel.new('My Panel', span=6, datasource='$datasource')
.addTarget(prometheus.target('vector(1)')))
),
},
hide: 0,
label: null,
name: 'datasource',
options: [],
query: 'prometheus',
refresh: 1,
regex: '',
type: 'datasource',
},
)
.addRow(
row.new()
.addPanel(graphPanel.new('My Panel', span=6, datasource='$datasource')
.addTarget(prometheus.target('vector(1)')))
),
},
},
};

View File

@@ -1,12 +1,10 @@
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+:: {
namespace: 'monitoring',
},
grafana+: {
rawDashboards+:: {
'my-dashboard.json': (importstr 'example-grafana-dashboard.json'),
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
grafana+:: {
rawDashboards+:: {
'my-dashboard.json': (importstr 'example-grafana-dashboard.json'),
},
},
};

View File

@@ -1,12 +1,13 @@
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+:: {
namespace: 'monitoring',
},
grafana+: {
dashboards+:: { // use this method to import your dashboards to Grafana
'my-dashboard.json': (import 'example-grafana-dashboard.json'),
},
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
grafanaDashboards+:: { // monitoring-mixin compatibility
'my-dashboard.json': (import 'example-grafana-dashboard.json'),
},
grafana+:: {
dashboards+:: { // use this method to import your dashboards to Grafana
'my-dashboard.json': (import 'example-grafana-dashboard.json'),
},
},
};

View File

@@ -1,25 +1,15 @@
local ingress(name, namespace, rules) = {
apiVersion: 'networking.k8s.io/v1',
kind: 'Ingress',
metadata: {
name: name,
namespace: namespace,
annotations: {
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
},
},
spec: { rules: rules },
};
local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
local secret = k.core.v1.secret;
local ingress = k.extensions.v1beta1.ingress;
local ingressTls = ingress.mixin.spec.tlsType;
local ingressRule = ingress.mixin.spec.rulesType;
local httpIngressPath = ingressRule.mixin.http.pathsType;
local kp =
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/kube-prometheus.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
},
_config+:: {
namespace: 'monitoring',
grafana+:: {
config+: {
sections+: {
@@ -47,71 +37,67 @@ local kp =
},
// Create ingress objects per application
ingress+:: {
'alertmanager-main': ingress(
'alertmanager-main',
$.values.common.namespace,
[{
host: 'alertmanager.example.com',
http: {
paths: [{
backend: {
service: {
name: 'alertmanager-main',
port: 'web',
},
},
}],
},
}]
),
grafana: ingress(
'grafana',
$.values.common.namespace,
[{
host: 'grafana.example.com',
http: {
paths: [{
backend: {
service: {
name: 'grafana',
port: 'http',
},
},
}],
},
}],
),
'prometheus-k8s': ingress(
'prometheus-k8s',
$.values.common.namespace,
[{
host: 'prometheus.example.com',
http: {
paths: [{
backend: {
service: {
name: 'prometheus-k8s',
port: 'web',
},
},
}],
},
}],
),
'alertmanager-main':
ingress.new() +
ingress.mixin.metadata.withName('alertmanager-main') +
ingress.mixin.metadata.withNamespace($._config.namespace) +
ingress.mixin.metadata.withAnnotations({
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
}) +
ingress.mixin.spec.withRules(
ingressRule.new() +
ingressRule.withHost('alertmanager.example.com') +
ingressRule.mixin.http.withPaths(
httpIngressPath.new() +
httpIngressPath.mixin.backend.withServiceName('alertmanager-main') +
httpIngressPath.mixin.backend.withServicePort('web')
),
),
grafana:
ingress.new() +
ingress.mixin.metadata.withName('grafana') +
ingress.mixin.metadata.withNamespace($._config.namespace) +
ingress.mixin.metadata.withAnnotations({
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
}) +
ingress.mixin.spec.withRules(
ingressRule.new() +
ingressRule.withHost('grafana.example.com') +
ingressRule.mixin.http.withPaths(
httpIngressPath.new() +
httpIngressPath.mixin.backend.withServiceName('grafana') +
httpIngressPath.mixin.backend.withServicePort('http')
),
),
'prometheus-k8s':
ingress.new() +
ingress.mixin.metadata.withName('prometheus-k8s') +
ingress.mixin.metadata.withNamespace($._config.namespace) +
ingress.mixin.metadata.withAnnotations({
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Authentication Required',
}) +
ingress.mixin.spec.withRules(
ingressRule.new() +
ingressRule.withHost('prometheus.example.com') +
ingressRule.mixin.http.withPaths(
httpIngressPath.new() +
httpIngressPath.mixin.backend.withServiceName('prometheus-k8s') +
httpIngressPath.mixin.backend.withServicePort('web')
),
),
},
} + {
// Create basic auth secret - replace 'auth' file with your own
ingress+:: {
'basic-auth-secret': {
apiVersion: 'v1',
kind: 'Secret',
metadata: {
name: 'basic-auth',
namespace: $.values.common.namespace,
},
data: { auth: std.base64(importstr 'auth') },
type: 'Opaque',
},
'basic-auth-secret':
secret.new('basic-auth', { auth: std.base64(importstr 'auth') }) +
secret.mixin.metadata.withNamespace($._config.namespace),
},
};

View File

@@ -1,9 +1,7 @@
local mixin = import 'kube-prometheus/addons/config-mixins.libsonnet';
local kp = (import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
local mixin = import 'kube-prometheus/kube-prometheus-config-mixins.libsonnet';
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
} + mixin.withImageRepository('internal-registry.com/organization');

View File

@@ -0,0 +1,2 @@
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-bootkube.libsonnet')

View File

@@ -0,0 +1,3 @@
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kops.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kops-coredns.libsonnet')

View File

@@ -0,0 +1,2 @@
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kops.libsonnet')

View File

@@ -0,0 +1,2 @@
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kube-aws.libsonnet')

View File

@@ -0,0 +1,2 @@
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kubeadm.libsonnet')

View File

@@ -0,0 +1,2 @@
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kubespray.libsonnet')

View File

@@ -1,2 +1,2 @@
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/node-ports.libsonnet')
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-node-ports.libsonnet')

View File

@@ -1,8 +0,0 @@
(import 'kube-prometheus/main.libsonnet') +
{
values+:: {
kubePrometheus+: {
platform: 'example-platform',
},
},
}

View File

@@ -1,9 +1,9 @@
((import 'kube-prometheus/main.libsonnet') + {
local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
local daemonset = k.apps.v1beta2.daemonSet;
((import 'kube-prometheus/kube-prometheus.libsonnet') + {
nodeExporter+: {
daemonset+: {
metadata+: {
namespace: 'my-custom-namespace',
},
},
daemonset+:
daemonset.mixin.metadata.withNamespace('my-custom-namespace'),
},
}).nodeExporter.daemonset

View File

@@ -1,32 +1,26 @@
local kp =
(import 'kube-prometheus/main.libsonnet') + {
values+:: {
common+: {
namespace: 'monitoring',
},
(import 'kube-prometheus/kube-prometheus.libsonnet') + {
_config+:: {
namespace: 'monitoring',
},
};
local manifests =
// Uncomment line below to enable vertical auto scaling of kube-state-metrics
//{ ['ksm-autoscaler-' + name]: kp.ksmAutoscaler[name] for name in std.objectFields(kp.ksmAutoscaler) } +
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) };
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) };
local kustomizationResourceFile(name) = './manifests/' + name + '.yaml';
local kustomization = {

View File

@@ -1,16 +1,15 @@
local kp =
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kubeadm.libsonnet') +
// Note that NodePort type services are likely not a good idea for your production use case; they are only used for demonstration purposes here.
(import 'kube-prometheus/addons/node-ports.libsonnet') +
(import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
},
alertmanager+: {
_config+:: {
namespace: 'monitoring',
alertmanager+:: {
config: importstr 'alertmanager-config.yaml',
},
grafana+: {
grafana+:: {
config: { // http://docs.grafana.org/installation/configuration/
sections: {
// Do not require grafana users to login/authenticate
@@ -18,15 +17,12 @@ local kp =
},
},
},
kubePrometheus+: {
platform: 'kubeadm',
},
},
// For simplicity, each of the following values for 'externalUrl':
// * assume that `minikube ip` prints "192.168.99.100"
// * hard-code the NodePort for each app
prometheus+: {
prometheus+:: {
prometheus+: {
// Reference info: https://coreos.com/operators/prometheus/docs/latest/api.html#prometheusspec
spec+: {
@@ -42,7 +38,7 @@ local kp =
},
},
},
alertmanager+: {
alertmanager+:: {
alertmanager+: {
// Reference info: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerspec
spec+: {

View File

@@ -1,23 +0,0 @@
-local kp =
-  (import 'kube-prometheus/main.libsonnet') +
-  (import 'kube-prometheus/addons/podsecuritypolicies.libsonnet');
-
-{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
-// Add the restricted psp to setup
-{ 'setup/0podsecuritypolicy-restricted': kp.restrictedPodSecurityPolicy } +
-{
-  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
-  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
-} +
-// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
-{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
-{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
-{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
-{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
-{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
-{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
-{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
-{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
-{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
-{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }

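The deleted example above relies on everything under setup/ being applied before the rest, so the restricted PodSecurityPolicy already exists by the time the workloads are admitted. A sketch of the ordering-sensitive part, assuming the addon field names shown in the diff:

local kp =
  (import 'kube-prometheus/main.libsonnet') +
  (import 'kube-prometheus/addons/podsecuritypolicies.libsonnet');

// Objects prefixed with 'setup/' (namespace, CRDs, PSPs) are written into a
// separate directory so they can be applied first, before the components.
{ 'setup/0podsecuritypolicy-restricted': kp.restrictedPodSecurityPolicy }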

@@ -1,37 +1,25 @@
-local kp = (import 'kube-prometheus/main.libsonnet') + {
-  values+:: {
-    common+: {
-      namespace: 'monitoring',
-    },
+local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
+  _config+:: {
+    namespace: 'monitoring',
   },
-  exampleApplication: {
-    prometheusRuleExample: {
-      apiVersion: 'monitoring.coreos.com/v1',
-      kind: 'PrometheusRule',
-      metadata: {
-        name: 'my-prometheus-rule',
-        namespace: $.values.common.namespace,
-      },
-      spec: {
-        groups: [
+  prometheusAlerts+:: {
+    groups+: [
       {
-            name: 'example-group',
-            rules: [
-              {
+        name: 'example-group',
+        rules: [
+          {
-                alert: 'ExampleAlert',
-                expr: 'vector(1)',
-                labels: {
-                  severity: 'warning',
-                },
-                annotations: {
-                  description: 'This is an example alert.',
-                },
-              },
-            ],
+            alert: 'Watchdog',
+            expr: 'vector(1)',
+            labels: {
+              severity: 'none',
+            },
+            annotations: {
+              description: 'This is a Watchdog meant to ensure that the entire alerting pipeline is functional.',
+            },
+          },
+        ],
-          },
       },
     ],
   },
 };
@@ -42,5 +30,4 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
 { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
 { ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
 { ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
-{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

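On the release-0.6 side the added group is merged into the library's generated rule set via prometheusAlerts+::, while the newer branch emits it as its own PrometheusRule object. A sketch of roughly what either variant renders to (the names follow the diff; the exact metadata of the generated object is an assumption):

{
  apiVersion: 'monitoring.coreos.com/v1',
  kind: 'PrometheusRule',
  metadata: { name: 'my-prometheus-rule', namespace: 'monitoring' },
  spec: {
    groups: [{
      name: 'example-group',
      rules: [{
        alert: 'ExampleAlert',
        expr: 'vector(1)',
        labels: { severity: 'warning' },
        annotations: { description: 'This is an example alert.' },
      }],
    }],
  },
}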

@@ -1,31 +1,19 @@
-local kp = (import 'kube-prometheus/main.libsonnet') + {
-  values+:: {
-    common+: {
-      namespace: 'monitoring',
-    },
+local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
+  _config+:: {
+    namespace: 'monitoring',
   },
-  exampleApplication: {
-    prometheusRuleExample: {
-      apiVersion: 'monitoring.coreos.com/v1',
-      kind: 'PrometheusRule',
-      metadata: {
-        name: 'my-prometheus-rule',
-        namespace: $.values.common.namespace,
-      },
-      spec: {
-        groups: [
+  prometheusRules+:: {
+    groups+: [
       {
-            name: 'example-group',
-            rules: [
-              {
+        name: 'example-group',
+        rules: [
+          {
-                record: 'some_recording_rule_name',
-                expr: 'vector(1)',
-              },
-            ],
+            record: 'some_recording_rule_name',
+            expr: 'vector(1)',
+          },
+        ],
-          },
       },
     ],
   },
 };
@@ -36,5 +24,4 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
 { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
 { ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
 { ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
-{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

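Recording rules use record:/expr: instead of alert:/expr:; Prometheus evaluates the expression on every rule-group interval and stores the result as a new series under the record name. A minimal sketch of the group either branch produces:

{
  name: 'example-group',
  rules: [
    {
      record: 'some_recording_rule_name',
      expr: 'vector(1)',  // evaluated each interval; result saved under the record name
    },
  ],
}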

@@ -1,21 +1,9 @@
-local kp = (import 'kube-prometheus/main.libsonnet') + {
-  values+:: {
-    common+: {
-      namespace: 'monitoring',
-    },
+local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
+  _config+:: {
+    namespace: 'monitoring',
   },
-  exampleApplication: {
-    prometheusRuleExample: {
-      apiVersion: 'monitoring.coreos.com/v1',
-      kind: 'PrometheusRule',
-      metadata: {
-        name: 'my-prometheus-rule',
-        namespace: $.values.common.namespace,
-      },
-      spec: {
-        groups: (import 'existingrule.json').groups,
-      },
-    },
+  prometheusAlerts+:: {
+    groups+: (import 'existingrule.json').groups,
   },
 };
@@ -26,5 +14,4 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
 { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
 { ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
 { ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
-{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

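Both variants import their rule groups from a sibling existingrule.json. A minimal file that would satisfy either import might look like this (the contents are illustrative):

{
  "groups": [
    {
      "name": "example-group",
      "rules": [
        { "record": "some_recording_rule_name", "expr": "vector(1)" }
      ]
    }
  ]
}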

@@ -1,4 +1,4 @@
-((import 'kube-prometheus/main.libsonnet') + {
+((import 'kube-prometheus/kube-prometheus.libsonnet') + {
   prometheus+: {
     prometheus+: {
       metadata+: {

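This fragment patches the metadata of the Prometheus custom resource itself. A self-contained sketch of the same shape, with an assumed name override as the payload since the diff is truncated at this point:

((import 'kube-prometheus/kube-prometheus.libsonnet') + {
  prometheus+: {
    prometheus+: {
      metadata+: {
        name: 'my-prometheus',  // hypothetical override; not shown in the truncated diff
      },
    },
  },
}).prometheus.prometheus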

@@ -1,15 +1,21 @@
+// Reference info: documentation for https://github.com/ksonnet/ksonnet-lib can be found at http://g.bryan.dev.hepti.center
+//
+local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';  // https://github.com/ksonnet/ksonnet-lib/blob/master/ksonnet.beta.3/k.libsonnet - imports k8s.libsonnet
+// * https://github.com/ksonnet/ksonnet-lib/blob/master/ksonnet.beta.3/k8s.libsonnet defines things such as "persistentVolumeClaim:: {"
+//
+local pvc = k.core.v1.persistentVolumeClaim;  // https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#persistentvolumeclaim-v1-core (defines variable named 'spec' of type 'PersistentVolumeClaimSpec')
 local kp =
-  (import 'kube-prometheus/main.libsonnet') +
+  (import 'kube-prometheus/kube-prometheus.libsonnet') +
   // Uncomment the following imports to enable its patches
-  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
-  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
-  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
-  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
+  // (import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet') +
+  // (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
+  // (import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
+  // (import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') +
+  // (import 'kube-prometheus/kube-prometheus-thanos-sidecar.libsonnet') +
   {
-    values+:: {
-      common+: {
-        namespace: 'monitoring',
-      },
+    _config+:: {
+      namespace: 'monitoring',
     },
     prometheus+:: {
@@ -26,22 +32,22 @@ local kp =
         // * PersistentVolumeClaim (and a corresponding PersistentVolume)
         // * the actual volume (per the StorageClassName specified below)
         storage: {  // https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#storagespec
-          volumeClaimTemplate: {  // https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#persistentvolumeclaim-v1-core (defines variable named 'spec' of type 'PersistentVolumeClaimSpec')
-            apiVersion: 'v1',
-            kind: 'PersistentVolumeClaim',
-            spec: {
-              accessModes: ['ReadWriteOnce'],
-              // https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#resourcerequirements-v1-core (defines 'requests'),
-              // and https://kubernetes.io/docs/concepts/policy/resource-quotas/#storage-resource-quota (defines 'requests.storage')
-              resources: { requests: { storage: '100Gi' } },
-              // A StorageClass of the following name (which can be seen via `kubectl get storageclass` from a node in the given K8s cluster) must exist prior to kube-prometheus being deployed.
-              storageClassName: 'ssd',
-              // The following 'selector' is only needed if you're using manual storage provisioning (https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md#manual-storage-provisioning).
-              // And note that this is not supported/allowed by AWS - uncommenting the following 'selector' line (when deploying kube-prometheus to a K8s cluster in AWS) will cause the pvc to be stuck in the Pending status and have the following error:
-              // * 'Failed to provision volume with StorageClass "ssd": claim.Spec.Selector is not supported for dynamic provisioning on AWS'
-              // selector: { matchLabels: {} },
-            },
-          },
+          volumeClaimTemplate:  // (same link as above where the 'pvc' variable is defined)
+            pvc.new() +  // http://g.bryan.dev.hepti.center/core/v1/persistentVolumeClaim/#core.v1.persistentVolumeClaim.new
+            pvc.mixin.spec.withAccessModes('ReadWriteOnce') +
+            // https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#resourcerequirements-v1-core (defines 'requests'),
+            // and https://kubernetes.io/docs/concepts/policy/resource-quotas/#storage-resource-quota (defines 'requests.storage')
+            pvc.mixin.spec.resources.withRequests({ storage: '100Gi' }) +
+            // A StorageClass of the following name (which can be seen via `kubectl get storageclass` from a node in the given K8s cluster) must exist prior to kube-prometheus being deployed.
+            pvc.mixin.spec.withStorageClassName('ssd'),
+            // The following 'selector' is only needed if you're using manual storage provisioning (https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md#manual-storage-provisioning).
+            // And note that this is not supported/allowed by AWS - uncommenting the following 'selector' line (when deploying kube-prometheus to a K8s cluster in AWS) will cause the pvc to be stuck in the Pending status and have the following error:
+            // * 'Failed to provision volume with StorageClass "ssd": claim.Spec.Selector is not supported for dynamic provisioning on AWS'
+            //pvc.mixin.spec.selector.withMatchLabels({}),
         },  // storage
       },  // spec
     },  // prometheus

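The release-0.6 side of this hunk needs ksonnet-lib builders to assemble a PersistentVolumeClaim; the newer side writes the object literally. If the literal form gets repetitive, a small hypothetical helper keeps each use to one line:

local pvcTemplate(size, className) = {
  apiVersion: 'v1',
  kind: 'PersistentVolumeClaim',
  spec: {
    accessModes: ['ReadWriteOnce'],
    resources: { requests: { storage: size } },
    storageClassName: className,
  },
};

{
  storage: { volumeClaimTemplate: pvcTemplate('100Gi', 'ssd') },
}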

@@ -1,33 +0,0 @@
-local kp =
-  (import 'kube-prometheus/main.libsonnet') +
-  {
-    values+:: {
-      common+: {
-        namespace: 'monitoring',
-      },
-      prometheus+: {
-        thanos: {
-          version: '0.19.0',
-          image: 'quay.io/thanos/thanos:v0.19.0',
-          objectStorageConfig: {
-            key: 'thanos.yaml',  // How the file inside the secret is called
-            name: 'thanos-objectstorage',  // This is the name of your Kubernetes secret with the config
-          },
-        },
-      },
-    },
-  };
-
-{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
-{
-  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
-  for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
-} +
-// serviceMonitor is separated so that it can be created after the CRDs are ready
-{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
-{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
-{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
-{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
-{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
-{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }

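The deleted Thanos example points the sidecar at a Secret named thanos-objectstorage whose thanos.yaml key holds the object storage configuration. A sketch of such a Secret, assuming S3-style storage; bucket, endpoint and credentials are placeholders:

{
  apiVersion: 'v1',
  kind: 'Secret',
  metadata: { name: 'thanos-objectstorage', namespace: 'monitoring' },
  stringData: {
    'thanos.yaml': |||
      type: S3
      config:
        bucket: example-bucket
        endpoint: s3.example.com
        access_key: PLACEHOLDER
        secret_key: PLACEHOLDER
    |||,
  },
}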

@@ -1,20 +1,38 @@
+local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
+local statefulSet = k.apps.v1beta2.statefulSet;
+local toleration = statefulSet.mixin.spec.template.spec.tolerationsType;
 {
+  _config+:: {
+    tolerations+:: [
+      {
+        key: 'key1',
+        operator: 'Equal',
+        value: 'value1',
+        effect: 'NoSchedule',
+      },
+      {
+        key: 'key2',
+        operator: 'Exists',
+      },
+    ]
+  },
+  local withTolerations() = {
+    tolerations: [
+      toleration.new() + (
+        if std.objectHas(t, 'key') then toleration.withKey(t.key) else toleration) + (
+        if std.objectHas(t, 'operator') then toleration.withOperator(t.operator) else toleration) + (
+        if std.objectHas(t, 'value') then toleration.withValue(t.value) else toleration) + (
+        if std.objectHas(t, 'effect') then toleration.withEffect(t.effect) else toleration),
+      for t in $._config.tolerations
+    ],
+  },
   prometheus+: {
     prometheus+: {
-      spec+: {
-        tolerations: [
-          {
-            key: 'key1',
-            operator: 'Equal',
-            value: 'value1',
-            effect: 'NoSchedule',
-          },
-          {
-            key: 'key2',
-            operator: 'Exists',
-          },
-        ],
-      },
+      spec+:
+        withTolerations(),
     },
   },
-}
+}

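Each entry in $._config.tolerations is already shaped like a Kubernetes toleration object, so without ksonnet-lib the whole withTolerations() dance collapses to passing the list straight through. A sketch:

{
  _config+:: {
    tolerations+:: [
      { key: 'key1', operator: 'Equal', value: 'value1', effect: 'NoSchedule' },
      { key: 'key2', operator: 'Exists' },
    ],
  },
  prometheus+: {
    prometheus+: {
      spec+: { tolerations: $._config.tolerations },
    },
  },
}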

@@ -1,39 +1,33 @@
-local kp = (import 'kube-prometheus/main.libsonnet') +
-  (import 'kube-prometheus/addons/weave-net/weave-net.libsonnet') + {
-    values+:: {
-      common+: {
-        namespace: 'monitoring',
-      },
+local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
+  (import 'kube-prometheus/kube-prometheus-weave-net.libsonnet') + {
+    _config+:: {
+      namespace: 'monitoring',
     },
-    kubernetesControlPlane+: {
-      prometheusRuleWeaveNet+: {
-        spec+: {
-          groups: std.map(
-            function(group)
-              if group.name == 'weave-net' then
-                group {
-                  rules: std.map(
-                    function(rule)
-                      if rule.alert == 'WeaveNetFastDPFlowsLow' then
-                        rule {
-                          expr: 'sum(weave_flows) < 20000',
-                        }
-                      else if rule.alert == 'WeaveNetIPAMUnreachable' then
-                        rule {
-                          expr: 'weave_ipam_unreachable_percentage > 25',
-                        }
-                      else
-                        rule
-                    ,
-                    group.rules
-                  ),
-                }
-              else
-                group,
-            super.groups
-          ),
-        },
-      },
+    prometheusAlerts+:: {
+      groups: std.map(
+        function(group)
+          if group.name == 'weave-net' then
+            group {
+              rules: std.map(
+                function(rule)
+                  if rule.alert == 'WeaveNetFastDPFlowsLow' then
+                    rule {
+                      expr: 'sum(weave_flows) < 20000',
+                    }
+                  else if rule.alert == 'WeaveNetIPAMUnreachable' then
+                    rule {
+                      expr: 'weave_ipam_unreachable_percentage > 25',
+                    }
+                  else
+                    rule
+                ,
+                group.rules
+              ),
+            }
+          else
+            group,
+        super.groups
+      ),
     },
   };

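The nested std.map calls above are the standard way to patch one rule inside one group without copying the rest. Factored into a hypothetical helper, the same idea reads:

local overrideExpr(groups, groupName, alertName, newExpr) =
  std.map(
    function(group)
      if group.name == groupName then
        group {
          rules: std.map(
            function(rule)
              if std.objectHas(rule, 'alert') && rule.alert == alertName then
                rule { expr: newExpr }
              else
                rule,
            group.rules
          ),
        }
      else
        group,
    groups
  );

// usage sketch:
// groups: overrideExpr(super.groups, 'weave-net', 'WeaveNetFastDPFlowsLow', 'sum(weave_flows) < 20000')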

@@ -1,33 +0,0 @@
-local kp =
-  (import 'kube-prometheus/main.libsonnet') +
-  (import 'kube-prometheus/addons/windows.libsonnet') +
-  {
-    values+:: {
-      common+: {
-        namespace: 'monitoring',
-      },
-      windowsScrapeConfig+:: {
-        static_configs: [{
-          targets: ['10.240.0.65:5000', '10.240.0.63:5000'],
-        }],
-      },
-    },
-  };
-
-{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
-{
-  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
-  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
-} +
-// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
-{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
-{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
-{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
-{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
-{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
-{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
-{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
-{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
-{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
-{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }

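The static_configs block in the deleted example follows the standard Prometheus scrape_config schema, so the listed addresses end up as scrape targets for the Windows exporters. For reference, a complete scrape config carrying the same targets (the job name is hypothetical):

{
  job_name: 'windows-exporter',
  static_configs: [
    { targets: ['10.240.0.65:5000', '10.240.0.63:5000'] },
  ],
}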
go.mod

@@ -1,11 +1,31 @@
-module github.com/prometheus-operator/kube-prometheus
+module github.com/coreos/kube-prometheus
-go 1.15
+go 1.13
 require (
-	github.com/Jeffail/gabs v1.4.0
-	github.com/pkg/errors v0.9.1
-	github.com/prometheus/client_golang v1.8.0
-	k8s.io/apimachinery v0.19.3
-	k8s.io/client-go v0.19.3
+	github.com/Jeffail/gabs v1.2.0
+	github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d // indirect
+	github.com/brancz/gojsontoyaml v0.0.0-20191212081931-bf2969bbd742
+	github.com/campoy/embedmd v1.0.0
+	github.com/google/go-jsonnet v0.16.1-0.20200703153429-aaf50f5b655f
+	github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d // indirect
+	github.com/imdario/mergo v0.3.7 // indirect
+	github.com/jsonnet-bundler/jsonnet-bundler v0.4.0
+	github.com/kr/pretty v0.2.0 // indirect
+	github.com/mattn/go-colorable v0.1.7 // indirect
+	github.com/pkg/errors v0.8.1
+	github.com/prometheus/client_golang v1.5.1
+	github.com/spf13/pflag v1.0.3 // indirect
+	golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a // indirect
+	golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a // indirect
+	golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae // indirect
+	golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db // indirect
+	golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 // indirect
+	gopkg.in/inf.v0 v0.9.1 // indirect
+	k8s.io/api v0.0.0-20190313235455-40a48860b5ab // indirect
+	k8s.io/apimachinery v0.0.0-20190313205120-d7deff9243b1
+	k8s.io/client-go v11.0.0+incompatible
+	k8s.io/klog v0.0.0-20190306015804-8e90cee79f82 // indirect
+	k8s.io/utils v0.0.0-20190308190857-21c4ce38f2a7 // indirect
+	sigs.k8s.io/yaml v1.1.0 // indirect
 )

go.sum

@@ -1,39 +1,6 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gcw=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
github.com/Azure/go-autorest/autorest v0.9.6/go.mod h1:/FALq9T/kS7b5J5qsQ+RSTUdAmGFqi0vUdVNNx8q630=
github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q=
github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g=
github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM=
github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/Jeffail/gabs v1.4.0 h1://5fYRRTq1edjfIrQGvdkcd22pkYUrHZ5YC/H2GJVAo=
github.com/Jeffail/gabs v1.4.0/go.mod h1:6xMvQMK4k33lb7GUUpaAPh6nKMmemQeg5d4gn7/bOXc=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/Jeffail/gabs v1.2.0 h1:uFhoIVTtsX7hV2RxNgWad8gMU+8OJdzFbOathJdhD3o=
github.com/Jeffail/gabs v1.2.0/go.mod h1:6xMvQMK4k33lb7GUUpaAPh6nKMmemQeg5d4gn7/bOXc=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -41,183 +8,52 @@ github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRF
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d h1:UQZhZ2O0vMHr2cI+DC1Mbh0TJxzA3RcLoMsFw+aXw7E=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/brancz/gojsontoyaml v0.0.0-20191212081931-bf2969bbd742 h1:PdvQdwUXiFnSmWsOJcBXLpyH3mJfP2FMPTT3J0i7+8o=
github.com/brancz/gojsontoyaml v0.0.0-20191212081931-bf2969bbd742/go.mod h1:IyUJYN1gvWjtLF5ZuygmxbnsAyP3aJS6cHzIuZY50B0=
github.com/campoy/embedmd v1.0.0 h1:V4kI2qTJJLf4J29RzI/MAt2c3Bl4dQSYPuflzwFH2hY=
github.com/campoy/embedmd v1.0.0/go.mod h1:oxyr9RCiSXg0M3VJ3ks0UGfp98BpSSGr0kpiX3MzVl8=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0 h1:QvGt2nLcHH0WK9orKa+ppBPAxREcH364nPUedEpK0TY=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-jsonnet v0.16.1-0.20200703153429-aaf50f5b655f h1:mw4KoMG5/DXLPhpKXQRYTEIZFkFo0a1HU2R1HbeYpek=
github.com/google/go-jsonnet v0.16.1-0.20200703153429-aaf50f5b655f/go.mod h1:sOcuej3UW1vpPTZOr8L7RQimqai1a57bt5j22LzGZCw=
github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.4.1 h1:DLJCy1n/vrD4HPjOvYcT8aYQXpPIzoRZONaYwyycI+I=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d h1:7XGaL1e6bYS1yIonGp9761ExpPPV1ui0SAC59Yube9k=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/imdario/mergo v0.3.7 h1:Y+UAYTZ7gDEuOfhxKWy+dvb5dRQ6rJjFSdX2HZY1/gI=
github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/json-iterator/go v1.1.9 h1:9yzud/Ht36ygwatGx56VwCZtlI/2AD15T1X2sjSuGns=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jsonnet-bundler/jsonnet-bundler v0.4.0 h1:4BKZ6LDqPc2wJDmaKnmYD/vDjUptJtnUpai802MibFc=
github.com/jsonnet-bundler/jsonnet-bundler v0.4.0/go.mod h1:/by7P/OoohkI3q4CgSFqcoFsVY+IaNbzOVDknEsKDeU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
@@ -225,348 +61,106 @@ github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfn
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mattn/go-colorable v0.0.9 h1:UVL0vNpWh04HeJXV0KLcaT7r06gOH2l4OW6ddYRUIY4=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.7 h1:bQGKb3vps/j0E9GfJQ03JyhRuxsvdAanXlT9BTw3mdw=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.6 h1:SrwhHcpV4nWrMGdNcC2kXpMfcBVYGDuTArqyhocJgvA=
github.com/mattn/go-isatty v0.0.6/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.11 h1:FxPOTFNqGkuDUGi3H/qkUbQO4ZiBa2brKq5r0l8TGeM=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU=
github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k=
github.com/nats-io/nats.go v1.9.1/go.mod h1:ZjDU1L/7fJ09jvUSRVBR2e7+RnLiiIQyqyzEE/Zbp4w=
github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opentracing-contrib/go-observer v0.0.0-20170622124052-a52f23424492/go.mod h1:Ngi6UdF0k5OKD5t5wlmGhe/EDKPoUM3BXZSSfIuJbis=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin-contrib/zipkin-go-opentracing v0.4.5/go.mod h1:/wsWhb9smxSfWAKL3wpBW7V8scJMt8N8gnaMCS9E/cA=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
github.com/openzipkin/zipkin-go v0.2.1/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
github.com/openzipkin/zipkin-go v0.2.2/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIwwtUjcrb0b5/5kLM=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.8.0 h1:zvJNkoCFAnYFNC24FV8nW4JdRJ3GIFcLbg65lL/JDcw=
github.com/prometheus/client_golang v1.8.0/go.mod h1:O9VU6huf47PktckDQfMTX0Y8tY0/7TSWwj+ITvv0TnM=
github.com/prometheus/client_golang v1.5.1 h1:bdHYieyGlH+6OLEk2YQha8THib30KP0/yD0YH9m6xcA=
github.com/prometheus/client_golang v1.5.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.14.0 h1:RHRyE8UocrbjU+6UvRzwi6HjiDfxrrBU91TtbKzkGp4=
github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.9.1 h1:KOMtN28tlbam3/7ZKEYKHhKoJZYYj3gMH4uc62x7X7U=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a h1:Igim7XhdOpBnWPuYJ70XcNpq8q3BCACtVgNfoJxOV7g=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6 h1:pE8b58s1HRDMi8RDc79m0HISf9D4TzseP40cEA6IGfs=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a h1:tImsplftrFpALCYumobsd0K86vlAs/eXGFms2txfJfA=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190310054646-10058d7d4faa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e h1:nFYrTHrdrAOpShe27kaFHjsqYSEQ0KWqdWLu3xuZJts=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037 h1:YyJpGZS1sBuBCzLAR1VEpK193GlqGZbnPFnPV/5Rsb4=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82 h1:ywK/j/KkyTHcdyYSZNXGjMwgmDSfjglYZ3vStQ/gSCU=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae h1:Ih9Yo4hSPImZOpfGuA4bR/ORKTAbhZo2AbWNRCnevdo=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211 h1:9UQO31fZ+0aKQOFldThf7BKPMJTiBfWycGh/u3UoO88=
golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db h1:6/JqlYfC1CCaLnGceQTI+sDGhC9UBSPAsBqI0Gun6kU=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 h1:/5xXl8Y5W96D+TtHSlonuFqGHIWVuyCkGJLwGh9JJFs=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0 h1:UhZDfRO8JRQru4/+LlLE0BRKGF8L+PICnvYZmx/fEGA=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.1.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -574,33 +168,15 @@ gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
k8s.io/api v0.19.3 h1:GN6ntFnv44Vptj/b+OnMW7FmzkpDoIDLZRvKX3XH9aU=
k8s.io/api v0.19.3/go.mod h1:VF+5FT1B74Pw3KxMdKyinLo+zynBaMBiAfGMuldcNDs=
k8s.io/apimachinery v0.19.3 h1:bpIQXlKjB4cB/oNpnNnV+BybGPR7iP5oYpsOTEJ4hgc=
k8s.io/apimachinery v0.19.3/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
k8s.io/client-go v0.19.3 h1:ctqR1nQ52NUs6LpI0w+a5U+xjYwflFwA13OJKcicMxg=
k8s.io/client-go v0.19.3/go.mod h1:+eEMktZM+MG0KO+PTkci8xnbCZHvj9TqR6Q1XDUIJOM=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0 h1:XRvcwJozkgZ1UQJmfMGpvRthQHOvihEhYtDfAaxMz/A=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o=
k8s.io/utils v0.0.0-20200729134348-d5654de09c73 h1:uJmqzgNWG7XyClnU/mLPBWwfKKF1K8Hf8whTseBgJcg=
k8s.io/utils v0.0.0-20200729134348-d5654de09c73/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
sigs.k8s.io/structured-merge-diff/v4 v4.0.1 h1:YXTMot5Qz/X1iBRJhAt+vI+HVttY0WkSqqhKxQ0xVbA=
sigs.k8s.io/structured-merge-diff/v4 v4.0.1/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
k8s.io/api v0.0.0-20190313235455-40a48860b5ab h1:DG9A67baNpoeweOy2spF1OWHhnVY5KR7/Ek/+U1lVZc=
k8s.io/api v0.0.0-20190313235455-40a48860b5ab/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/apimachinery v0.0.0-20190313205120-d7deff9243b1 h1:IS7K02iBkQXpCeieSiyJjGoLSdVOv2DbPaWHJ+ZtgKg=
k8s.io/apimachinery v0.0.0-20190313205120-d7deff9243b1/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/client-go v11.0.0+incompatible h1:LBbX2+lOwY9flffWlJM7f1Ct8V2SRNiMRDFeiwnJo9o=
k8s.io/client-go v11.0.0+incompatible/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=
k8s.io/klog v0.0.0-20190306015804-8e90cee79f82 h1:SHucoAy7lRb+w5oC/hbXyZg+zX+Wftn6hD4tGzHCVqA=
k8s.io/klog v0.0.0-20190306015804-8e90cee79f82/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/utils v0.0.0-20190308190857-21c4ce38f2a7 h1:8r+l4bNWjRlsFYlQJnKJ2p7s1YQPj4XyXiJVqDHRx7c=
k8s.io/utils v0.0.0-20190308190857-21c4ce38f2a7/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=


@@ -1,11 +0,0 @@
{
prometheus+:: {
clusterRole+: {
rules+: [{
apiGroups: [''],
resources: ['services', 'endpoints', 'pods'],
verbs: ['get', 'list', 'watch'],
}],
},
},
}
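
Overlays like the one above are merged into the kube-prometheus object with jsonnet's `+` operator and only take effect once manifests are rendered. A minimal sketch of a build file, assuming the overlay is vendored as 'kube-prometheus/kube-prometheus-additional-rbac.libsonnet' (the import paths and filename are illustrative, not taken from this diff):

local kp =
  (import 'kube-prometheus/kube-prometheus.libsonnet') +
  // Hypothetical vendored path for the RBAC overlay shown above.
  (import 'kube-prometheus/kube-prometheus-additional-rbac.libsonnet') +
  { _config+:: { namespace: 'monitoring' } };

// Render all Prometheus manifests; clusterRole now carries the extra rules.
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) }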


@@ -1,99 +0,0 @@
{
values+:: {
alertmanager+: {
podAntiAffinity: 'soft',
podAntiAffinityTopologyKey: 'kubernetes.io/hostname',
},
prometheus+: {
podAntiAffinity: 'soft',
podAntiAffinityTopologyKey: 'kubernetes.io/hostname',
},
blackboxExporter+: {
podAntiAffinity: 'soft',
podAntiAffinityTopologyKey: 'kubernetes.io/hostname',
},
prometheusAdapter+: {
podAntiAffinity: 'soft',
podAntiAffinityTopologyKey: 'kubernetes.io/hostname',
},
},
local antiaffinity(labelSelector, namespace, type, topologyKey) = {
local podAffinityTerm = {
namespaces: [namespace],
topologyKey: topologyKey,
labelSelector: {
matchLabels: labelSelector,
},
},
affinity: {
podAntiAffinity: if type == 'soft' then {
preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 100,
podAffinityTerm: podAffinityTerm,
}],
} else if type == 'hard' then {
requiredDuringSchedulingIgnoredDuringExecution: [
podAffinityTerm,
],
} else error 'podAntiAffinity must be either "soft" or "hard"',
},
},
alertmanager+: {
alertmanager+: {
spec+:
antiaffinity(
$.alertmanager._config.selectorLabels,
$.values.common.namespace,
$.values.alertmanager.podAntiAffinity,
$.values.alertmanager.podAntiAffinityTopologyKey,
),
},
},
prometheus+: {
prometheus+: {
spec+:
antiaffinity(
$.prometheus._config.selectorLabels,
$.values.common.namespace,
$.values.prometheus.podAntiAffinity,
$.values.prometheus.podAntiAffinityTopologyKey,
),
},
},
blackboxExporter+: {
deployment+: {
spec+: {
template+: {
spec+:
antiaffinity(
$.blackboxExporter._config.selectorLabels,
$.values.common.namespace,
$.values.blackboxExporter.podAntiAffinity,
$.values.blackboxExporter.podAntiAffinityTopologyKey,
),
},
},
},
},
prometheusAdapter+: {
deployment+: {
spec+: {
template+: {
spec+:
antiaffinity(
$.prometheusAdapter._config.selectorLabels,
$.values.common.namespace,
$.values.prometheusAdapter.podAntiAffinity,
$.values.prometheusAdapter.podAntiAffinityTopologyKey,
),
},
},
},
},
}
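
The antiaffinity() helper above only accepts 'soft' or 'hard' and raises an error otherwise. A hedged sketch of switching Prometheus to hard anti-affinity across zones; the import paths and the zone topology key are assumptions, not part of this diff:

(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/anti-affinity.libsonnet') +  // illustrative paths
{
  values+:: {
    prometheus+: {
      podAntiAffinity: 'hard',
      // Assumes nodes carry the standard zone label.
      podAntiAffinityTopologyKey: 'topology.kubernetes.io/zone',
    },
  },
}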


@@ -1,165 +0,0 @@
// Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
values+:: {
prometheusAdapter+: {
namespace: $.values.common.namespace,
// Rules for custom-metrics
config+:: {
rules+: [
{
seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}',
seriesFilters: [],
resources: {
overrides: {
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
name: { matches: '^container_(.*)_seconds_total$', as: '' },
metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[1m])) by (<<.GroupBy>>)',
},
{
seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}',
seriesFilters: [
{ isNot: '^container_.*_seconds_total$' },
],
resources: {
overrides: {
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
name: { matches: '^container_(.*)_total$', as: '' },
metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[1m])) by (<<.GroupBy>>)',
},
{
seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}',
seriesFilters: [
{ isNot: '^container_.*_total$' },
],
resources: {
overrides: {
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
name: { matches: '^container_(.*)$', as: '' },
metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>,container!="POD"}) by (<<.GroupBy>>)',
},
{
seriesQuery: '{namespace!="",__name__!~"^container_.*"}',
seriesFilters: [
{ isNot: '.*_total$' },
],
resources: { template: '<<.Resource>>' },
name: { matches: '', as: '' },
metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)',
},
{
seriesQuery: '{namespace!="",__name__!~"^container_.*"}',
seriesFilters: [
{ isNot: '.*_seconds_total' },
],
resources: { template: '<<.Resource>>' },
name: { matches: '^(.*)_total$', as: '' },
metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)',
},
{
seriesQuery: '{namespace!="",__name__!~"^container_.*"}',
seriesFilters: [],
resources: { template: '<<.Resource>>' },
name: { matches: '^(.*)_seconds_total$', as: '' },
metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)',
},
],
},
},
},
prometheusAdapter+: {
customMetricsApiService: {
apiVersion: 'apiregistration.k8s.io/v1',
kind: 'APIService',
metadata: {
name: 'v1beta1.custom.metrics.k8s.io',
},
spec: {
service: {
name: $.prometheusAdapter.service.metadata.name,
namespace: $.values.prometheusAdapter.namespace,
},
group: 'custom.metrics.k8s.io',
version: 'v1beta1',
insecureSkipTLSVerify: true,
groupPriorityMinimum: 100,
versionPriority: 100,
},
},
customMetricsApiServiceV1Beta2: {
apiVersion: 'apiregistration.k8s.io/v1',
kind: 'APIService',
metadata: {
name: 'v1beta2.custom.metrics.k8s.io',
},
spec: {
service: {
name: $.prometheusAdapter.service.metadata.name,
namespace: $.values.prometheusAdapter.namespace,
},
group: 'custom.metrics.k8s.io',
version: 'v1beta2',
insecureSkipTLSVerify: true,
groupPriorityMinimum: 100,
versionPriority: 200,
},
},
customMetricsClusterRoleServerResources: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: 'custom-metrics-server-resources',
},
rules: [{
apiGroups: ['custom.metrics.k8s.io'],
resources: ['*'],
verbs: ['*'],
}],
},
customMetricsClusterRoleBindingServerResources: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'custom-metrics-server-resources',
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'custom-metrics-server-resources',
},
subjects: [{
kind: 'ServiceAccount',
name: $.prometheusAdapter.serviceAccount.metadata.name,
namespace: $.values.prometheusAdapter.namespace,
}],
},
customMetricsClusterRoleBindingHPA: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'hpa-controller-custom-metrics',
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'custom-metrics-server-resources',
},
subjects: [{
kind: 'ServiceAccount',
name: 'horizontal-pod-autoscaler',
namespace: 'kube-system',
}],
},
},
}
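
With the adapter rules above, cAdvisor series are exposed as pod-scoped custom metrics; for example the third rule rewrites container_memory_working_set_bytes to memory_working_set_bytes. A sketch of an HPA consuming it, with the workload name and target value chosen purely for illustration:

{
  apiVersion: 'autoscaling/v2beta2',
  kind: 'HorizontalPodAutoscaler',
  metadata: { name: 'example-app', namespace: 'default' },
  spec: {
    scaleTargetRef: { apiVersion: 'apps/v1', kind: 'Deployment', name: 'example-app' },
    minReplicas: 1,
    maxReplicas: 10,
    metrics: [{
      type: 'Pods',
      pods: {
        // Rewritten from container_memory_working_set_bytes by the rules above.
        metric: { name: 'memory_working_set_bytes' },
        target: { type: 'AverageValue', averageValue: '200Mi' },
      },
    }],
  },
}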


@@ -1,95 +0,0 @@
// External metrics API allows the HPA v2 to scale based on metrics coming from outside of the Kubernetes cluster.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
values+:: {
prometheusAdapter+: {
namespace: $.values.common.namespace,
// Rules for external-metrics
config+:: {
externalRules+: [
// {
// seriesQuery: '{__name__=~"^.*_queue$",namespace!=""}',
// seriesFilters: [],
// resources: {
// overrides: {
// namespace: { resource: 'namespace' }
// },
// },
// name: { matches: '^.*_queue$', as: '$0' },
// metricsQuery: 'max(<<.Series>>{<<.LabelMatchers>>})',
// },
],
},
},
},
prometheusAdapter+: {
externalMetricsApiService: {
apiVersion: 'apiregistration.k8s.io/v1',
kind: 'APIService',
metadata: {
name: 'v1beta1.external.metrics.k8s.io',
},
spec: {
service: {
name: $.prometheusAdapter.service.metadata.name,
namespace: $.values.prometheusAdapter.namespace,
},
group: 'external.metrics.k8s.io',
version: 'v1beta1',
insecureSkipTLSVerify: true,
groupPriorityMinimum: 100,
versionPriority: 100,
},
},
externalMetricsClusterRoleServerResources: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: 'external-metrics-server-resources',
},
rules: [{
apiGroups: ['external.metrics.k8s.io'],
resources: ['*'],
verbs: ['*'],
}],
},
externalMetricsClusterRoleBindingServerResources: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'external-metrics-server-resources',
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'external-metrics-server-resources',
},
subjects: [{
kind: 'ServiceAccount',
name: $.prometheusAdapter.serviceAccount.metadata.name,
namespace: $.values.prometheusAdapter.namespace,
}],
},
externalMetricsClusterRoleBindingHPA: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'hpa-controller-external-metrics',
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'external-metrics-server-resources',
},
subjects: [{
kind: 'ServiceAccount',
name: 'horizontal-pod-autoscaler',
namespace: 'kube-system',
}],
},
},
}
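
The externalRules list above ships empty, with the commented entry as a template. A hedged sketch of filling it in for a hypothetical queue-depth metric (the metric name is made up for illustration):

{
  values+:: {
    prometheusAdapter+: {
      config+:: {
        externalRules+: [{
          // 'worker_queue_depth' is a placeholder metric name.
          seriesQuery: '{__name__="worker_queue_depth",namespace!=""}',
          seriesFilters: [],
          resources: { overrides: { namespace: { resource: 'namespace' } } },
          name: { matches: '^(.*)$', as: '$1' },
          metricsQuery: 'max(<<.Series>>{<<.LabelMatchers>>})',
        }],
      },
    },
  },
}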


@@ -1,40 +0,0 @@
{
prometheus+: {
serviceMonitorKubelet+:
{
spec+: {
endpoints: [
{
port: 'http-metrics',
scheme: 'http',
interval: '30s',
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
relabelings: [
{ sourceLabels: ['__metrics_path__'], targetLabel: 'metrics_path' },
],
},
{
port: 'http-metrics',
scheme: 'http',
path: '/metrics/cadvisor',
interval: '30s',
honorLabels: true,
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
relabelings: [
{ sourceLabels: ['__metrics_path__'], targetLabel: 'metrics_path' },
],
metricRelabelings: [
// Drop a bunch of metrics which are disabled but still sent, see
// https://github.com/google/cadvisor/issues/1925.
{
sourceLabels: ['__name__'],
regex: 'container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)',
action: 'drop',
},
],
},
],
},
},
},
}


@@ -1,136 +0,0 @@
{
values+:: {
clusterVerticalAutoscaler: {
version: '0.8.1',
image: 'gcr.io/google_containers/cpvpa-amd64:v0.8.1',
baseCPU: '1m',
stepCPU: '1m',
baseMemory: '1Mi',
stepMemory: '2Mi',
},
},
ksmAutoscaler+: {
clusterRole: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: { name: 'ksm-autoscaler' },
rules: [{
apiGroups: [''],
resources: ['nodes'],
verbs: ['list', 'watch'],
}],
},
clusterRoleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: { name: 'ksm-autoscaler' },
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'ksm-autoscaler',
},
subjects: [{ kind: 'ServiceAccount', name: 'ksm-autoscaler', namespace: $.values.common.namespace }],
},
roleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBinding',
metadata: {
name: 'ksm-autoscaler',
namespace: $.values.common.namespace,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'Role',
name: 'ksm-autoscaler',
},
subjects: [{ kind: 'ServiceAccount', name: 'ksm-autoscaler' }],
},
role: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'Role',
metadata: {
name: 'ksm-autoscaler',
namespace: $.values.common.namespace,
},
rules: [
{
apiGroups: ['extensions'],
resources: ['deployments'],
verbs: ['patch'],
resourceNames: ['kube-state-metrics'],
},
{
apiGroups: ['apps'],
resources: ['deployments'],
verbs: ['patch'],
resourceNames: ['kube-state-metrics'],
},
],
},
serviceAccount: {
apiVersion: 'v1',
kind: 'ServiceAccount',
metadata: {
name: 'ksm-autoscaler',
namespace: $.values.common.namespace,
},
},
deployment:
local podLabels = { app: 'ksm-autoscaler' };
local c = {
name: 'ksm-autoscaler',
image: $.values.clusterVerticalAutoscaler.image,
args: [
'/cpvpa',
'--target=deployment/kube-state-metrics',
'--namespace=' + $.values.common.namespace,
'--logtostderr=true',
'--poll-period-seconds=10',
'--default-config={"kube-state-metrics":{"requests":{"cpu":{"base":"' + $.values.clusterVerticalAutoscaler.baseCPU +
'","step":"' + $.values.clusterVerticalAutoscaler.stepCPU +
'","nodesPerStep":1},"memory":{"base":"' + $.values.clusterVerticalAutoscaler.baseMemory +
'","step":"' + $.values.clusterVerticalAutoscaler.stepMemory +
'","nodesPerStep":1}},"limits":{"cpu":{"base":"' + $.values.clusterVerticalAutoscaler.baseCPU +
'","step":"' + $.values.clusterVerticalAutoscaler.stepCPU +
'","nodesPerStep":1},"memory":{"base":"' + $.values.clusterVerticalAutoscaler.baseMemory +
'","step":"' + $.values.clusterVerticalAutoscaler.stepMemory + '","nodesPerStep":1}}}}',
],
resources: {
requests: { cpu: '20m', memory: '10Mi' },
},
};
{
apiVersion: 'apps/v1',
kind: 'Deployment',
metadata: {
name: 'ksm-autoscaler',
namespace: $.values.common.namespace,
labels: podLabels,
},
spec: {
replicas: 1,
selector: { matchLabels: podLabels },
template: {
metadata: {
labels: podLabels,
},
spec: {
containers: [c],
serviceAccount: 'ksm-autoscaler',
nodeSelector: { 'kubernetes.io/os': 'linux' },
securityContext: {
runAsNonRoot: true,
runAsUser: 65534,
},
},
},
},
},
},
}
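
With the defaults above, cpvpa sizes kube-state-metrics linearly: roughly base + step per node, so a 100-node cluster ends up requesting about 1m + 100×1m = 101m CPU and 1Mi + 100×2Mi = 201Mi memory. A sketch of tuning the memory step through the values block (the number is illustrative):

{
  values+:: {
    clusterVerticalAutoscaler+: {
      stepMemory: '4Mi',  // grow ~4Mi per node instead of the default 2Mi
    },
  },
}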


@@ -1,39 +0,0 @@
local addArgs(args, name, containers) = std.map(
function(c) if c.name == name then
c {
args+: args,
}
else c,
containers,
);
{
kubeStateMetrics+: {
deployment+: {
spec+: {
template+: {
spec+: {
containers: addArgs(
[|||
--metric-denylist=
kube_*_created,
kube_*_metadata_resource_version,
kube_replicaset_metadata_generation,
kube_replicaset_status_observed_generation,
kube_pod_restart_policy,
kube_pod_init_container_status_terminated,
kube_pod_init_container_status_running,
kube_pod_container_status_terminated,
kube_pod_container_status_running,
kube_pod_completion_time,
kube_pod_status_scheduled
|||],
'kube-state-metrics',
super.containers
),
},
},
},
},
},
}


@@ -1,20 +0,0 @@
// On managed Kubernetes clusters some of the control plane components are not exposed to customers.
// Disable scrape jobs, service monitors, and alert groups for these components by overwriting the 'main.libsonnet' defaults.
{
kubernetesControlPlane+: {
serviceMonitorKubeControllerManager:: null,
serviceMonitorKubeScheduler:: null,
} + {
prometheusRule+: {
spec+: {
local g = super.groups,
groups: [
h
for h in g
if !std.setMember(h.name, ['kubernetes-system-controller-manager', 'kubernetes-system-scheduler'])
],
},
},
},
}


@@ -1,18 +0,0 @@
local patch(ports) = {
spec+: {
ports: ports,
type: 'NodePort',
},
};
{
prometheus+: {
service+: patch([{ name: 'web', port: 9090, targetPort: 'web', nodePort: 30900 }]),
},
alertmanager+: {
service+: patch([{ name: 'web', port: 9093, targetPort: 'web', nodePort: 30903 }]),
},
grafana+: {
service+: patch([{ name: 'http', port: 3000, targetPort: 'http', nodePort: 30902 }]),
},
}
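
Once the patch above is applied, each UI is reachable on every node at its nodePort (30900, 30903, 30902). Moving one to a different port in the 30000-32767 NodePort range follows the same pattern; a sketch with an illustrative port number:

{
  prometheus+: {
    service+: {
      spec+: {
        ports: [{ name: 'web', port: 9090, targetPort: 'web', nodePort: 30990 }],
      },
    },
  },
}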


@@ -1,260 +0,0 @@
local restrictedPodSecurityPolicy = {
apiVersion: 'policy/v1beta1',
kind: 'PodSecurityPolicy',
metadata: {
name: 'kube-prometheus-restricted',
},
spec: {
privileged: false,
// Required to prevent escalations to root.
allowPrivilegeEscalation: false,
// This is redundant with non-root + disallow privilege escalation,
// but we can provide it for defense in depth.
requiredDropCapabilities: ['ALL'],
// Allow core volume types.
volumes: [
'configMap',
'emptyDir',
'secret',
// Assume that persistentVolumes set up by the cluster admin are safe to use.
'persistentVolumeClaim',
],
hostNetwork: false,
hostIPC: false,
hostPID: false,
runAsUser: {
// Require the container to run without root privileges.
rule: 'MustRunAsNonRoot',
},
seLinux: {
// This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny',
},
supplementalGroups: {
rule: 'MustRunAs',
ranges: [{
// Forbid adding the root group.
min: 1,
max: 65535,
}],
},
fsGroup: {
rule: 'MustRunAs',
ranges: [{
// Forbid adding the root group.
min: 1,
max: 65535,
}],
},
readOnlyRootFilesystem: false,
},
};
{
restrictedPodSecurityPolicy: restrictedPodSecurityPolicy,
alertmanager+: {
role: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'Role',
metadata: {
name: 'alertmanager-' + $.values.alertmanager.name,
namespace: $.values.common.namespace,
},
rules: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: [restrictedPodSecurityPolicy.metadata.name],
}],
},
roleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBinding',
metadata: {
name: 'alertmanager-' + $.values.alertmanager.name,
namespace: $.values.common.namespace,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'Role',
name: 'alertmanager-' + $.values.alertmanager.name,
},
subjects: [{
kind: 'ServiceAccount',
name: 'alertmanager-' + $.values.alertmanager.name,
namespace: $.values.alertmanager.namespace,
}],
},
},
blackboxExporter+: {
clusterRole+: {
rules+: [
{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: ['blackbox-exporter-psp'],
},
],
},
podSecurityPolicy:
local blackboxExporterPspPrivileged =
if $.blackboxExporter._config.privileged then
{
metadata+: {
name: 'blackbox-exporter-psp',
},
spec+: {
privileged: true,
allowedCapabilities: ['NET_RAW'],
runAsUser: {
rule: 'RunAsAny',
},
},
}
else
{};
restrictedPodSecurityPolicy + blackboxExporterPspPrivileged,
},
grafana+: {
role: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'Role',
metadata: {
name: 'grafana',
namespace: $.values.common.namespace,
},
rules: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: [restrictedPodSecurityPolicy.metadata.name],
}],
},
roleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBinding',
metadata: {
name: 'grafana',
namespace: $.values.common.namespace,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'Role',
name: 'grafana',
},
subjects: [{
kind: 'ServiceAccount',
name: $.grafana.serviceAccount.metadata.name,
namespace: $.grafana.serviceAccount.metadata.namespace,
}],
},
},
kubeStateMetrics+: {
clusterRole+: {
rules+: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: ['kube-state-metrics-psp'],
}],
},
podSecurityPolicy: restrictedPodSecurityPolicy {
metadata+: {
name: 'kube-state-metrics-psp',
},
spec+: {
runAsUser: {
rule: 'RunAsAny',
},
},
},
},
nodeExporter+: {
clusterRole+: {
rules+: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: ['node-exporter-psp'],
}],
},
podSecurityPolicy: restrictedPodSecurityPolicy {
metadata+: {
name: 'node-exporter-psp',
},
spec+: {
allowedHostPaths+: [
{
pathPrefix: '/proc',
readOnly: true,
},
{
pathPrefix: '/sys',
readOnly: true,
},
{
pathPrefix: '/',
readOnly: true,
},
],
hostNetwork: true,
hostPID: true,
hostPorts: [
{
max: $.nodeExporter._config.port,
min: $.nodeExporter._config.port,
},
],
readOnlyRootFilesystem: true,
volumes+: [
'hostPath',
],
},
},
},
prometheusAdapter+: {
clusterRole+: {
rules+: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: [restrictedPodSecurityPolicy.metadata.name],
}],
},
},
prometheusOperator+: {
clusterRole+: {
rules+: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: [restrictedPodSecurityPolicy.metadata.name],
}],
},
},
prometheus+: {
clusterRole+: {
rules+: [{
apiGroups: ['policy'],
resources: ['podsecuritypolicies'],
verbs: ['use'],
resourceNames: [restrictedPodSecurityPolicy.metadata.name],
}],
},
},
}


@@ -1,102 +0,0 @@
(import 'github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet') + {
values+:: {
etcd: {
ips: [],
clientCA: null,
clientKey: null,
clientCert: null,
serverName: null,
insecureSkipVerify: null,
},
},
prometheus+: {
serviceEtcd: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: 'etcd',
namespace: 'kube-system',
labels: { 'app.kubernetes.io/name': 'etcd' },
},
spec: {
ports: [
{ name: 'metrics', targetPort: 2379, port: 2379 },
],
clusterIP: 'None',
},
},
endpointsEtcd: {
apiVersion: 'v1',
kind: 'Endpoints',
metadata: {
name: 'etcd',
namespace: 'kube-system',
labels: { 'app.kubernetes.io/name': 'etcd' },
},
subsets: [{
addresses: [
{ ip: etcdIP }
for etcdIP in $.values.etcd.ips
],
ports: [
{ name: 'metrics', port: 2379, protocol: 'TCP' },
],
}],
},
serviceMonitorEtcd: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'etcd',
namespace: 'kube-system',
labels: {
'app.kubernetes.io/name': 'etcd',
},
},
spec: {
jobLabel: 'app.kubernetes.io/name',
endpoints: [
{
port: 'metrics',
interval: '30s',
scheme: 'https',
// Prometheus Operator (and Prometheus) allow us to specify a tlsConfig. This is required as your etcd metrics endpoints are most likely secured with TLS.
tlsConfig: {
caFile: '/etc/prometheus/secrets/kube-etcd-client-certs/etcd-client-ca.crt',
keyFile: '/etc/prometheus/secrets/kube-etcd-client-certs/etcd-client.key',
certFile: '/etc/prometheus/secrets/kube-etcd-client-certs/etcd-client.crt',
[if $.values.etcd.serverName != null then 'serverName']: $.values.etcd.serverName,
[if $.values.etcd.insecureSkipVerify != null then 'insecureSkipVerify']: $.values.etcd.insecureSkipVerify,
},
},
],
selector: {
matchLabels: {
'app.kubernetes.io/name': 'etcd',
},
},
},
},
secretEtcdCerts: {
// Prometheus Operator allows us to mount secrets in the pod. By loading the secrets as files, they can be made available inside the Prometheus pod.
apiVersion: 'v1',
kind: 'Secret',
type: 'Opaque',
metadata: {
name: 'kube-etcd-client-certs',
namespace: $.values.common.namespace,
},
data: {
'etcd-client-ca.crt': std.base64($.values.etcd.clientCA),
'etcd-client.key': std.base64($.values.etcd.clientKey),
'etcd-client.crt': std.base64($.values.etcd.clientCert),
},
},
prometheus+: {
// Reference info: https://coreos.com/operators/prometheus/docs/latest/api.html#prometheusspec
spec+: {
secrets+: [$.prometheus.secretEtcdCerts.metadata.name],
},
},
},
}
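
The secret above cannot be rendered until the certificate fields are set, since std.base64 is called on them. A hedged sketch of wiring up values.etcd; the IP, file names, and serverName are placeholders, and the import paths are illustrative:

(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/static-etcd.libsonnet') +  // illustrative paths
{
  values+:: {
    etcd+: {
      ips: ['192.168.1.10'],  // placeholder etcd member IP
      clientCA: importstr 'etcd-client-ca.crt',  // placeholder PEM files
      clientKey: importstr 'etcd-client.key',
      clientCert: importstr 'etcd-client.crt',
      serverName: 'etcd.kube-system.svc.cluster.local',
    },
  },
}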


@@ -1,48 +0,0 @@
// Strips spec.containers[].limits for certain containers
// https://github.com/prometheus-operator/kube-prometheus/issues/72
{
local noLimit(c) =
//if std.objectHas(c, 'resources') && c.name != 'kube-state-metrics'
if c.name != 'kube-state-metrics'
then c { resources+: { limits: {} } }
else c,
nodeExporter+: {
daemonset+: {
spec+: {
template+: {
spec+: {
containers: std.map(noLimit, super.containers),
},
},
},
},
},
kubeStateMetrics+: {
deployment+: {
spec+: {
template+: {
spec+: {
containers: std.map(noLimit, super.containers),
},
},
},
},
},
prometheusOperator+: {
deployment+: {
spec+: {
template+: {
spec+: {
local addArgs(c) =
if c.name == 'prometheus-operator'
then c { args+: ['--config-reloader-cpu=0'] }
else c,
containers: std.map(addArgs, super.containers),
},
},
},
},
},
}


@@ -1,134 +0,0 @@
[
{
alert: 'WeaveNetIPAMSplitBrain',
expr: 'max(weave_ipam_unreachable_percentage) - min(weave_ipam_unreachable_percentage) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'Percentage of all IP addresses owned by unreachable peers is not the same for every node.',
description: 'actionable: Weave Net network has a split brain problem. Please find the problem and fix it.',
},
},
{
alert: 'WeaveNetIPAMUnreachable',
expr: 'weave_ipam_unreachable_percentage > 25',
'for': '10m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'Percentage of all IP addresses owned by unreachable peers is above the threshold.',
description: 'actionable: Please find the problem and fix it.',
},
},
{
alert: 'WeaveNetIPAMPendingAllocates',
expr: 'sum(weave_ipam_pending_allocates) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'Number of pending allocates is above the threshold.',
description: 'actionable: Please find the problem and fix it.',
},
},
{
alert: 'WeaveNetIPAMPendingClaims',
expr: 'sum(weave_ipam_pending_claims) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'Number of pending claims is above the threshold.',
description: 'actionable: Please find the problem and fix it.',
},
},
{
alert: 'WeaveNetFastDPFlowsLow',
expr: 'sum(weave_flows) < 15000',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'Number of FastDP flows is below the threshold.',
description: 'actionable: Please find the reason for FastDP flows to go below the threshold and fix it.',
},
},
{
alert: 'WeaveNetFastDPFlowsOff',
expr: 'sum(weave_flows == bool 0) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'The number of FastDP flows is zero.',
description: 'actionable: Please find the reason for FastDP flows to be off and fix it.',
},
},
{
alert: 'WeaveNetHighConnectionTerminationRate',
expr: 'rate(weave_connection_terminations_total[5m]) > 0.1',
'for': '5m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'A lot of connections are getting terminated.',
description: 'actionable: Please find the reason for the high connection termination rate and fix it.',
},
},
{
alert: 'WeaveNetConnectionsConnecting',
expr: 'sum(weave_connections{state="connecting"}) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'A lot of connections are in connecting state.',
description: 'actionable: Please find the reason for this and fix it.',
},
},
{
alert: 'WeaveNetConnectionsRetrying',
expr: 'sum(weave_connections{state="retrying"}) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'A lot of connections are in retrying state.',
description: 'actionable: Please find the reason for this and fix it.',
},
},
{
alert: 'WeaveNetConnectionsPending',
expr: 'sum(weave_connections{state="pending"}) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'A lot of connections are in pending state.',
description: 'actionable: Please find the reason for this and fix it.',
},
},
{
alert: 'WeaveNetConnectionsFailed',
expr: 'sum(weave_connections{state="failed"}) > 0',
'for': '3m',
labels: {
severity: 'critical',
},
annotations: {
summary: 'A lot of connections are in failed state.',
description: 'actionable: Please find the reason and fix it.',
},
},
]


@@ -1,73 +0,0 @@
{
prometheus+: {
local p = self,
serviceWeaveNet: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: 'weave-net',
namespace: 'kube-system',
labels: { 'app.kubernetes.io/name': 'weave-net' },
},
spec: {
ports: [
{ name: 'weave-net-metrics', targetPort: 6782, port: 6782 },
],
selector: { name: 'weave-net' },
clusterIP: 'None',
},
},
serviceMonitorWeaveNet: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'weave-net',
labels: {
'app.kubernetes.io/name': 'weave-net',
},
namespace: 'monitoring',
},
spec: {
jobLabel: 'app.kubernetes.io/name',
endpoints: [
{
port: 'weave-net-metrics',
path: '/metrics',
interval: '15s',
},
],
namespaceSelector: {
matchNames: [
'kube-system',
],
},
selector: {
matchLabels: {
'app.kubernetes.io/name': 'weave-net',
},
},
},
},
prometheusRuleWeaveNet: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: p._config.mixin.ruleLabels,
name: 'weave-net-rules',
namespace: p._config.namespace,
},
spec: {
groups: [{
name: 'weave-net',
rules: (import './alerts.libsonnet'),
}],
},
},
mixin+:: {
grafanaDashboards+:: {
'weave-net.json': (import './grafana-weave-net.json'),
'weave-net-cluster.json': (import './grafana-weave-net-cluster.json'),
},
},
},
}
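
Note that the ServiceMonitor above hardcodes metadata.namespace to 'monitoring' while the PrometheusRule follows p._config.namespace. If kube-prometheus is deployed elsewhere, a small override keeps the two aligned; a sketch reusing the same config field the file itself relies on:

{
  prometheus+: {
    serviceMonitorWeaveNet+: {
      metadata+: {
        // Follow the same namespace config as the PrometheusRule above.
        namespace: $.prometheus._config.namespace,
      },
    },
  },
}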


@@ -1,70 +0,0 @@
local windowsdashboards = import 'kubernetes-mixin/dashboards/windows.libsonnet';
local windowsrules = import 'kubernetes-mixin/rules/windows.libsonnet';
{
values+:: {
// This needs to follow the Prometheus naming convention, not the prometheus-operator one
windowsScrapeConfig+:: {
job_name: 'windows-exporter',
static_configs: [
{
targets: [error 'must provide targets array'],
},
],
relabel_configs: [
{
action: 'replace',
regex: '(.*)',
replacement: '$1',
source_labels: [
'__meta_kubernetes_endpoint_address_target_name',
],
target_label: 'instance',
},
],
},
grafana+:: {
dashboards+:: windowsdashboards {
_config: $.kubernetesControlPlane.mixin._config {
wmiExporterSelector: 'job="' + $.values.windowsScrapeConfig.job_name + '"',
},
}.grafanaDashboards,
},
},
kubernetesControlPlane+: {
mixin+:: {
prometheusRules+:: {
groups+: windowsrules {
_config: $.kubernetesControlPlane.mixin._config {
wmiExporterSelector: 'job="' + $.values.windowsScrapeConfig.job_name + '"',
},
}.prometheusRules.groups,
},
},
},
prometheus+: {
local p = self,
local sc = [$.values.windowsScrapeConfig],
prometheus+: {
spec+: {
additionalScrapeConfigs: {
name: 'prometheus-' + p._config.name + '-additional-scrape-config',
key: 'prometheus-additional.yaml',
},
},
},
windowsConfig: {
apiVersion: 'v1',
kind: 'Secret',
metadata: {
name: 'prometheus-' + p._config.name + '-additional-scrape-config',
namespace: p._config.namespace,
},
stringData: {
'prometheus-additional.yaml': std.manifestYamlDoc(sc),
},
},
},
}
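
The static_configs above deliberately raise an error until targets are supplied. A sketch of providing them; the address is a placeholder, and 9182 as the windows_exporter port is an assumption, not stated in this diff:

{
  values+:: {
    windowsScrapeConfig+:: {
      static_configs: [{
        targets: ['10.240.0.65:9182'],  // placeholder Windows node address
      }],
    },
  },
}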


@@ -0,0 +1,155 @@
local k = import 'ksonnet/ksonnet.beta.4/k.libsonnet';
{
_config+:: {
namespace: 'default',
versions+:: {
alertmanager: 'v0.21.0',
},
imageRepos+:: {
alertmanager: 'quay.io/prometheus/alertmanager',
},
alertmanager+:: {
name: 'main',
config: {
global: {
resolve_timeout: '5m',
},
inhibit_rules: [{
source_match: {
severity: 'critical',
},
target_match_re: {
severity: 'warning|info',
},
equal: ['namespace', 'alertname'],
}, {
source_match: {
severity: 'warning',
},
target_match_re: {
severity: 'info',
},
equal: ['namespace', 'alertname'],
}],
route: {
group_by: ['namespace'],
group_wait: '30s',
group_interval: '5m',
repeat_interval: '12h',
receiver: 'Default',
routes: [
{
receiver: 'Watchdog',
match: {
alertname: 'Watchdog',
},
},
{
receiver: 'Critical',
match: {
severity: 'critical',
},
},
],
},
receivers: [
{
name: 'Default',
},
{
name: 'Watchdog',
},
{
name: 'Critical',
},
],
},
replicas: 3,
},
},
alertmanager+:: {
secret:
local secret = k.core.v1.secret;
if std.type($._config.alertmanager.config) == 'object' then
secret.new('alertmanager-' + $._config.alertmanager.name, {})
.withStringData({ 'alertmanager.yaml': std.manifestYamlDoc($._config.alertmanager.config) }) +
secret.mixin.metadata.withNamespace($._config.namespace)
else
secret.new('alertmanager-' + $._config.alertmanager.name, { 'alertmanager.yaml': std.base64($._config.alertmanager.config) }) +
secret.mixin.metadata.withNamespace($._config.namespace),
serviceAccount:
local serviceAccount = k.core.v1.serviceAccount;
serviceAccount.new('alertmanager-' + $._config.alertmanager.name) +
serviceAccount.mixin.metadata.withNamespace($._config.namespace),
service:
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;
local alertmanagerPort = servicePort.newNamed('web', 9093, 'web');
service.new('alertmanager-' + $._config.alertmanager.name, { app: 'alertmanager', alertmanager: $._config.alertmanager.name }, alertmanagerPort) +
service.mixin.spec.withSessionAffinity('ClientIP') +
service.mixin.metadata.withNamespace($._config.namespace) +
service.mixin.metadata.withLabels({ alertmanager: $._config.alertmanager.name }),
serviceMonitor:
{
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'alertmanager',
namespace: $._config.namespace,
labels: {
'k8s-app': 'alertmanager',
},
},
spec: {
selector: {
matchLabels: {
alertmanager: $._config.alertmanager.name,
},
},
endpoints: [
{
port: 'web',
interval: '30s',
},
],
},
},
alertmanager:
{
apiVersion: 'monitoring.coreos.com/v1',
kind: 'Alertmanager',
metadata: {
name: $._config.alertmanager.name,
namespace: $._config.namespace,
labels: {
alertmanager: $._config.alertmanager.name,
},
},
spec: {
replicas: $._config.alertmanager.replicas,
version: $._config.versions.alertmanager,
image: $._config.imageRepos.alertmanager + ':' + $._config.versions.alertmanager,
nodeSelector: { 'kubernetes.io/os': 'linux' },
serviceAccountName: 'alertmanager-' + $._config.alertmanager.name,
securityContext: {
runAsUser: 1000,
runAsNonRoot: true,
fsGroup: 2000,
},
},
},
},
}
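
The secret above serializes `$._config.alertmanager.config` with `std.manifestYamlDoc` when it is an object, and treats it as a pre-rendered string otherwise, so overrides can stay plain jsonnet. A minimal sketch of customizing it in the release-0.6 `_config` style; the import path is illustrative:

(import 'kube-prometheus/kube-prometheus.libsonnet') +
{
  _config+:: {
    alertmanager+:: {
      replicas: 1,  // shrink the cluster for a dev environment
      config+: {
        // merged into the default config object rather than replacing it
        route+: { repeat_interval: '24h' },
      },
    },
  },
}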


@@ -0,0 +1,57 @@
{
prometheusAlerts+:: {
groups+: [
{
name: 'alertmanager.rules',
rules: [
{
alert: 'AlertmanagerConfigInconsistent',
annotations: {
message: |||
The configuration of the instances of the Alertmanager cluster `{{ $labels.namespace }}/{{ $labels.service }}` are out of sync.
{{ range printf "alertmanager_config_hash{namespace=\"%s\",service=\"%s\"}" $labels.namespace $labels.service | query }}
Configuration hash for pod {{ .Labels.pod }} is "{{ printf "%.f" .Value }}"
{{ end }}
|||,
},
expr: |||
count by(namespace,service) (count_values by(namespace,service) ("config_hash", alertmanager_config_hash{%(alertmanagerSelector)s})) != 1
||| % $._config,
'for': '5m',
labels: {
severity: 'critical',
},
},
{
alert: 'AlertmanagerFailedReload',
annotations: {
message: "Reloading Alertmanager's configuration has failed for {{ $labels.namespace }}/{{ $labels.pod}}.",
},
expr: |||
alertmanager_config_last_reload_successful{%(alertmanagerSelector)s} == 0
||| % $._config,
'for': '10m',
labels: {
severity: 'warning',
},
},
{
alert: 'AlertmanagerMembersInconsistent',
annotations: {
message: 'Alertmanager has not found all other members of the cluster.',
},
expr: |||
alertmanager_cluster_members{%(alertmanagerSelector)s}
!= on (service) GROUP_LEFT()
count by (service) (alertmanager_cluster_members{%(alertmanagerSelector)s})
||| % $._config,
'for': '5m',
labels: {
severity: 'critical',
},
},
],
},
],
},
}
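
The `||| % $._config` idiom used by every `expr` above is plain jsonnet string formatting: the `%(alertmanagerSelector)s` placeholders are filled from the keys of `$._config`. A self-contained sketch with an illustrative selector value:

local cfg = { alertmanagerSelector: 'job="alertmanager-main"' };
|||
  alertmanager_config_last_reload_successful{%(alertmanagerSelector)s} == 0
||| % cfg
// evaluates to the PromQL string:
//   alertmanager_config_last_reload_successful{job="alertmanager-main"} == 0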


@@ -0,0 +1,4 @@
(import 'alertmanager.libsonnet') +
(import 'general.libsonnet') +
(import 'node.libsonnet') +
(import 'prometheus-operator.libsonnet')


@@ -7,8 +7,7 @@
{
alert: 'TargetDown',
annotations: {
summary: 'One or more targets are unreachable.',
description: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.',
message: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.',
},
expr: '100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10',
'for': '10m',
@@ -19,8 +18,7 @@
{
alert: 'Watchdog',
annotations: {
summary: 'An alert that should always be firing to certify that Alertmanager is working properly.',
description: |||
message: |||
This is an alert meant to ensure that the entire alerting pipeline is functional.
This alert is always firing, therefore it should always be firing in Alertmanager
and always fire against a receiver. There are integrations with various notification


@@ -7,7 +7,7 @@
{
alert: 'NodeNetworkInterfaceFlapping',
annotations: {
message: 'Network interface "{{ $labels.device }}" changing its up status often on node-exporter {{ $labels.namespace }}/{{ $labels.pod }}',
message: 'Network interface "{{ $labels.device }}" changing its up status often on node-exporter {{ $labels.namespace }}/{{ $labels.pod }}"',
},
expr: |||
changes(node_network_up{%(nodeExporterSelector)s,%(hostNetworkInterfaceSelector)s}[2m]) > 2


@@ -0,0 +1,63 @@
{
prometheusAlerts+:: {
groups+: [
{
name: 'prometheus-operator',
rules: [
{
alert: 'PrometheusOperatorListErrors',
expr: |||
(sum by (controller,namespace) (rate(prometheus_operator_list_operations_failed_total{%(prometheusOperatorSelector)s}[10m])) / sum by (controller,namespace) (rate(prometheus_operator_list_operations_total{%(prometheusOperatorSelector)s}[10m]))) > 0.4
||| % $._config,
labels: {
severity: 'warning',
},
annotations: {
message: 'Errors while performing List operations in controller {{$labels.controller}} in {{$labels.namespace}} namespace.',
},
'for': '15m',
},
{
alert: 'PrometheusOperatorWatchErrors',
expr: |||
(sum by (controller,namespace) (rate(prometheus_operator_watch_operations_failed_total{%(prometheusOperatorSelector)s}[10m])) / sum by (controller,namespace) (rate(prometheus_operator_watch_operations_total{%(prometheusOperatorSelector)s}[10m]))) > 0.4
||| % $._config,
labels: {
severity: 'warning',
},
annotations: {
message: 'Errors while performing Watch operations in controller {{$labels.controller}} in {{$labels.namespace}} namespace.',
},
'for': '15m',
},
{
alert: 'PrometheusOperatorReconcileErrors',
expr: |||
rate(prometheus_operator_reconcile_errors_total{%(prometheusOperatorSelector)s}[5m]) > 0.1
||| % $._config,
labels: {
severity: 'warning',
},
annotations: {
message: 'Errors while reconciling {{ $labels.controller }} in {{ $labels.namespace }} Namespace.',
},
'for': '10m',
},
{
alert: 'PrometheusOperatorNodeLookupErrors',
expr: |||
rate(prometheus_operator_node_address_lookup_errors_total{%(prometheusOperatorSelector)s}[5m]) > 0.1
||| % $._config,
labels: {
severity: 'warning',
},
annotations: {
message: 'Errors while reconciling Prometheus in {{ $labels.namespace }} Namespace.',
},
'for': '10m',
},
],
},
],
},
}


@@ -1,213 +0,0 @@
local defaults = {
local defaults = self,
namespace: error 'must provide namespace',
image: error 'must provide image',
version: error 'must provide version',
resources: {
limits: { cpu: '100m', memory: '100Mi' },
requests: { cpu: '4m', memory: '100Mi' },
},
commonLabels:: {
'app.kubernetes.io/name': 'alertmanager',
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'alert-router',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
name: error 'must provide name',
config: {
global: {
resolve_timeout: '5m',
},
inhibit_rules: [{
source_match: {
severity: 'critical',
},
target_match_re: {
severity: 'warning|info',
},
equal: ['namespace', 'alertname'],
}, {
source_match: {
severity: 'warning',
},
target_match_re: {
severity: 'info',
},
equal: ['namespace', 'alertname'],
}],
route: {
group_by: ['namespace'],
group_wait: '30s',
group_interval: '5m',
repeat_interval: '12h',
receiver: 'Default',
routes: [
{ receiver: 'Watchdog', match: { alertname: 'Watchdog' } },
{ receiver: 'Critical', match: { severity: 'critical' } },
],
},
receivers: [
{ name: 'Default' },
{ name: 'Watchdog' },
{ name: 'Critical' },
],
},
replicas: 3,
mixin: {
ruleLabels: {},
_config: {
alertmanagerName: '{{ $labels.namespace }}/{{ $labels.pod}}',
alertmanagerClusterLabels: 'namespace,service',
alertmanagerSelector: 'job="alertmanager-' + defaults.name + '",namespace="' + defaults.namespace + '"',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
},
},
};
function(params) {
local am = self,
_config:: defaults + params,
// Safety check
assert std.isObject(am._config.resources),
assert std.isObject(am._config.mixin._config),
mixin:: (import 'github.com/prometheus/alertmanager/doc/alertmanager-mixin/mixin.libsonnet') +
(import 'github.com/kubernetes-monitoring/kubernetes-mixin/alerts/add-runbook-links.libsonnet') {
_config+:: am._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: am._config.commonLabels + am._config.mixin.ruleLabels,
name: 'alertmanager-' + am._config.name + '-rules',
namespace: am._config.namespace,
},
spec: {
local r = if std.objectHasAll(am.mixin, 'prometheusRules') then am.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(am.mixin, 'prometheusAlerts') then am.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
secret: {
apiVersion: 'v1',
kind: 'Secret',
type: 'Opaque',
metadata: {
name: 'alertmanager-' + am._config.name,
namespace: am._config.namespace,
labels: { alertmanager: am._config.name } + am._config.commonLabels,
},
stringData: {
'alertmanager.yaml': if std.type(am._config.config) == 'object'
then
std.manifestYamlDoc(am._config.config)
else
am._config.config,
},
},
serviceAccount: {
apiVersion: 'v1',
kind: 'ServiceAccount',
metadata: {
name: 'alertmanager-' + am._config.name,
namespace: am._config.namespace,
labels: { alertmanager: am._config.name } + am._config.commonLabels,
},
},
service: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: 'alertmanager-' + am._config.name,
namespace: am._config.namespace,
labels: { alertmanager: am._config.name } + am._config.commonLabels,
},
spec: {
ports: [
{ name: 'web', targetPort: 'web', port: 9093 },
],
selector: {
app: 'alertmanager',
alertmanager: am._config.name,
} + am._config.selectorLabels,
sessionAffinity: 'ClientIP',
},
},
serviceMonitor: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'alertmanager',
namespace: am._config.namespace,
labels: am._config.commonLabels,
},
spec: {
selector: {
matchLabels: {
alertmanager: am._config.name,
} + am._config.selectorLabels,
},
endpoints: [
{ port: 'web', interval: '30s' },
],
},
},
[if (defaults + params).replicas > 1 then 'podDisruptionBudget']: {
apiVersion: 'policy/v1beta1',
kind: 'PodDisruptionBudget',
metadata: {
name: 'alertmanager-' + am._config.name,
namespace: am._config.namespace,
labels: am._config.commonLabels,
},
spec: {
maxUnavailable: 1,
selector: {
matchLabels: {
alertmanager: am._config.name,
} + am._config.selectorLabels,
},
},
},
alertmanager: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'Alertmanager',
metadata: {
name: am._config.name,
namespace: am._config.namespace,
labels: {
alertmanager: am._config.name,
} + am._config.commonLabels,
},
spec: {
replicas: am._config.replicas,
version: am._config.version,
image: am._config.image,
podMetadata: {
labels: am._config.commonLabels,
},
resources: am._config.resources,
nodeSelector: { 'kubernetes.io/os': 'linux' },
serviceAccountName: 'alertmanager-' + am._config.name,
securityContext: {
runAsUser: 1000,
runAsNonRoot: true,
fsGroup: 2000,
},
},
},
}
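
This file evaluates to a function, so a consumer calls it with params that satisfy the `error 'must provide ...'` defaults. A minimal sketch; the relative import path and the version/image values are illustrative. Note the `[if ... then 'podDisruptionBudget']` computed field above: with a single replica the PDB object is simply not emitted.

local alertmanager = import 'alertmanager.libsonnet';
alertmanager({
  name: 'main',
  namespace: 'monitoring',
  version: '0.21.0',
  image: 'quay.io/prometheus/alertmanager:v0.21.0',
  replicas: 1,  // replicas <= 1 drops the podDisruptionBudget field entirely
})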


@@ -1,287 +0,0 @@
local krp = import './kube-rbac-proxy.libsonnet';
local defaults = {
local defaults = self,
namespace: error 'must provide namespace',
version: error 'must provide version',
image: error 'must provide image',
resources: {
requests: { cpu: '10m', memory: '20Mi' },
limits: { cpu: '20m', memory: '40Mi' },
},
commonLabels:: {
'app.kubernetes.io/name': 'blackbox-exporter',
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'exporter',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
configmapReloaderImage: 'jimmidyson/configmap-reload:v0.5.0',
port: 9115,
internalPort: 19115,
replicas: 1,
modules: {
http_2xx: {
prober: 'http',
http: {
preferred_ip_protocol: 'ip4',
},
},
http_post_2xx: {
prober: 'http',
http: {
method: 'POST',
preferred_ip_protocol: 'ip4',
},
},
tcp_connect: {
prober: 'tcp',
tcp: {
preferred_ip_protocol: 'ip4',
},
},
pop3s_banner: {
prober: 'tcp',
tcp: {
query_response: [
{ expect: '^+OK' },
],
tls: true,
tls_config: {
insecure_skip_verify: false,
},
preferred_ip_protocol: 'ip4',
},
},
ssh_banner: {
prober: 'tcp',
tcp: {
query_response: [
{ expect: '^SSH-2.0-' },
],
preferred_ip_protocol: 'ip4',
},
},
irc_banner: {
prober: 'tcp',
tcp: {
query_response: [
{ send: 'NICK prober' },
{ send: 'USER prober prober prober :prober' },
{ expect: 'PING :([^ ]+)', send: 'PONG ${1}' },
{ expect: '^:[^ ]+ 001' },
],
preferred_ip_protocol: 'ip4',
},
},
},
privileged:
local icmpModules = [self.modules[m] for m in std.objectFields(self.modules) if self.modules[m].prober == 'icmp'];
std.length(icmpModules) > 0,
};
function(params) {
local bb = self,
_config:: defaults + params,
// Safety check
assert std.isObject(bb._config.resources),
configuration: {
apiVersion: 'v1',
kind: 'ConfigMap',
metadata: {
name: 'blackbox-exporter-configuration',
namespace: bb._config.namespace,
labels: bb._config.commonLabels,
},
data: {
'config.yml': std.manifestYamlDoc({ modules: bb._config.modules }),
},
},
serviceAccount: {
apiVersion: 'v1',
kind: 'ServiceAccount',
metadata: {
name: 'blackbox-exporter',
namespace: bb._config.namespace,
},
},
clusterRole: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: 'blackbox-exporter',
},
rules: [
{
apiGroups: ['authentication.k8s.io'],
resources: ['tokenreviews'],
verbs: ['create'],
},
{
apiGroups: ['authorization.k8s.io'],
resources: ['subjectaccessreviews'],
verbs: ['create'],
},
],
},
clusterRoleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'blackbox-exporter',
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'blackbox-exporter',
},
subjects: [{
kind: 'ServiceAccount',
name: 'blackbox-exporter',
namespace: bb._config.namespace,
}],
},
deployment:
local blackboxExporter = {
name: 'blackbox-exporter',
image: bb._config.image,
args: [
'--config.file=/etc/blackbox_exporter/config.yml',
'--web.listen-address=:%d' % bb._config.internalPort,
],
ports: [{
name: 'http',
containerPort: bb._config.internalPort,
}],
resources: bb._config.resources,
securityContext: if bb._config.privileged then {
runAsNonRoot: false,
capabilities: { drop: ['ALL'], add: ['NET_RAW'] },
} else {
runAsNonRoot: true,
runAsUser: 65534,
},
volumeMounts: [{
mountPath: '/etc/blackbox_exporter/',
name: 'config',
readOnly: true,
}],
};
local reloader = {
name: 'module-configmap-reloader',
image: bb._config.configmapReloaderImage,
args: [
'--webhook-url=http://localhost:%d/-/reload' % bb._config.internalPort,
'--volume-dir=/etc/blackbox_exporter/',
],
resources: bb._config.resources,
securityContext: { runAsNonRoot: true, runAsUser: 65534 },
terminationMessagePath: '/dev/termination-log',
terminationMessagePolicy: 'FallbackToLogsOnError',
volumeMounts: [{
mountPath: '/etc/blackbox_exporter/',
name: 'config',
readOnly: true,
}],
};
local kubeRbacProxy = krp({
name: 'kube-rbac-proxy',
upstream: 'http://127.0.0.1:' + bb._config.internalPort + '/',
secureListenAddress: ':' + bb._config.port,
ports: [
{ name: 'https', containerPort: bb._config.port },
],
});
{
apiVersion: 'apps/v1',
kind: 'Deployment',
metadata: {
name: 'blackbox-exporter',
namespace: bb._config.namespace,
labels: bb._config.commonLabels,
},
spec: {
replicas: bb._config.replicas,
selector: { matchLabels: bb._config.selectorLabels },
template: {
metadata: {
labels: bb._config.commonLabels,
annotations: {
'kubectl.kubernetes.io/default-container': blackboxExporter.name,
},
},
spec: {
containers: [blackboxExporter, reloader, kubeRbacProxy],
nodeSelector: { 'kubernetes.io/os': 'linux' },
serviceAccountName: 'blackbox-exporter',
volumes: [{
name: 'config',
configMap: { name: 'blackbox-exporter-configuration' },
}],
},
},
},
},
service: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: 'blackbox-exporter',
namespace: bb._config.namespace,
labels: bb._config.commonLabels,
},
spec: {
ports: [{
name: 'https',
port: bb._config.port,
targetPort: 'https',
}, {
name: 'probe',
port: bb._config.internalPort,
targetPort: 'http',
}],
selector: bb._config.selectorLabels,
},
},
serviceMonitor:
{
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'blackbox-exporter',
namespace: bb._config.namespace,
labels: bb._config.commonLabels,
},
spec: {
endpoints: [{
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
interval: '30s',
path: '/metrics',
port: 'https',
scheme: 'https',
tlsConfig: {
insecureSkipVerify: true,
},
}],
selector: {
matchLabels: bb._config.selectorLabels,
},
},
},
}
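
Because `privileged` above is computed from the configured modules, adding any `icmp` prober flips the exporter's securityContext (allowing `NET_RAW`, dropping `runAsNonRoot`) with no extra wiring. A minimal sketch; the import path and versions are illustrative:

local blackboxExporter = import 'blackbox-exporter.libsonnet';
blackboxExporter({
  namespace: 'monitoring',
  version: '0.18.0',
  image: 'quay.io/prometheus/blackbox-exporter:v0.18.0',
  modules+: {
    // any module with prober 'icmp' makes the computed 'privileged' default true
    icmp: { prober: 'icmp', icmp: { preferred_ip_protocol: 'ip4' } },
  },
})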


@@ -1,103 +0,0 @@
local defaults = {
local defaults = self,
name: 'grafana',
namespace: error 'must provide namespace',
version: error 'must provide version',
// image: error 'must provide image',
imageRepos: 'grafana/grafana',
resources: {
requests: { cpu: '100m', memory: '100Mi' },
limits: { cpu: '200m', memory: '200Mi' },
},
commonLabels:: {
'app.kubernetes.io/name': defaults.name,
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'grafana',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
prometheusName: error 'must provide prometheus name',
dashboards: {},
// TODO(paulfantom): expose those to have a stable API. After kubernetes-grafana refactor those could probably be removed.
rawDashboards: {},
folderDashboards: {},
containers: [],
datasources: [],
config: {},
plugins: [],
};
function(params) {
local g = self,
_config:: defaults + params,
// Safety check
assert std.isObject(g._config.resources),
local glib = (import 'github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet') + {
_config+:: {
namespace: g._config.namespace,
versions+:: {
grafana: g._config.version,
},
imageRepos+:: {
grafana: g._config.imageRepos,
},
prometheus+:: {
name: g._config.prometheusName,
},
grafana+:: {
labels: g._config.commonLabels,
dashboards: g._config.dashboards,
resources: g._config.resources,
rawDashboards: g._config.rawDashboards,
folderDashboards: g._config.folderDashboards,
containers: g._config.containers,
config+: g._config.config,
plugins+: g._config.plugins,
} + (
// Conditionally overwrite default setting.
if std.length(g._config.datasources) > 0 then
{ datasources: g._config.datasources }
else {}
),
},
},
// Add the object only if the user passed a non-empty config
[if std.objectHas(params, 'config') && std.length(params.config) > 0 then 'config']: glib.grafana.config,
service: glib.grafana.service,
serviceAccount: glib.grafana.serviceAccount,
deployment: glib.grafana.deployment,
dashboardDatasources: glib.grafana.dashboardDatasources,
dashboardSources: glib.grafana.dashboardSources,
dashboardDefinitions: if std.length(g._config.dashboards) > 0 then {
apiVersion: 'v1',
kind: 'ConfigMapList',
items: glib.grafana.dashboardDefinitions,
},
serviceMonitor: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'grafana',
namespace: g._config.namespace,
labels: g._config.commonLabels,
},
spec: {
selector: {
matchLabels: {
'app.kubernetes.io/name': 'grafana',
},
},
endpoints: [{
port: 'http',
interval: '15s',
}],
},
},
}
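
Because the datasources override above only applies when `std.length(g._config.datasources) > 0`, a non-empty list replaces the kubernetes-grafana default outright instead of merging with it. A minimal sketch; the import path, version, and in-cluster URL are illustrative:

local grafana = import 'grafana.libsonnet';
grafana({
  namespace: 'monitoring',
  version: '7.5.4',
  prometheusName: 'k8s',
  datasources: [{
    name: 'prometheus',
    type: 'prometheus',
    access: 'proxy',
    url: 'http://prometheus-k8s.monitoring.svc:9090',
    isDefault: true,
  }],
})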


@@ -1,262 +0,0 @@
local relabelings = import '../addons/dropping-deprecated-metrics-relabelings.libsonnet';
local defaults = {
namespace: error 'must provide namespace',
commonLabels:: {
'app.kubernetes.io/name': 'kube-prometheus',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
mixin: {
ruleLabels: {},
_config: {
cadvisorSelector: 'job="kubelet", metrics_path="/metrics/cadvisor"',
kubeletSelector: 'job="kubelet", metrics_path="/metrics"',
kubeStateMetricsSelector: 'job="kube-state-metrics"',
nodeExporterSelector: 'job="node-exporter"',
kubeSchedulerSelector: 'job="kube-scheduler"',
kubeControllerManagerSelector: 'job="kube-controller-manager"',
kubeApiserverSelector: 'job="apiserver"',
podLabel: 'pod',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
diskDeviceSelector: 'device=~"mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+"',
hostNetworkInterfaceSelector: 'device!~"veth.+"',
},
},
};
function(params) {
local k8s = self,
_config:: defaults + params,
mixin:: (import 'github.com/kubernetes-monitoring/kubernetes-mixin/mixin.libsonnet') {
_config+:: k8s._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: k8s._config.commonLabels + k8s._config.mixin.ruleLabels,
name: 'kubernetes-monitoring-rules',
namespace: k8s._config.namespace,
},
spec: {
local r = if std.objectHasAll(k8s.mixin, 'prometheusRules') then k8s.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(k8s.mixin, 'prometheusAlerts') then k8s.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
serviceMonitorKubeScheduler: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'kube-scheduler',
namespace: k8s._config.namespace,
labels: { 'app.kubernetes.io/name': 'kube-scheduler' },
},
spec: {
jobLabel: 'app.kubernetes.io/name',
endpoints: [{
port: 'https-metrics',
interval: '30s',
scheme: 'https',
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
tlsConfig: { insecureSkipVerify: true },
}],
selector: {
matchLabels: { 'app.kubernetes.io/name': 'kube-scheduler' },
},
namespaceSelector: {
matchNames: ['kube-system'],
},
},
},
serviceMonitorKubelet: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'kubelet',
namespace: k8s._config.namespace,
labels: { 'app.kubernetes.io/name': 'kubelet' },
},
spec: {
jobLabel: 'app.kubernetes.io/name',
endpoints: [
{
port: 'https-metrics',
scheme: 'https',
interval: '30s',
honorLabels: true,
tlsConfig: { insecureSkipVerify: true },
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
metricRelabelings: relabelings,
relabelings: [{
sourceLabels: ['__metrics_path__'],
targetLabel: 'metrics_path',
}],
},
{
port: 'https-metrics',
scheme: 'https',
path: '/metrics/cadvisor',
interval: '30s',
honorLabels: true,
honorTimestamps: false,
tlsConfig: {
insecureSkipVerify: true,
},
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
relabelings: [{
sourceLabels: ['__metrics_path__'],
targetLabel: 'metrics_path',
}],
metricRelabelings: [
// Drop a bunch of metrics which are disabled but still sent, see
// https://github.com/google/cadvisor/issues/1925.
{
sourceLabels: ['__name__'],
regex: 'container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)',
action: 'drop',
},
],
},
{
port: 'https-metrics',
scheme: 'https',
path: '/metrics/probes',
interval: '30s',
honorLabels: true,
tlsConfig: { insecureSkipVerify: true },
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
relabelings: [{
sourceLabels: ['__metrics_path__'],
targetLabel: 'metrics_path',
}],
},
],
selector: {
matchLabels: { 'app.kubernetes.io/name': 'kubelet' },
},
namespaceSelector: {
matchNames: ['kube-system'],
},
},
},
serviceMonitorKubeControllerManager: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'kube-controller-manager',
namespace: k8s._config.namespace,
labels: { 'app.kubernetes.io/name': 'kube-controller-manager' },
},
spec: {
jobLabel: 'app.kubernetes.io/name',
endpoints: [{
port: 'https-metrics',
interval: '30s',
scheme: 'https',
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
tlsConfig: {
insecureSkipVerify: true,
},
metricRelabelings: relabelings + [
{
sourceLabels: ['__name__'],
regex: 'etcd_(debugging|disk|request|server).*',
action: 'drop',
},
],
}],
selector: {
matchLabels: { 'app.kubernetes.io/name': 'kube-controller-manager' },
},
namespaceSelector: {
matchNames: ['kube-system'],
},
},
},
serviceMonitorApiserver: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'kube-apiserver',
namespace: k8s._config.namespace,
labels: { 'app.kubernetes.io/name': 'apiserver' },
},
spec: {
jobLabel: 'component',
selector: {
matchLabels: {
component: 'apiserver',
provider: 'kubernetes',
},
},
namespaceSelector: {
matchNames: ['default'],
},
endpoints: [{
port: 'https',
interval: '30s',
scheme: 'https',
tlsConfig: {
caFile: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
serverName: 'kubernetes',
},
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
metricRelabelings: relabelings + [
{
sourceLabels: ['__name__'],
regex: 'etcd_(debugging|disk|server).*',
action: 'drop',
},
{
sourceLabels: ['__name__'],
regex: 'apiserver_admission_controller_admission_latencies_seconds_.*',
action: 'drop',
},
{
sourceLabels: ['__name__'],
regex: 'apiserver_admission_step_admission_latencies_seconds_.*',
action: 'drop',
},
{
sourceLabels: ['__name__', 'le'],
regex: 'apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)',
action: 'drop',
},
],
}],
},
},
serviceMonitorCoreDNS: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'coredns',
namespace: k8s._config.namespace,
labels: { 'app.kubernetes.io/name': 'coredns' },
},
spec: {
jobLabel: 'app.kubernetes.io/name',
selector: {
matchLabels: { 'app.kubernetes.io/name': 'kube-dns' },
},
namespaceSelector: {
matchNames: ['kube-system'],
},
endpoints: [{
port: 'metrics',
interval: '15s',
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
}],
},
},
}


@@ -1,63 +0,0 @@
local defaults = {
namespace: error 'must provide namespace',
image: 'quay.io/brancz/kube-rbac-proxy:v0.8.0',
ports: error 'must provide ports',
secureListenAddress: error 'must provide secureListenAddress',
upstream: error 'must provide upstream',
resources: {
requests: { cpu: '10m', memory: '20Mi' },
limits: { cpu: '20m', memory: '40Mi' },
},
tlsCipherSuites: [
'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256', // required by h2: http://golang.org/cl/30721
'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256', // required by h2: http://golang.org/cl/30721
// 'TLS_RSA_WITH_RC4_128_SHA', // insecure: https://access.redhat.com/security/cve/cve-2013-2566
// 'TLS_RSA_WITH_3DES_EDE_CBC_SHA', // insecure: https://access.redhat.com/articles/2548661
// 'TLS_RSA_WITH_AES_128_CBC_SHA', // disabled by h2
// 'TLS_RSA_WITH_AES_256_CBC_SHA', // disabled by h2
// 'TLS_RSA_WITH_AES_128_CBC_SHA256', // insecure: https://access.redhat.com/security/cve/cve-2013-0169
// 'TLS_RSA_WITH_AES_128_GCM_SHA256', // disabled by h2
// 'TLS_RSA_WITH_AES_256_GCM_SHA384', // disabled by h2
// 'TLS_ECDHE_ECDSA_WITH_RC4_128_SHA', // insecure: https://access.redhat.com/security/cve/cve-2013-2566
// 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA', // disabled by h2
// 'TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA', // disabled by h2
// 'TLS_ECDHE_RSA_WITH_RC4_128_SHA', // insecure: https://access.redhat.com/security/cve/cve-2013-2566
// 'TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA', // insecure: https://access.redhat.com/articles/2548661
// 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA', // disabled by h2
// 'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA', // disabled by h2
// 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256', // insecure: https://access.redhat.com/security/cve/cve-2013-0169
// 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256', // insecure: https://access.redhat.com/security/cve/cve-2013-0169
// disabled by h2 means: https://github.com/golang/net/blob/e514e69ffb8bc3c76a71ae40de0118d794855992/http2/ciphers.go
'TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384',
'TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384',
'TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305',
'TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305',
],
};
function(params) {
local krp = self,
_config:: defaults + params,
// Safety check
assert std.isObject(krp._config.resources),
name: krp._config.name,
image: krp._config.image,
args: [
'--logtostderr',
'--secure-listen-address=' + krp._config.secureListenAddress,
'--tls-cipher-suites=' + std.join(',', krp._config.tlsCipherSuites),
'--upstream=' + krp._config.upstream,
],
resources: krp._config.resources,
ports: krp._config.ports,
securityContext: {
runAsUser: 65532,
runAsGroup: 65532,
runAsNonRoot: true,
},
}


@@ -1,171 +0,0 @@
local krp = import './kube-rbac-proxy.libsonnet';
local defaults = {
local defaults = self,
name: 'kube-state-metrics',
namespace: error 'must provide namespace',
version: error 'must provide version',
image: error 'must provide image',
resources: {
requests: { cpu: '10m', memory: '190Mi' },
limits: { cpu: '100m', memory: '250Mi' },
},
scrapeInterval: '30s',
scrapeTimeout: '30s',
commonLabels:: {
'app.kubernetes.io/name': defaults.name,
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'exporter',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
mixin: {
ruleLabels: {},
_config: {
kubeStateMetricsSelector: 'job="' + defaults.name + '"',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
},
},
};
function(params) (import 'github.com/kubernetes/kube-state-metrics/jsonnet/kube-state-metrics/kube-state-metrics.libsonnet') {
local ksm = self,
_config:: defaults + params,
// Safety check
assert std.isObject(ksm._config.resources),
assert std.isObject(ksm._config.mixin._config),
name:: ksm._config.name,
namespace:: ksm._config.namespace,
version:: ksm._config.version,
image:: ksm._config.image,
commonLabels:: ksm._config.commonLabels,
podLabels:: ksm._config.selectorLabels,
mixin:: (import 'github.com/kubernetes/kube-state-metrics/jsonnet/kube-state-metrics-mixin/mixin.libsonnet') +
(import 'github.com/kubernetes-monitoring/kubernetes-mixin/alerts/add-runbook-links.libsonnet') {
_config+:: ksm._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: ksm._config.commonLabels + ksm._config.mixin.ruleLabels,
name: ksm._config.name + '-rules',
namespace: ksm._config.namespace,
},
spec: {
local r = if std.objectHasAll(ksm.mixin, 'prometheusRules') then ksm.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(ksm.mixin, 'prometheusAlerts') then ksm.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
service+: {
spec+: {
ports: [
{
name: 'https-main',
port: 8443,
targetPort: 'https-main',
},
{
name: 'https-self',
port: 9443,
targetPort: 'https-self',
},
],
},
},
local kubeRbacProxyMain = krp({
name: 'kube-rbac-proxy-main',
upstream: 'http://127.0.0.1:8081/',
secureListenAddress: ':8443',
ports: [
{ name: 'https-main', containerPort: 8443 },
],
resources+: {
limits+: { cpu: '40m' },
requests+: { cpu: '20m' },
},
}),
local kubeRbacProxySelf = krp({
name: 'kube-rbac-proxy-self',
upstream: 'http://127.0.0.1:8082/',
secureListenAddress: ':9443',
ports: [
{ name: 'https-self', containerPort: 9443 },
],
}),
deployment+: {
spec+: {
template+: {
metadata+: {
annotations+: {
'kubectl.kubernetes.io/default-container': 'kube-state-metrics',
},
},
spec+: {
containers: std.map(function(c) c {
ports:: null,
livenessProbe:: null,
readinessProbe:: null,
args: ['--host=127.0.0.1', '--port=8081', '--telemetry-host=127.0.0.1', '--telemetry-port=8082'],
resources: ksm._config.resources,
}, super.containers) + [kubeRbacProxyMain, kubeRbacProxySelf],
},
},
},
},
serviceMonitor:
{
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: ksm.name,
namespace: ksm._config.namespace,
labels: ksm._config.commonLabels,
},
spec: {
jobLabel: 'app.kubernetes.io/name',
selector: { matchLabels: ksm._config.selectorLabels },
endpoints: [
{
port: 'https-main',
scheme: 'https',
interval: ksm._config.scrapeInterval,
scrapeTimeout: ksm._config.scrapeTimeout,
honorLabels: true,
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
relabelings: [
{
regex: '(pod|service|endpoint|namespace)',
action: 'labeldrop',
},
],
tlsConfig: {
insecureSkipVerify: true,
},
},
{
port: 'https-self',
scheme: 'https',
interval: ksm._config.scrapeInterval,
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
tlsConfig: {
insecureSkipVerify: true,
},
},
],
},
},
}


@@ -1,2 +0,0 @@
(import 'general.libsonnet') +
(import 'node.libsonnet')


@@ -1,44 +0,0 @@
local defaults = {
name: 'kube-prometheus',
namespace: error 'must provide namespace',
commonLabels:: {
'app.kubernetes.io/name': 'kube-prometheus',
'app.kubernetes.io/component': 'exporter',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
mixin: {
ruleLabels: {},
_config: {
nodeExporterSelector: 'job="node-exporter"',
hostNetworkInterfaceSelector: 'device!~"veth.+"',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
},
},
};
function(params) {
local m = self,
_config:: defaults + params,
local alertsandrules = (import './alerts/alerts.libsonnet') + (import './rules/rules.libsonnet'),
mixin:: alertsandrules +
(import 'github.com/kubernetes-monitoring/kubernetes-mixin/alerts/add-runbook-links.libsonnet') {
_config+:: m._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: m._config.commonLabels + m._config.mixin.ruleLabels,
name: m._config.name + '-rules',
namespace: m._config.namespace,
},
spec: {
local r = if std.objectHasAll(m.mixin, 'prometheusRules') then m.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(m.mixin, 'prometheusAlerts') then m.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
}


@@ -1,248 +0,0 @@
local krp = import './kube-rbac-proxy.libsonnet';
local defaults = {
local defaults = self,
name: 'node-exporter',
namespace: error 'must provide namespace',
version: error 'must provide version',
image: error 'must provide image',
resources: {
requests: { cpu: '102m', memory: '180Mi' },
limits: { cpu: '250m', memory: '180Mi' },
},
listenAddress: '127.0.0.1',
port: 9100,
commonLabels:: {
'app.kubernetes.io/name': defaults.name,
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'exporter',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
mixin: {
ruleLabels: {},
_config: {
nodeExporterSelector: 'job="' + defaults.name + '"',
fsSpaceFillingUpCriticalThreshold: 15,
diskDeviceSelector: 'device=~"mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+"',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
},
},
};
function(params) {
local ne = self,
_config:: defaults + params,
// Safety check
assert std.isObject(ne._config.resources),
assert std.isObject(ne._config.mixin._config),
mixin:: (import 'github.com/prometheus/node_exporter/docs/node-mixin/mixin.libsonnet') +
(import 'github.com/kubernetes-monitoring/kubernetes-mixin/alerts/add-runbook-links.libsonnet') {
_config+:: ne._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: ne._config.commonLabels + ne._config.mixin.ruleLabels,
name: ne._config.name + '-rules',
namespace: ne._config.namespace,
},
spec: {
local r = if std.objectHasAll(ne.mixin, 'prometheusRules') then ne.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(ne.mixin, 'prometheusAlerts') then ne.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
clusterRoleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: ne._config.name,
labels: ne._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: ne._config.name,
},
subjects: [{
kind: 'ServiceAccount',
name: ne._config.name,
namespace: ne._config.namespace,
}],
},
clusterRole: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: ne._config.name,
labels: ne._config.commonLabels,
},
rules: [
{
apiGroups: ['authentication.k8s.io'],
resources: ['tokenreviews'],
verbs: ['create'],
},
{
apiGroups: ['authorization.k8s.io'],
resources: ['subjectaccessreviews'],
verbs: ['create'],
},
],
},
serviceAccount: {
apiVersion: 'v1',
kind: 'ServiceAccount',
metadata: {
name: ne._config.name,
namespace: ne._config.namespace,
labels: ne._config.commonLabels,
},
},
service: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: ne._config.name,
namespace: ne._config.namespace,
labels: ne._config.commonLabels,
},
spec: {
ports: [
{ name: 'https', targetPort: 'https', port: ne._config.port },
],
selector: ne._config.selectorLabels,
clusterIP: 'None',
},
},
serviceMonitor: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: ne._config.name,
namespace: ne._config.namespace,
labels: ne._config.commonLabels,
},
spec: {
jobLabel: 'app.kubernetes.io/name',
selector: {
matchLabels: ne._config.selectorLabels,
},
endpoints: [{
port: 'https',
scheme: 'https',
interval: '15s',
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
relabelings: [
{
action: 'replace',
regex: '(.*)',
replacement: '$1',
sourceLabels: ['__meta_kubernetes_pod_node_name'],
targetLabel: 'instance',
},
],
tlsConfig: {
insecureSkipVerify: true,
},
}],
},
},
daemonset:
local nodeExporter = {
name: ne._config.name,
image: ne._config.image,
args: [
'--web.listen-address=' + std.join(':', [ne._config.listenAddress, std.toString(ne._config.port)]),
'--path.sysfs=/host/sys',
'--path.rootfs=/host/root',
'--no-collector.wifi',
'--no-collector.hwmon',
'--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)',
'--collector.netclass.ignored-devices=^(veth.*)$',
'--collector.netdev.device-exclude=^(veth.*)$',
],
volumeMounts: [
{ name: 'sys', mountPath: '/host/sys', mountPropagation: 'HostToContainer', readOnly: true },
{ name: 'root', mountPath: '/host/root', mountPropagation: 'HostToContainer', readOnly: true },
],
resources: ne._config.resources,
};
local kubeRbacProxy = krp({
name: 'kube-rbac-proxy',
//image: krpImage,
upstream: 'http://127.0.0.1:' + ne._config.port + '/',
secureListenAddress: '[$(IP)]:' + ne._config.port,
// Keep `hostPort` here, rather than in the node-exporter container
// because Kubernetes mandates that if you define a `hostPort` then
// `containerPort` must match. In our case, we are splitting the
// host port and container port between the two containers.
// We'll keep the port specification here so that the named port
// used by the service is tied to the proxy container. We *could*
// forgo declaring the host port, however it is important to declare
// it so that the scheduler can decide if the pod is schedulable.
ports: [
{ name: 'https', containerPort: ne._config.port, hostPort: ne._config.port },
],
}) + {
env: [
{ name: 'IP', valueFrom: { fieldRef: { fieldPath: 'status.podIP' } } },
],
};
{
apiVersion: 'apps/v1',
kind: 'DaemonSet',
metadata: {
name: ne._config.name,
namespace: ne._config.namespace,
labels: ne._config.commonLabels,
},
spec: {
selector: { matchLabels: ne._config.selectorLabels },
updateStrategy: {
type: 'RollingUpdate',
rollingUpdate: { maxUnavailable: '10%' },
},
template: {
metadata: { labels: ne._config.commonLabels },
spec: {
nodeSelector: { 'kubernetes.io/os': 'linux' },
tolerations: [{
operator: 'Exists',
}],
containers: [nodeExporter, kubeRbacProxy],
volumes: [
{ name: 'sys', hostPath: { path: '/sys' } },
{ name: 'root', hostPath: { path: '/' } },
],
serviceAccountName: ne._config.name,
securityContext: {
runAsUser: 65534,
runAsNonRoot: true,
},
hostPID: true,
hostNetwork: true,
},
},
},
},
}
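
The `[$(IP)]:<port>` listen address above is not jsonnet templating: Kubernetes expands `$(IP)` at container start from the `IP` environment variable, which the downward API fills with `status.podIP`. An abridged sketch (the `--tls-cipher-suites` flag is omitted) of what the kube-rbac-proxy container renders to with the default port 9100:

{
  name: 'kube-rbac-proxy',
  args: [
    '--logtostderr',
    '--secure-listen-address=[$(IP)]:9100',  // expanded by the kubelet, not by jsonnet
    '--upstream=http://127.0.0.1:9100/',
  ],
  env: [
    { name: 'IP', valueFrom: { fieldRef: { fieldPath: 'status.podIP' } } },
  ],
  ports: [
    { name: 'https', containerPort: 9100, hostPort: 9100 },
  ],
}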


@@ -1,303 +0,0 @@
local defaults = {
local defaults = self,
name: 'prometheus-adapter',
namespace: error 'must provide namespace',
version: error 'must provide version',
image: error 'must provide image',
resources: {
requests: { cpu: '102m', memory: '180Mi' },
limits: { cpu: '250m', memory: '180Mi' },
},
replicas: 2,
listenAddress: '127.0.0.1',
port: 9100,
commonLabels:: {
'app.kubernetes.io/name': 'prometheus-adapter',
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'metrics-adapter',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
prometheusURL: error 'must provide prometheusURL',
config: {
resourceRules: {
cpu: {
containerQuery: 'sum(irate(container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!="",pod!=""}[5m])) by (<<.GroupBy>>)',
nodeQuery: 'sum(1 - irate(node_cpu_seconds_total{mode="idle"}[5m]) * on(namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:{<<.LabelMatchers>>}) by (<<.GroupBy>>) or sum(1 - irate(windows_cpu_time_total{mode="idle", job="windows-exporter",<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)',
resources: {
overrides: {
node: { resource: 'node' },
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
containerLabel: 'container',
},
memory: {
containerQuery: 'sum(container_memory_working_set_bytes{<<.LabelMatchers>>,container!="",pod!=""}) by (<<.GroupBy>>)',
nodeQuery: 'sum(node_memory_MemTotal_bytes{job="node-exporter",<<.LabelMatchers>>} - node_memory_MemAvailable_bytes{job="node-exporter",<<.LabelMatchers>>}) by (<<.GroupBy>>) or sum(windows_cs_physical_memory_bytes{job="windows-exporter",<<.LabelMatchers>>} - windows_memory_available_bytes{job="windows-exporter",<<.LabelMatchers>>}) by (<<.GroupBy>>)',
resources: {
overrides: {
instance: { resource: 'node' },
namespace: { resource: 'namespace' },
pod: { resource: 'pod' },
},
},
containerLabel: 'container',
},
window: '5m',
},
},
};
function(params) {
local pa = self,
_config:: defaults + params,
// Safety check
assert std.isObject(pa._config.resources),
apiService: {
apiVersion: 'apiregistration.k8s.io/v1',
kind: 'APIService',
metadata: {
name: 'v1beta1.metrics.k8s.io',
labels: pa._config.commonLabels,
},
spec: {
service: {
name: $.service.metadata.name,
namespace: pa._config.namespace,
},
group: 'metrics.k8s.io',
version: 'v1beta1',
insecureSkipTLSVerify: true,
groupPriorityMinimum: 100,
versionPriority: 100,
},
},
configMap: {
apiVersion: 'v1',
kind: 'ConfigMap',
metadata: {
name: 'adapter-config',
namespace: pa._config.namespace,
labels: pa._config.commonLabels,
},
data: { 'config.yaml': std.manifestYamlDoc(pa._config.config) },
},
serviceMonitor: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: pa._config.name,
namespace: pa._config.namespace,
labels: pa._config.commonLabels,
},
spec: {
selector: {
matchLabels: pa._config.selectorLabels,
},
endpoints: [
{
port: 'https',
interval: '30s',
scheme: 'https',
tlsConfig: {
insecureSkipVerify: true,
},
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
},
],
},
},
service: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: pa._config.name,
namespace: pa._config.namespace,
labels: pa._config.commonLabels,
},
spec: {
ports: [
{ name: 'https', targetPort: 6443, port: 443 },
],
selector: pa._config.selectorLabels,
},
},
deployment:
local c = {
name: pa._config.name,
image: pa._config.image,
args: [
'--cert-dir=/var/run/serving-cert',
'--config=/etc/adapter/config.yaml',
'--logtostderr=true',
'--metrics-relist-interval=1m',
'--prometheus-url=' + pa._config.prometheusURL,
'--secure-port=6443',
],
ports: [{ containerPort: 6443 }],
volumeMounts: [
{ name: 'tmpfs', mountPath: '/tmp', readOnly: false },
{ name: 'volume-serving-cert', mountPath: '/var/run/serving-cert', readOnly: false },
{ name: 'config', mountPath: '/etc/adapter', readOnly: false },
],
};
{
apiVersion: 'apps/v1',
kind: 'Deployment',
metadata: {
name: pa._config.name,
namespace: pa._config.namespace,
labels: pa._config.commonLabels,
},
spec: {
replicas: pa._config.replicas,
selector: { matchLabels: pa._config.selectorLabels },
strategy: {
rollingUpdate: {
maxSurge: 1,
maxUnavailable: 1,
},
},
template: {
metadata: { labels: pa._config.commonLabels },
spec: {
containers: [c],
serviceAccountName: $.serviceAccount.metadata.name,
nodeSelector: { 'kubernetes.io/os': 'linux' },
volumes: [
{ name: 'tmpfs', emptyDir: {} },
{ name: 'volume-serving-cert', emptyDir: {} },
{ name: 'config', configMap: { name: 'adapter-config' } },
],
},
},
},
},
serviceAccount: {
apiVersion: 'v1',
kind: 'ServiceAccount',
metadata: {
name: pa._config.name,
namespace: pa._config.namespace,
labels: pa._config.commonLabels,
},
},
clusterRole: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: pa._config.name,
labels: pa._config.commonLabels,
},
rules: [{
apiGroups: [''],
resources: ['nodes', 'namespaces', 'pods', 'services'],
verbs: ['get', 'list', 'watch'],
}],
},
clusterRoleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: pa._config.name,
labels: pa._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: $.clusterRole.metadata.name,
},
subjects: [{
kind: 'ServiceAccount',
name: $.serviceAccount.metadata.name,
namespace: pa._config.namespace,
}],
},
clusterRoleBindingDelegator: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'resource-metrics:system:auth-delegator',
labels: pa._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'system:auth-delegator',
},
subjects: [{
kind: 'ServiceAccount',
name: $.serviceAccount.metadata.name,
namespace: pa._config.namespace,
}],
},
clusterRoleServerResources: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: 'resource-metrics-server-resources',
labels: pa._config.commonLabels,
},
rules: [{
apiGroups: ['metrics.k8s.io'],
resources: ['*'],
verbs: ['*'],
}],
},
clusterRoleAggregatedMetricsReader: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: 'system:aggregated-metrics-reader',
labels: {
'rbac.authorization.k8s.io/aggregate-to-admin': 'true',
'rbac.authorization.k8s.io/aggregate-to-edit': 'true',
'rbac.authorization.k8s.io/aggregate-to-view': 'true',
} + pa._config.commonLabels,
},
rules: [{
apiGroups: ['metrics.k8s.io'],
resources: ['pods', 'nodes'],
verbs: ['get', 'list', 'watch'],
}],
},
roleBindingAuthReader: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBinding',
metadata: {
name: 'resource-metrics-auth-reader',
namespace: 'kube-system',
labels: pa._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'Role',
name: 'extension-apiserver-authentication-reader',
},
subjects: [{
kind: 'ServiceAccount',
name: $.serviceAccount.metadata.name,
namespace: pa._config.namespace,
}],
},
}
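
The `adapter-config` ConfigMap above is rendered with `std.manifestYamlDoc`, which serializes the jsonnet config object into the adapter's `config.yaml`. A self-contained sketch of that serialization (output abridged; `std.manifestYamlDoc` quotes scalars):

std.manifestYamlDoc({
  resourceRules: {
    cpu: { containerLabel: 'container' },
    window: '5m',
  },
})
// yields a YAML document along the lines of:
//   "resourceRules":
//     "cpu":
//       "containerLabel": "container"
//     "window": "5m"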


@@ -1,128 +0,0 @@
local krp = import './kube-rbac-proxy.libsonnet';
local prometheusOperator = import 'github.com/prometheus-operator/prometheus-operator/jsonnet/prometheus-operator/prometheus-operator.libsonnet';
local defaults = {
local defaults = self,
name: 'prometheus-operator',
namespace: error 'must provide namespace',
version: error 'must provide version',
image: error 'must provide image',
configReloaderImage: error 'must provide config reloader image',
resources: {
limits: { cpu: '200m', memory: '200Mi' },
requests: { cpu: '100m', memory: '100Mi' },
},
commonLabels:: {
'app.kubernetes.io/name': defaults.name,
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'controller',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
},
mixin: {
ruleLabels: {
role: 'alert-rules',
prometheus: defaults.name,
},
_config: {
prometheusOperatorSelector: 'job="prometheus-operator",namespace="' + defaults.namespace + '"',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
},
},
};
function(params)
local config = defaults + params;
// Safety check
assert std.isObject(config.resources);
prometheusOperator(config) {
local po = self,
// declare the variable as a field to allow overriding options and to keep a unified API across all components
_config:: config,
mixin:: (import 'github.com/prometheus-operator/prometheus-operator/jsonnet/mixin/mixin.libsonnet') +
(import 'github.com/kubernetes-monitoring/kubernetes-mixin/alerts/add-runbook-links.libsonnet') {
_config+:: po._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: po._config.commonLabels + po._config.mixin.ruleLabels,
name: po._config.name + '-rules',
namespace: po._config.namespace,
},
spec: {
local r = if std.objectHasAll(po.mixin, 'prometheusRules') then po.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(po.mixin, 'prometheusAlerts') then po.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
service+: {
spec+: {
ports: [
{
name: 'https',
port: 8443,
targetPort: 'https',
},
],
},
},
serviceMonitor+: {
spec+: {
endpoints: [
{
port: 'https',
scheme: 'https',
honorLabels: true,
bearerTokenFile: '/var/run/secrets/kubernetes.io/serviceaccount/token',
tlsConfig: {
insecureSkipVerify: true,
},
},
],
},
},
clusterRole+: {
rules+: [
{
apiGroups: ['authentication.k8s.io'],
resources: ['tokenreviews'],
verbs: ['create'],
},
{
apiGroups: ['authorization.k8s.io'],
resources: ['subjectaccessreviews'],
verbs: ['create'],
},
],
},
local kubeRbacProxy = krp({
name: 'kube-rbac-proxy',
upstream: 'http://127.0.0.1:8080/',
secureListenAddress: ':8443',
ports: [
{ name: 'https', containerPort: 8443 },
],
}),
deployment+: {
spec+: {
template+: {
spec+: {
containers+: [kubeRbacProxy],
},
},
},
},
}


@@ -1,376 +0,0 @@
local defaults = {
local defaults = self,
namespace: error 'must provide namespace',
version: error 'must provide version',
image: error 'must provide image',
resources: {
requests: { memory: '400Mi' },
},
name: error 'must provide name',
alertmanagerName: error 'must provide alertmanagerName',
namespaces: ['default', 'kube-system', defaults.namespace],
replicas: 2,
externalLabels: {},
commonLabels:: {
'app.kubernetes.io/name': 'prometheus',
'app.kubernetes.io/version': defaults.version,
'app.kubernetes.io/component': 'prometheus',
'app.kubernetes.io/part-of': 'kube-prometheus',
},
selectorLabels:: {
[labelName]: defaults.commonLabels[labelName]
for labelName in std.objectFields(defaults.commonLabels)
if !std.setMember(labelName, ['app.kubernetes.io/version'])
} + { prometheus: defaults.name },
ruleSelector: {
matchLabels: defaults.mixin.ruleLabels,
},
mixin: {
ruleLabels: {
role: 'alert-rules',
prometheus: defaults.name,
},
_config: {
prometheusSelector: 'job="prometheus-' + defaults.name + '",namespace="' + defaults.namespace + '"',
prometheusName: '{{$labels.namespace}}/{{$labels.pod}}',
thanosSelector: 'job="thanos-sidecar"',
runbookURLPattern: 'https://github.com/prometheus-operator/kube-prometheus/wiki/%s',
},
},
thanos: {},
};
function(params) {
local p = self,
_config:: defaults + params,
// Safety check
assert std.isObject(p._config.resources),
assert std.isObject(p._config.mixin._config),
mixin:: (import 'github.com/prometheus/prometheus/documentation/prometheus-mixin/mixin.libsonnet') +
(import 'github.com/kubernetes-monitoring/kubernetes-mixin/alerts/add-runbook-links.libsonnet') + (
if p._config.thanos != {} then
(import 'github.com/thanos-io/thanos/mixin/alerts/sidecar.libsonnet') + {
sidecar: {
selector: p._config.mixin._config.thanosSelector,
},
}
else {}
) {
_config+:: p._config.mixin._config,
},
prometheusRule: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
labels: p._config.commonLabels + p._config.mixin.ruleLabels,
name: 'prometheus-' + p._config.name + '-prometheus-rules',
namespace: p._config.namespace,
},
spec: {
local r = if std.objectHasAll(p.mixin, 'prometheusRules') then p.mixin.prometheusRules.groups else [],
local a = if std.objectHasAll(p.mixin, 'prometheusAlerts') then p.mixin.prometheusAlerts.groups else [],
groups: a + r,
},
},
serviceAccount: {
apiVersion: 'v1',
kind: 'ServiceAccount',
metadata: {
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
labels: p._config.commonLabels,
},
},
service: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
labels: { prometheus: p._config.name } + p._config.commonLabels,
},
spec: {
ports: [
{ name: 'web', targetPort: 'web', port: 9090 },
] +
(
if p._config.thanos != {} then
[{ name: 'grpc', port: 10901, targetPort: 10901 }]
else []
),
selector: { app: 'prometheus' } + p._config.selectorLabels,
sessionAffinity: 'ClientIP',
},
},
roleBindingSpecificNamespaces:
local newSpecificRoleBinding(namespace) = {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBinding',
metadata: {
name: 'prometheus-' + p._config.name,
namespace: namespace,
labels: p._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'Role',
name: 'prometheus-' + p._config.name,
},
subjects: [{
kind: 'ServiceAccount',
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
}],
};
{
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBindingList',
items: [newSpecificRoleBinding(x) for x in p._config.namespaces],
},
clusterRole: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRole',
metadata: {
name: 'prometheus-' + p._config.name,
labels: p._config.commonLabels,
},
rules: [
{
apiGroups: [''],
resources: ['nodes/metrics'],
verbs: ['get'],
},
{
nonResourceURLs: ['/metrics'],
verbs: ['get'],
},
],
},
roleConfig: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'Role',
metadata: {
name: 'prometheus-' + p._config.name + '-config',
namespace: p._config.namespace,
labels: p._config.commonLabels,
},
rules: [{
apiGroups: [''],
resources: ['configmaps'],
verbs: ['get'],
}],
},
roleBindingConfig: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleBinding',
metadata: {
name: 'prometheus-' + p._config.name + '-config',
namespace: p._config.namespace,
labels: p._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'Role',
name: 'prometheus-' + p._config.name + '-config',
},
subjects: [{
kind: 'ServiceAccount',
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
}],
},
clusterRoleBinding: {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'ClusterRoleBinding',
metadata: {
name: 'prometheus-' + p._config.name,
labels: p._config.commonLabels,
},
roleRef: {
apiGroup: 'rbac.authorization.k8s.io',
kind: 'ClusterRole',
name: 'prometheus-' + p._config.name,
},
subjects: [{
kind: 'ServiceAccount',
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
}],
},
roleSpecificNamespaces:
local newSpecificRole(namespace) = {
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'Role',
metadata: {
name: 'prometheus-' + p._config.name,
namespace: namespace,
labels: p._config.commonLabels,
},
rules: [
{
apiGroups: [''],
resources: ['services', 'endpoints', 'pods'],
verbs: ['get', 'list', 'watch'],
},
{
apiGroups: ['extensions'],
resources: ['ingresses'],
verbs: ['get', 'list', 'watch'],
},
{
apiGroups: ['networking.k8s.io'],
resources: ['ingresses'],
verbs: ['get', 'list', 'watch'],
},
],
};
{
apiVersion: 'rbac.authorization.k8s.io/v1',
kind: 'RoleList',
items: [newSpecificRole(x) for x in p._config.namespaces],
},
[if (defaults + params).replicas > 1 then 'podDisruptionBudget']: {
apiVersion: 'policy/v1beta1',
kind: 'PodDisruptionBudget',
metadata: {
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
labels: p._config.commonLabels,
},
spec: {
minAvailable: 1,
selector: {
matchLabels: {
prometheus: p._config.name,
} + p._config.selectorLabels,
},
},
},
prometheus: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'Prometheus',
metadata: {
name: p._config.name,
namespace: p._config.namespace,
labels: { prometheus: p._config.name } + p._config.commonLabels,
},
spec: {
replicas: p._config.replicas,
version: p._config.version,
image: p._config.image,
podMetadata: {
labels: p._config.commonLabels,
},
externalLabels: p._config.externalLabels,
serviceAccountName: 'prometheus-' + p._config.name,
serviceMonitorSelector: {},
podMonitorSelector: {},
probeSelector: {},
serviceMonitorNamespaceSelector: {},
podMonitorNamespaceSelector: {},
probeNamespaceSelector: {},
nodeSelector: { 'kubernetes.io/os': 'linux' },
ruleSelector: p._config.ruleSelector,
resources: p._config.resources,
alerting: {
alertmanagers: [{
namespace: p._config.namespace,
name: 'alertmanager-' + p._config.alertmanagerName,
port: 'web',
apiVersion: 'v2',
}],
},
securityContext: {
runAsUser: 1000,
runAsNonRoot: true,
fsGroup: 2000,
},
[if std.objectHas(params, 'thanos') then 'thanos']: p._config.thanos,
},
},
serviceMonitor: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata: {
name: 'prometheus-' + p._config.name,
namespace: p._config.namespace,
labels: p._config.commonLabels,
},
spec: {
selector: {
matchLabels: p._config.selectorLabels,
},
endpoints: [{
port: 'web',
interval: '30s',
}],
},
},
// Include thanos sidecar Service only if thanos config was passed by user
[if std.objectHas(params, 'thanos') && std.length(params.thanos) > 0 then 'serviceThanosSidecar']: {
apiVersion: 'v1',
kind: 'Service',
metadata+: {
name: 'prometheus-' + p._config.name + '-thanos-sidecar',
namespace: p._config.namespace,
labels+: p._config.commonLabels {
prometheus: p._config.name,
'app.kubernetes.io/component': 'thanos-sidecar',
},
},
spec+: {
ports: [
{ name: 'grpc', port: 10901, targetPort: 10901 },
{ name: 'http', port: 10902, targetPort: 10902 },
],
selector: p._config.selectorLabels {
prometheus: p._config.name,
'app.kubernetes.io/component': 'prometheus',
},
clusterIP: 'None',
},
},
// Include thanos sidecar ServiceMonitor only if thanos config was passed by user
[if std.objectHas(params, 'thanos') && std.length(params.thanos) > 0 then 'serviceMonitorThanosSidecar']: {
apiVersion: 'monitoring.coreos.com/v1',
kind: 'ServiceMonitor',
metadata+: {
name: 'thanos-sidecar',
namespace: p._config.namespace,
labels: p._config.commonLabels {
prometheus: p._config.name,
'app.kubernetes.io/component': 'thanos-sidecar',
},
},
spec+: {
jobLabel: 'app.kubernetes.io/component',
selector: {
matchLabels: {
prometheus: p._config.name,
'app.kubernetes.io/component': 'thanos-sidecar',
},
},
endpoints: [{
port: 'http',
interval: '30s',
}],
},
},
}
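
The three `[if std.objectHas(params, 'thanos') ...]` computed fields above mean the grpc Service port, `serviceThanosSidecar`, and `serviceMonitorThanosSidecar` are only emitted when a non-empty `thanos` object is passed through. A minimal sketch; the import path and the image/version values are illustrative:

local prometheus = import 'prometheus.libsonnet';
prometheus({
  name: 'k8s',
  namespace: 'monitoring',
  version: '2.26.0',
  image: 'quay.io/prometheus/prometheus:v2.26.0',
  alertmanagerName: 'main',
  thanos: {
    image: 'quay.io/thanos/thanos:v0.19.0',  // forwarded verbatim into spec.thanos
    version: 'v0.19.0',
  },
})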

Some files were not shown because too many files have changed in this diff.