Merge pull request #1206 from brancz/jsonnet
Convert kube-prometheus to jsonnet
.gitignore (vendored, new file)

@@ -0,0 +1 @@
+tmp/
Makefile

@@ -5,7 +5,7 @@ image:
 generate: image
 	@echo ">> Compiling assets and generating Kubernetes manifests"
-	docker run --rm -v `pwd`:/go/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus --workdir /go/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus po-jsonnet make generate-raw
+	docker run --rm -u=$(shell id -u $(USER)):$(shell id -g $(USER)) -v `pwd`:/go/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus --workdir /go/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus po-jsonnet make generate-raw
 
 generate-raw:
-	./hack/scripts/generate-manifests.sh
+	./hack/scripts/build-jsonnet.sh example-dist/base/kube-prometheus.jsonnet manifests
README.md

@@ -1,5 +1,7 @@
 # kube-prometheus
 
+> Note that everything in the `contrib/kube-prometheus/` directory is experimental and may change significantly at any time.
+
 This repository collects Kubernetes manifests, [Grafana](http://grafana.com/) dashboards, and
 [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/)
 combined with documentation and scripts to provide single-command deployments of end-to-end

@@ -46,16 +48,15 @@ install
 Simply run:
 
 ```bash
 export KUBECONFIG=<path> # defaults to "~/.kube/config"
 cd contrib/kube-prometheus/
 hack/cluster-monitoring/deploy
 ```
 
-After all pods are ready, you can reach:
-
-* Prometheus UI on node port `30900`
-* Alertmanager UI on node port `30903`
-* Grafana on node port `30902`
+After all pods are ready, you can reach each of the UIs by port-forwarding:
+
+* Prometheus UI: `kubectl -n monitoring port-forward prometheus-k8s-0 9090`
+* Alertmanager UI: `kubectl -n monitoring port-forward alertmanager-main-0 9093`
+* Grafana: `kubectl -n monitoring port-forward $(kubectl get pods -n monitoring -lapp=grafana -ojsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}') 3000`
 
 To tear it all down again, run:

@@ -63,9 +64,53 @@ To tear it all down again, run:
 hack/cluster-monitoring/teardown
 ```
 
+## Customizing
+
+As everyone's infrastructure is slightly different, different organizations have different requirements, and there may be modifications you want to make to kube-prometheus to fit your needs.
+
+The kube-prometheus stack is intended to be a jsonnet library for organizations to consume and use in their own infrastructure repositories. Below is an example of how it can be used to deploy the stack properly on minikube.
+
+The three "distribution" examples we have assembled can be found in:
+
+* `example-dist/base`: contains the plain kube-prometheus stack for organizations to build on.
+* `example-dist/kubeadm`: contains the kube-prometheus stack with slight modifications to properly monitor kubeadm clusters, and exposes the UIs on NodePorts for demonstration purposes.
+* `example-dist/bootkube`: contains the kube-prometheus stack with slight modifications to work properly on clusters created with bootkube.
+
+The examples in `example-dist/` are purely meant for demonstration purposes; the `kube-prometheus.jsonnet` file should live in your organization's infrastructure repository and use the kube-prometheus library provided here.
+Examples of additional modifications you may want to make could be adding an `Ingress` object for each of the UIs. The point is that, as opposed to other solutions out there, this library does not need to expose every possible customization option; customization is entirely up to the user.
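For illustration, a hypothetical `kube-prometheus.jsonnet` in your own repository could merge such an `Ingress` into the rendered objects. This is a minimal sketch in the same style as the `example-dist/` files below; the host, backend, and output file name are illustrative, not part of this PR:

```jsonnet
local kubePrometheus = import "kube-prometheus.libsonnet";

local namespace = "monitoring";

// A hypothetical Ingress for the Grafana UI; host and backend are placeholders.
local grafanaIngress = {
  apiVersion: "extensions/v1beta1",
  kind: "Ingress",
  metadata: {
    name: "grafana",
    namespace: namespace,
  },
  spec: {
    rules: [{
      host: "grafana.example.com",
      http: {
        paths: [{
          backend: {serviceName: "grafana", servicePort: "http"},
        }],
      },
    }],
  },
};

// Merge the extra manifest into the stack and render everything to YAML.
local objects = kubePrometheus.new(namespace) + {
  "grafana/grafana-ingress.yaml": grafanaIngress,
};

{[path]: std.manifestYamlDoc(objects[path]) for path in std.objectFields(objects)}
```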
+### minikube kubeadm example
+
+See `example-dist/kubeadm` for an example of deploying on minikube, using the minikube kubeadm bootstrapper. The `example-dist/kubeadm/kube-prometheus.jsonnet` file renders the kube-prometheus manifests using jsonnet and then merges the result with kubeadm specifics, such as information on how to monitor kube-controller-manager and kube-scheduler as created by kubeadm. In addition, for demonstration purposes, it converts the services selecting Prometheus, Alertmanager, and Grafana to NodePort services.
+
+Let's give that a try, and create a minikube cluster:
+
+```
+minikube delete && minikube start --kubernetes-version=v1.9.6 --memory=4096 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
+```
+
+Then we can render the manifests for kubeadm (because we are using the minikube kubeadm bootstrapper):
+
+```
+docker run --rm \
+    -v `pwd`:/go/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus \
+    --workdir /go/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus \
+    po-jsonnet \
+    ./hack/scripts/build-jsonnet.sh example-dist/kubeadm/kube-prometheus.jsonnet example-dist/kubeadm/manifests
+```
+
+> Note that the `po-jsonnet` Docker image is built using [this Dockerfile](/scripts/jsonnet/Dockerfile); you can also build it using `make image` from the `contrib/kube-prometheus` folder.
+
+Then the stack can be deployed using:
+
+```
+hack/cluster-monitoring/deploy example-dist/kubeadm
+```
 ## Monitoring custom services
 
-The example manifests in [manifests/examples/example-app](/contrib/kube-prometheus/manifests/examples/example-app)
+The example manifests in [examples/example-app](/contrib/kube-prometheus/examples/example-app)
 deploy a fake service exposing Prometheus metrics. They additionally define a new Prometheus
 server and a [`ServiceMonitor`](https://github.com/coreos/prometheus-operator/blob/master/Documentation/design.md#servicemonitor),
 which specifies how the example service should be monitored.
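For reference, a `ServiceMonitor` is itself only a small custom resource. Here is a minimal sketch in the raw-object jsonnet style used throughout this PR; the labels, selector, and port name are illustrative, not copied from the example app:

```jsonnet
// A hypothetical ServiceMonitor: scrape every Service labeled
// app=example-app on its port named "web".
{
  apiVersion: "monitoring.coreos.com/v1",
  kind: "ServiceMonitor",
  metadata: {
    name: "example-app",
    labels: {team: "frontend"},
  },
  spec: {
    selector: {
      matchLabels: {app: "example-app"},
    },
    endpoints: [
      {port: "web"},
    ],
  },
}
```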
@@ -76,10 +121,13 @@ manage its life cycle.
 hack/example-service-monitoring/deploy
 ```
 
-After all pods are ready you can reach the Prometheus server on node port `30100` and observe
-how it monitors the service as specified. Same as before, this Prometheus server automatically
-discovers the Alertmanager cluster deployed in the [Monitoring Kubernetes](#Monitoring-Kubernetes)
-section.
+After all pods are ready, you can reach the Prometheus server in the same way as the Prometheus server above:
+
+```bash
+kubectl port-forward prometheus-frontend-0 9090
+```
+
+Then you can access Prometheus through `http://localhost:9090/`.
 
 Teardown:
example-dist/base/kube-prometheus.jsonnet (new file)

@@ -0,0 +1,6 @@
local kubePrometheus = import "kube-prometheus.libsonnet";

local namespace = "monitoring";
local objects = kubePrometheus.new(namespace);

{[path]: std.manifestYamlDoc(objects[path]) for path in std.objectFields(objects)}
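For readers new to jsonnet, the final line is an object comprehension: `kubePrometheus.new(namespace)` returns an object mapping manifest paths to Kubernetes objects, and the comprehension re-maps each entry to its YAML rendering. A minimal sketch of the same pattern, with illustrative field names:

```jsonnet
local objects = {"a.yaml": {kind: "ConfigMap"}, "b.yaml": {kind: "Service"}};

// Evaluates to an object mapping each path to a YAML string,
// e.g. {"a.yaml": "kind: ConfigMap", "b.yaml": "kind: Service"}.
{[path]: std.manifestYamlDoc(objects[path]) for path in std.objectFields(objects)}
```

`hack/scripts/build-jsonnet.sh`, added later in this diff, consumes this path-to-YAML map and writes one file per key.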
example-dist/bootkube/.gitignore (vendored, new file)

@@ -0,0 +1,2 @@
tmp/
manifests/
example-dist/bootkube/kube-prometheus.jsonnet (new file)

@@ -0,0 +1,36 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;
local kubePrometheus = import "kube-prometheus.libsonnet";

local namespace = "monitoring";

local controllerManagerService =
  service.new("kube-controller-manager-prometheus-discovery", {"k8s-app": "kube-controller-manager"}, servicePort.newNamed("http-metrics", 10252, 10252)) +
  service.mixin.metadata.withNamespace("kube-system") +
  service.mixin.metadata.withLabels({"k8s-app": "kube-controller-manager"});

local schedulerService =
  service.new("kube-scheduler-prometheus-discovery", {"k8s-app": "kube-scheduler"}, servicePort.newNamed("http-metrics", 10251, 10251)) +
  service.mixin.metadata.withNamespace("kube-system") +
  service.mixin.metadata.withLabels({"k8s-app": "kube-scheduler"});

local kubeDNSService =
  service.new("kube-dns-prometheus-discovery", {"k8s-app": "kube-dns"}, [servicePort.newNamed("http-metrics-skydns", 10055, 10055), servicePort.newNamed("http-metrics-dnsmasq", 10054, 10054)]) +
  service.mixin.metadata.withNamespace("kube-system") +
  service.mixin.metadata.withLabels({"k8s-app": "kube-dns"});

local objects = kubePrometheus.new(namespace) +
  {
    "prometheus-k8s/prometheus-k8s-service.yaml"+:
      service.mixin.spec.withPorts(servicePort.newNamed("web", 9090, "web") + servicePort.withNodePort(30900)) +
      service.mixin.spec.withType("NodePort"),
    "alertmanager-main/alertmanager-main-service.yaml"+:
      service.mixin.spec.withPorts(servicePort.newNamed("web", 9093, "web") + servicePort.withNodePort(30903)) +
      service.mixin.spec.withType("NodePort"),
    "grafana/grafana-service.yaml"+:
      service.mixin.spec.withPorts(servicePort.newNamed("http", 3000, "http") + servicePort.withNodePort(30902)) +
      service.mixin.spec.withType("NodePort"),
    "prometheus-k8s/kube-controller-manager-prometheus-discovery-service.yaml": controllerManagerService,
    "prometheus-k8s/kube-scheduler-prometheus-discovery-service.yaml": schedulerService,
    "prometheus-k8s/kube-dns-prometheus-discovery-service.yaml": kubeDNSService,
  };

{[path]: std.manifestYamlDoc(objects[path]) for path in std.objectFields(objects)}
example-dist/kubeadm/.gitignore (vendored, new file)

@@ -0,0 +1,2 @@
tmp/
manifests/
example-dist/kubeadm/kube-prometheus.jsonnet (new file)

@@ -0,0 +1,31 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;
local kubePrometheus = import "kube-prometheus.libsonnet";

local namespace = "monitoring";

local controllerManagerService =
  service.new("kube-controller-manager-prometheus-discovery", {component: "kube-controller-manager"}, servicePort.newNamed("http-metrics", 10252, 10252)) +
  service.mixin.metadata.withNamespace("kube-system") +
  service.mixin.metadata.withLabels({"k8s-app": "kube-controller-manager"});

local schedulerService =
  service.new("kube-scheduler-prometheus-discovery", {component: "kube-scheduler"}, servicePort.newNamed("http-metrics", 10251, 10251)) +
  service.mixin.metadata.withNamespace("kube-system") +
  service.mixin.metadata.withLabels({"k8s-app": "kube-scheduler"});

local objects = kubePrometheus.new(namespace) +
  {
    "prometheus-k8s/prometheus-k8s-service.yaml"+:
      service.mixin.spec.withPorts(servicePort.newNamed("web", 9090, "web") + servicePort.withNodePort(30900)) +
      service.mixin.spec.withType("NodePort"),
    "alertmanager-main/alertmanager-main-service.yaml"+:
      service.mixin.spec.withPorts(servicePort.newNamed("web", 9093, "web") + servicePort.withNodePort(30903)) +
      service.mixin.spec.withType("NodePort"),
    "grafana/grafana-service.yaml"+:
      service.mixin.spec.withPorts(servicePort.newNamed("http", 3000, "http") + servicePort.withNodePort(30902)) +
      service.mixin.spec.withType("NodePort"),
    "prometheus-k8s/kube-controller-manager-prometheus-discovery-service.yaml": controllerManagerService,
    "prometheus-k8s/kube-scheduler-prometheus-discovery-service.yaml": schedulerService,
  };

{[path]: std.manifestYamlDoc(objects[path]) for path in std.objectFields(objects)}
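Both example-dist files rely on jsonnet's `+:` field syntax: when two objects are added, a field declared with `+:` is merged into (rather than replacing) the same-named field of the left-hand object. This is what lets the examples layer `withType("NodePort")` onto the services produced by the library without restating them. A minimal sketch of the semantics, with illustrative names:

```jsonnet
local base = {svc: {kind: "Service", spec: {type: "ClusterIP", ports: [{port: 9090}]}}};

// `svc+:` and `spec+:` merge into the existing nested objects, so `ports`
// is preserved while `type` is overridden; evaluates to a NodePort service.
base + {svc+: {spec+: {type: "NodePort"}}}
```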
hack/cluster-monitoring/deploy

@@ -1,40 +1,24 @@
 #!/usr/bin/env bash
 
 if [ -z "${KUBECONFIG}" ]; then
     export KUBECONFIG=~/.kube/config
 fi
 
+manifest_prefix=${1-.}
+
-# CAUTION - setting NAMESPACE will deploy most components to the given namespace
-# however some are hardcoded to 'monitoring'. Only use if you have reviewed all manifests.
+kubectl create namespace monitoring
-
-if [ -z "${NAMESPACE}" ]; then
-    NAMESPACE=monitoring
-fi
-
-kubectl create namespace "$NAMESPACE"
-
-kctl() {
-    kubectl --namespace "$NAMESPACE" "$@"
-}
 
-kctl apply -f manifests/prometheus-operator
+kubectl apply -f ${manifest_prefix}/manifests/prometheus-operator/
 
 # Wait for CRDs to be ready.
 printf "Waiting for Operator to register custom resource definitions..."
-until kctl get customresourcedefinitions servicemonitors.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
-until kctl get customresourcedefinitions prometheuses.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
-until kctl get customresourcedefinitions alertmanagers.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
-until kctl get servicemonitors.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
-until kctl get prometheuses.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
-until kctl get alertmanagers.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
+until kubectl get customresourcedefinitions servicemonitors.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
+until kubectl get customresourcedefinitions prometheuses.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
+until kubectl get customresourcedefinitions alertmanagers.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
+until kubectl get servicemonitors.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
+until kubectl get prometheuses.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
+until kubectl get alertmanagers.monitoring.coreos.com > /dev/null 2>&1; do sleep 1; printf "."; done
 echo "done!"
 
-kctl apply -f manifests/node-exporter
-kctl apply -f manifests/kube-state-metrics
-kctl apply -f manifests/grafana/grafana-credentials.yaml
-kctl apply -f manifests/grafana
-find manifests/prometheus -type f ! -name prometheus-k8s-roles.yaml ! -name prometheus-k8s-role-bindings.yaml -exec kubectl --namespace "$NAMESPACE" apply -f {} \;
-kubectl apply -f manifests/prometheus/prometheus-k8s-roles.yaml
-kubectl apply -f manifests/prometheus/prometheus-k8s-role-bindings.yaml
-kctl apply -f manifests/alertmanager/
+kubectl apply -f ${manifest_prefix}/manifests/node-exporter/
+kubectl apply -f ${manifest_prefix}/manifests/kube-state-metrics/
+kubectl apply -f ${manifest_prefix}/manifests/grafana/
+kubectl apply -f ${manifest_prefix}/manifests/prometheus-k8s/
+kubectl apply -f ${manifest_prefix}/manifests/alertmanager-main/
(deleted file)

@@ -1,17 +0,0 @@
#!/usr/bin/env bash

# We assume that the kubelet uses token authN and authZ, as otherwise
# Prometheus needs a client certificate, which gives it full access to the
# kubelet, rather than just the metrics. Token authN and authZ allows more
# fine-grained and easier access control. Simply start minikube with the
# following command (you can of course adapt the version and memory to your needs):
#
# $ minikube delete && minikube start --kubernetes-version=v1.9.1 --memory=4096 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
#
# In future versions of minikube and kubeadm this will be the default, but for
# the time being, we will have to configure it ourselves.

hack/cluster-monitoring/deploy

kubectl --namespace=kube-system apply -f manifests/k8s/kubeadm/
(deleted file)

@@ -1,6 +0,0 @@
#!/usr/bin/env bash

hack/cluster-monitoring/teardown

kubectl --namespace=kube-system delete -f manifests/k8s/minikube
(deleted file)

@@ -1,6 +0,0 @@
#!/usr/bin/env bash

hack/cluster-monitoring/deploy

kubectl apply -f manifests/k8s/self-hosted
(deleted file)

@@ -1,6 +0,0 @@
#!/usr/bin/env bash

hack/cluster-monitoring/teardown

kubectl delete -f manifests/k8s/self-hosted
hack/cluster-monitoring/teardown

@@ -1,30 +1,4 @@
 #!/usr/bin/env bash
 
-if [ -z "${KUBECONFIG}" ]; then
-    export KUBECONFIG=~/.kube/config
-fi
-
-# CAUTION - NAMESPACE must match its value when deploy script was run.
-# Some resources are always deployed to the monitoring namespace.
-
-if [ -z "${NAMESPACE}" ]; then
-    NAMESPACE=monitoring
-fi
-
-kctl() {
-    kubectl --namespace "$NAMESPACE" "$@"
-}
-
-kctl delete -f manifests/node-exporter
-kctl delete -f manifests/kube-state-metrics
-kctl delete -f manifests/grafana
-find manifests/prometheus -type f ! -name prometheus-k8s-roles.yaml ! -name prometheus-k8s-role-bindings.yaml -exec kubectl --namespace "$NAMESPACE" delete -f {} \;
-kubectl delete -f manifests/prometheus/prometheus-k8s-roles.yaml
-kubectl delete -f manifests/prometheus/prometheus-k8s-role-bindings.yaml
-kctl delete -f manifests/alertmanager
-
-# Hack: wait a bit to let the controller delete the deployed Prometheus server.
-sleep 5
-
-kctl delete -f manifests/prometheus-operator
+kubectl delete namespace monitoring
hack/example-service-monitoring/deploy

@@ -1,3 +1,3 @@
 #!/usr/bin/env bash
 
-kubectl apply -f manifests/examples/example-app
+kubectl apply -f examples/example-app
hack/example-service-monitoring/teardown

@@ -1,3 +1,3 @@
 #!/usr/bin/env bash
 
-kubectl delete -f manifests/examples/example-app
+kubectl delete -f examples/example-app
hack/scripts/build-jsonnet.sh (new executable file)

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -e
set -x

jsonnet="${1-kube-prometheus.jsonnet}"
prefix="${2-manifests}"
json="tmp/manifests.json"

rm -rf ${prefix}
mkdir -p $(dirname "${json}")
jsonnet \
    -J $GOPATH/src/github.com/ksonnet/ksonnet-lib \
    -J $GOPATH/src/github.com/grafana/grafonnet-lib \
    -J $GOPATH/src/github.com/coreos/prometheus-operator/contrib/kube-prometheus/jsonnet \
    -J $GOPATH/src/github.com/brancz/kubernetes-grafana/src/kubernetes-jsonnet \
    ${jsonnet} > ${json}

files=$(jq -r 'keys[]' ${json})

for file in ${files}; do
    dir=$(dirname "${file}")
    path="${prefix}/${dir}"
    mkdir -p ${path}
    jq -r ".[\"${file}\"]" ${json} | gojsontoyaml -yamltojson | gojsontoyaml > "${prefix}/${file}"
done
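In short, `build-jsonnet.sh` evaluates the given jsonnet entry point into a single JSON object whose keys are relative manifest paths, then writes each key out as a YAML file under the output prefix. For example, `./hack/scripts/build-jsonnet.sh example-dist/base/kube-prometheus.jsonnet manifests` produces files such as `manifests/grafana/grafana-service.yaml` and `manifests/prometheus-k8s/prometheus-k8s.yaml`, matching the keys emitted by `kube-prometheus.libsonnet` below.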
hack/scripts/generate-alertmanager-config-secret.sh (deleted file)

@@ -1,11 +0,0 @@
#!/bin/bash

cat <<-EOF
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-main
data:
  alertmanager.yaml: $(cat assets/alertmanager/alertmanager.yaml | base64 --wrap=0)
EOF
hack/scripts/generate-dashboards-configmap.sh (deleted file)

@@ -1,39 +0,0 @@
#!/bin/bash
set -e
set +x

cat <<-EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-definitions-0
data:
EOF

for f in assets/grafana/generated/*-dashboard.json
do
  rm -rf $f
done

virtualenv -p python3 .env 2>&1 > /dev/null
source .env/bin/activate 2>&1 > /dev/null
pip install -Ur requirements.txt 2>&1 > /dev/null
for f in assets/grafana/*.dashboard.py
do
  basefilename=$(basename $f)
  JSON_FILENAME="assets/grafana/generated/${basefilename%%.*}-dashboard.json"
  generate-dashboard $f -o $JSON_FILENAME 2>&1 > /dev/null
done

cp assets/grafana/raw-json-dashboards/*-dashboard.json assets/grafana/generated/

for f in assets/grafana/generated/*-dashboard.json
do
  basefilename=$(basename $f)
  echo "  $basefilename: |+"
  if [ "$basefilename" = "etcd-dashboard.json" ]; then
    hack/scripts/wrap-dashboard.sh $f prometheus-etcd | sed "s/^/    /g"
  else
    hack/scripts/wrap-dashboard.sh $f prometheus | sed "s/^/    /g"
  fi
done
hack/scripts/generate-grafana-credentials-secret.sh (deleted file)

@@ -1,20 +0,0 @@
#!/bin/bash

if [ "$#" -ne 2 ]; then
  echo "Usage: $0 user password"
  exit 1
fi

user=$1
password=$2

cat <<-EOF
apiVersion: v1
kind: Secret
metadata:
  name: grafana-credentials
data:
  user: $(echo -n ${user} | base64 --wrap=0)
  password: $(echo -n ${password} | base64 --wrap=0)
EOF
hack/scripts/generate-manifests.sh (deleted file)

@@ -1,26 +0,0 @@
#!/bin/bash
set -e
set +x

# Generate Alert Rules ConfigMap
hack/scripts/generate-rules-configmap.sh > manifests/prometheus/prometheus-k8s-rules.yaml

# Generate Dashboard ConfigMap
hack/scripts/generate-dashboards-configmap.sh > manifests/grafana/grafana-dashboard-definitions.yaml

# Generate Dashboard ConfigMap with configmap-generator tool
# Max Size per ConfigMap: 240000
# Input dir: assets/grafana
# output file: manifests/grafana/grafana-dashboards.yaml
# grafana deployment output file: manifests/grafana/grafana-deployment.yaml
test -f manifests/grafana/grafana-dashboard-definitions.yaml && rm -f manifests/grafana/grafana-dashboard-definitions.yaml
test -f manifests/grafana/grafana-deployment.yaml && rm -f manifests/grafana/grafana-deployment.yaml
test -f manifests/grafana/grafana-dashboards.yaml && rm -f manifests/grafana/grafana-dashboards.yaml
hack/grafana-dashboards-configmap-generator/bin/grafana_dashboards_generate.sh -s 240000 -i assets/grafana/generated -o manifests/grafana/grafana-dashboard-definitions.yaml -g manifests/grafana/grafana-deployment.yaml -d manifests/grafana/grafana-dashboards.yaml

# Generate Grafana Credentials Secret
hack/scripts/generate-grafana-credentials-secret.sh admin admin > manifests/grafana/grafana-credentials.yaml

# Generate Secret for Alertmanager config
hack/scripts/generate-alertmanager-config-secret.sh > manifests/alertmanager/alertmanager-config.yaml
hack/scripts/generate-rules-configmap.sh (deleted file)

@@ -1,18 +0,0 @@
#!/bin/bash

cat <<-EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-k8s-rules
  labels:
    role: alert-rules
    prometheus: k8s
data:
EOF

for f in assets/prometheus/rules/*.rules.y*ml
do
  echo "  $(basename "$f"): |+"
  cat $f | sed "s/^/    /g"
done
hack/scripts/wrap-dashboard.sh (deleted file)

@@ -1,51 +0,0 @@
#!/bin/bash -eu

# Intended usage:
# * Edit dashboard in Grafana (you need to log in first with the admin/admin
#   login/password).
# * Save dashboard in Grafana to check that the specification is correct.
#   Looks like this is the only way to check whether the dashboard
#   specification has errors.
# * Download dashboard specification as JSON file in Grafana:
#   Share -> Export -> Save to file.
# * Drop dashboard specification in assets folder:
#   mv Nodes-1488465802729.json assets/grafana/node-dashboard.json
# * Regenerate Grafana configmap:
#   ./hack/scripts/generate-manifests.sh
# * Apply new configmap:
#   kubectl -n monitoring apply -f manifests/grafana/grafana-cm.yaml

if [ "$#" -ne 2 ]; then
  echo "Usage: $0 path-to-dashboard.json grafana-prometheus-datasource-name"
  exit 1
fi

dashboardjson=$1
datasource_name=$2
inputname="DS_PROMETHEUS"

if [ "$datasource_name" = "prometheus-etcd" ]; then
  inputname="DS_PROMETHEUS-ETCD"
fi

cat <<EOF
{
  "dashboard":
EOF

cat $dashboardjson

cat <<EOF
,
  "inputs": [
    {
      "name": "$inputname",
      "pluginId": "prometheus",
      "type": "datasource",
      "value": "$datasource_name"
    }
  ],
  "overwrite": true
}
EOF
jsonnet/alertmanager/alertmanager-main-secret.libsonnet (new file)

@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local secret = k.core.v1.secret;

{
  new(namespace, plainConfig)::
    secret.new("alertmanager-main", {"alertmanager.yaml": std.base64(plainConfig)}) +
    secret.mixin.metadata.withNamespace(namespace)
}
jsonnet/alertmanager/alertmanager-main-service-account.libsonnet (new file)

@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local serviceAccount = k.core.v1.serviceAccount;

{
  new(namespace)::
    serviceAccount.new("alertmanager-main") +
    serviceAccount.mixin.metadata.withNamespace(namespace)
}
jsonnet/alertmanager/alertmanager-main-service.libsonnet (new file)

@@ -0,0 +1,12 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;

local alertmanagerPort = servicePort.newNamed("web", 9093, "web");

{
  new(namespace)::
    service.new("alertmanager-main", {app: "alertmanager", alertmanager: "main"}, alertmanagerPort) +
    service.mixin.metadata.withNamespace(namespace) +
    service.mixin.metadata.withLabels({alertmanager: "main"})
}
jsonnet/alertmanager/alertmanager-main.libsonnet (new file)

@@ -0,0 +1,19 @@
{
  new(namespace)::
    {
      apiVersion: "monitoring.coreos.com/v1",
      kind: "Alertmanager",
      metadata: {
        name: "main",
        namespace: namespace,
        labels: {
          alertmanager: "main",
        },
      },
      spec: {
        replicas: 3,
        version: "v0.14.0",
        serviceAccountName: "alertmanager-main",
      },
    }
}
jsonnet/alertmanager/alertmanager.libsonnet (new file)

@@ -0,0 +1,6 @@
{
  config:: import "alertmanager-main-secret.libsonnet",
  serviceAccount:: import "alertmanager-main-service-account.libsonnet",
  service:: import "alertmanager-main-service.libsonnet",
  alertmanager:: import "alertmanager-main.libsonnet",
}
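Note that all fields in this bundle object are hidden (declared with `::`), so the bundle itself renders to nothing; consumers reference the components explicitly, e.g. `alertmanager.service.new("monitoring")`, which is exactly how `kube-prometheus.libsonnet` below assembles the full stack.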
jsonnet/kube-prometheus.libsonnet (new file)

@@ -0,0 +1,85 @@
local k = import "ksonnet.beta.3/k.libsonnet";

local alertmanager = import "alertmanager/alertmanager.libsonnet";
local ksm = import "kube-state-metrics/kube-state-metrics.libsonnet";
local nodeExporter = import "node-exporter/node-exporter.libsonnet";
local po = import "prometheus-operator/prometheus-operator.libsonnet";
local prometheus = import "prometheus/prometheus.libsonnet";
local grafana = import "grafana/grafana.libsonnet";

local alertmanagerConfig = importstr "../assets/alertmanager/alertmanager.yaml";

local ruleFiles = {
  "alertmanager.rules.yaml": importstr "../assets/prometheus/rules/alertmanager.rules.yaml",
  "etcd3.rules.yaml": importstr "../assets/prometheus/rules/etcd3.rules.yaml",
  "general.rules.yaml": importstr "../assets/prometheus/rules/general.rules.yaml",
  "kube-controller-manager.rules.yaml": importstr "../assets/prometheus/rules/kube-controller-manager.rules.yaml",
  "kube-scheduler.rules.yaml": importstr "../assets/prometheus/rules/kube-scheduler.rules.yaml",
  "kube-state-metrics.rules.yaml": importstr "../assets/prometheus/rules/kube-state-metrics.rules.yaml",
  "kubelet.rules.yaml": importstr "../assets/prometheus/rules/kubelet.rules.yaml",
  "kubernetes.rules.yaml": importstr "../assets/prometheus/rules/kubernetes.rules.yaml",
  "node.rules.yaml": importstr "../assets/prometheus/rules/node.rules.yaml",
  "prometheus.rules.yaml": importstr "../assets/prometheus/rules/prometheus.rules.yaml",
};

{
  new(namespace)::
    {
      "grafana/grafana-dashboard-definitions.yaml": grafana.dashboardDefinitions.new(namespace),
      "grafana/grafana-dashboard-sources.yaml": grafana.dashboardSources.new(namespace),
      "grafana/grafana-datasources.yaml": grafana.dashboardDatasources.new(namespace),
      "grafana/grafana-deployment.yaml": grafana.deployment.new(namespace),
      "grafana/grafana-service-account.yaml": grafana.serviceAccount.new(namespace),
      "grafana/grafana-service.yaml": grafana.service.new(namespace),

      "alertmanager-main/alertmanager-main-secret.yaml": alertmanager.config.new(namespace, alertmanagerConfig),
      "alertmanager-main/alertmanager-main-service-account.yaml": alertmanager.serviceAccount.new(namespace),
      "alertmanager-main/alertmanager-main-service.yaml": alertmanager.service.new(namespace),
      "alertmanager-main/alertmanager-main.yaml": alertmanager.alertmanager.new(namespace),

      "kube-state-metrics/kube-state-metrics-cluster-role-binding.yaml": ksm.clusterRoleBinding.new(namespace),
      "kube-state-metrics/kube-state-metrics-cluster-role.yaml": ksm.clusterRole.new(),
      "kube-state-metrics/kube-state-metrics-deployment.yaml": ksm.deployment.new(namespace),
      "kube-state-metrics/kube-state-metrics-role-binding.yaml": ksm.roleBinding.new(namespace),
      "kube-state-metrics/kube-state-metrics-role.yaml": ksm.role.new(namespace),
      "kube-state-metrics/kube-state-metrics-service-account.yaml": ksm.serviceAccount.new(namespace),
      "kube-state-metrics/kube-state-metrics-service.yaml": ksm.service.new(namespace),

      "node-exporter/node-exporter-cluster-role-binding.yaml": nodeExporter.clusterRoleBinding.new(namespace),
      "node-exporter/node-exporter-cluster-role.yaml": nodeExporter.clusterRole.new(),
      "node-exporter/node-exporter-daemonset.yaml": nodeExporter.daemonset.new(namespace),
      "node-exporter/node-exporter-service-account.yaml": nodeExporter.serviceAccount.new(namespace),
      "node-exporter/node-exporter-service.yaml": nodeExporter.service.new(namespace),

      "prometheus-operator/prometheus-operator-cluster-role-binding.yaml": po.clusterRoleBinding.new(namespace),
      "prometheus-operator/prometheus-operator-cluster-role.yaml": po.clusterRole.new(),
      "prometheus-operator/prometheus-operator-deployment.yaml": po.deployment.new(namespace),
      "prometheus-operator/prometheus-operator-service.yaml": po.service.new(namespace),
      "prometheus-operator/prometheus-operator-service-account.yaml": po.serviceAccount.new(namespace),

      "prometheus-k8s/prometheus-k8s-cluster-role-binding.yaml": prometheus.clusterRoleBinding.new(namespace),
      "prometheus-k8s/prometheus-k8s-cluster-role.yaml": prometheus.clusterRole.new(),
      "prometheus-k8s/prometheus-k8s-service-account.yaml": prometheus.serviceAccount.new(namespace),
      "prometheus-k8s/prometheus-k8s-service.yaml": prometheus.service.new(namespace),
      "prometheus-k8s/prometheus-k8s.yaml": prometheus.prometheus.new(namespace),
      "prometheus-k8s/prometheus-k8s-rules.yaml": prometheus.rules.new(namespace, ruleFiles),
      "prometheus-k8s/prometheus-k8s-role-binding-config.yaml": prometheus.roleBindingConfig.new(namespace),
      "prometheus-k8s/prometheus-k8s-role-binding-namespace.yaml": prometheus.roleBindingNamespace.new(namespace),
      "prometheus-k8s/prometheus-k8s-role-binding-kube-system.yaml": prometheus.roleBindingKubeSystem.new(namespace),
      "prometheus-k8s/prometheus-k8s-role-binding-default.yaml": prometheus.roleBindingDefault.new(namespace),
      "prometheus-k8s/prometheus-k8s-role-config.yaml": prometheus.roleConfig.new(namespace),
      "prometheus-k8s/prometheus-k8s-role-namespace.yaml": prometheus.roleNamespace.new(namespace),
      "prometheus-k8s/prometheus-k8s-role-kube-system.yaml": prometheus.roleKubeSystem.new(),
      "prometheus-k8s/prometheus-k8s-role-default.yaml": prometheus.roleDefault.new(),
      "prometheus-k8s/prometheus-k8s-service-monitor-alertmanager.yaml": prometheus.serviceMonitorAlertmanager.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-apiserver.yaml": prometheus.serviceMonitorApiserver.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-coredns.yaml": prometheus.serviceMonitorCoreDNS.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-kube-controller-manager.yaml": prometheus.serviceMonitorControllerManager.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-kube-scheduler.yaml": prometheus.serviceMonitorScheduler.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-kube-state-metrics.yaml": prometheus.serviceMonitorKubeStateMetrics.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-kubelet.yaml": prometheus.serviceMonitorKubelet.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-node-exporter.yaml": prometheus.serviceMonitorNodeExporter.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-prometheus-operator.yaml": prometheus.serviceMonitorPrometheusOperator.new(namespace),
      "prometheus-k8s/prometheus-k8s-service-monitor-prometheus.yaml": prometheus.serviceMonitorPrometheus.new(namespace),
    }
}
jsonnet/kube-state-metrics/kube-state-metrics-cluster-role-binding.libsonnet (new file)

@@ -0,0 +1,12 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRoleBinding = k.rbac.v1.clusterRoleBinding;

{
  new(namespace)::
    clusterRoleBinding.new() +
    clusterRoleBinding.mixin.metadata.withName("kube-state-metrics") +
    clusterRoleBinding.mixin.roleRef.withApiGroup("rbac.authorization.k8s.io") +
    clusterRoleBinding.mixin.roleRef.withName("kube-state-metrics") +
    clusterRoleBinding.mixin.roleRef.mixinInstance({kind: "ClusterRole"}) +
    clusterRoleBinding.withSubjects([{kind: "ServiceAccount", name: "kube-state-metrics", namespace: namespace}])
}
jsonnet/kube-state-metrics/kube-state-metrics-cluster-role.libsonnet (new file)

@@ -0,0 +1,75 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRole = k.rbac.v1.clusterRole;
local policyRule = clusterRole.rulesType;

local coreRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "configmaps",
    "secrets",
    "nodes",
    "pods",
    "services",
    "resourcequotas",
    "replicationcontrollers",
    "limitranges",
    "persistentvolumeclaims",
    "persistentvolumes",
    "namespaces",
    "endpoints",
  ]) +
  policyRule.withVerbs(["list", "watch"]);

local extensionsRule = policyRule.new() +
  policyRule.withApiGroups(["extensions"]) +
  policyRule.withResources([
    "daemonsets",
    "deployments",
    "replicasets",
  ]) +
  policyRule.withVerbs(["list", "watch"]);

local appsRule = policyRule.new() +
  policyRule.withApiGroups(["apps"]) +
  policyRule.withResources([
    "statefulsets",
  ]) +
  policyRule.withVerbs(["list", "watch"]);

local batchRule = policyRule.new() +
  policyRule.withApiGroups(["batch"]) +
  policyRule.withResources([
    "cronjobs",
    "jobs",
  ]) +
  policyRule.withVerbs(["list", "watch"]);

local autoscalingRule = policyRule.new() +
  policyRule.withApiGroups(["autoscaling"]) +
  policyRule.withResources([
    "horizontalpodautoscalers",
  ]) +
  policyRule.withVerbs(["list", "watch"]);

local authenticationRole = policyRule.new() +
  policyRule.withApiGroups(["authentication.k8s.io"]) +
  policyRule.withResources([
    "tokenreviews",
  ]) +
  policyRule.withVerbs(["create"]);

local authorizationRole = policyRule.new() +
  policyRule.withApiGroups(["authorization.k8s.io"]) +
  policyRule.withResources([
    "subjectaccessreviews",
  ]) +
  policyRule.withVerbs(["create"]);

local rules = [coreRule, extensionsRule, appsRule, batchRule, autoscalingRule, authenticationRole, authorizationRole];

{
  new()::
    clusterRole.new() +
    clusterRole.mixin.metadata.withName("kube-state-metrics") +
    clusterRole.withRules(rules)
}
jsonnet/kube-state-metrics/kube-state-metrics-deployment.libsonnet (new file)

@@ -0,0 +1,86 @@
local k = import "ksonnet.beta.3/k.libsonnet";

local deployment = k.apps.v1beta2.deployment;
local container = k.apps.v1beta2.deployment.mixin.spec.template.spec.containersType;
local volume = k.apps.v1beta2.deployment.mixin.spec.template.spec.volumesType;
local containerPort = container.portsType;
local containerVolumeMount = container.volumeMountsType;
local podSelector = deployment.mixin.spec.template.spec.selectorType;

local kubeStateMetricsVersion = "v1.3.0";
local kubeRbacProxyVersion = "v0.3.0";
local addonResizerVersion = "1.0";
local podLabels = {"app": "kube-state-metrics"};

local proxyClusterMetrics =
  container.new("kube-rbac-proxy-main", "quay.io/coreos/kube-rbac-proxy:" + kubeRbacProxyVersion) +
  container.withArgs([
    "--secure-listen-address=:8443",
    "--upstream=http://127.0.0.1:8081/",
  ]) +
  container.withPorts(containerPort.newNamed("https-main", 8443)) +
  container.mixin.resources.withRequests({cpu: "10m", memory: "20Mi"}) +
  container.mixin.resources.withLimits({cpu: "20m", memory: "40Mi"});

local proxySelfMetrics =
  container.new("kube-rbac-proxy-self", "quay.io/coreos/kube-rbac-proxy:" + kubeRbacProxyVersion) +
  container.withArgs([
    "--secure-listen-address=:9443",
    "--upstream=http://127.0.0.1:8082/",
  ]) +
  container.withPorts(containerPort.newNamed("https-self", 9443)) +
  container.mixin.resources.withRequests({cpu: "10m", memory: "20Mi"}) +
  container.mixin.resources.withLimits({cpu: "20m", memory: "40Mi"});

local kubeStateMetrics =
  container.new("kube-state-metrics", "quay.io/coreos/kube-state-metrics:" + kubeStateMetricsVersion) +
  container.withArgs([
    "--host=127.0.0.1",
    "--port=8081",
    "--telemetry-host=127.0.0.1",
    "--telemetry-port=8082",
  ]) +
  container.mixin.resources.withRequests({cpu: "102m", memory: "180Mi"}) +
  container.mixin.resources.withLimits({cpu: "102m", memory: "180Mi"});

local addonResizer =
  container.new("addon-resizer", "quay.io/coreos/addon-resizer:" + addonResizerVersion) +
  container.withCommand([
    "/pod_nanny",
    "--container=kube-state-metrics",
    "--cpu=100m",
    "--extra-cpu=2m",
    "--memory=150Mi",
    "--extra-memory=30Mi",
    "--threshold=5",
    "--deployment=kube-state-metrics",
  ]) +
  container.withEnv([
    {
      name: "MY_POD_NAME",
      valueFrom: {
        fieldRef: {apiVersion: "v1", fieldPath: "metadata.name"},
      },
    }, {
      name: "MY_POD_NAMESPACE",
      valueFrom: {
        fieldRef: {apiVersion: "v1", fieldPath: "metadata.namespace"},
      },
    },
  ]) +
  container.mixin.resources.withRequests({cpu: "10m", memory: "30Mi"}) +
  container.mixin.resources.withLimits({cpu: "10m", memory: "30Mi"});

local c = [proxyClusterMetrics, proxySelfMetrics, kubeStateMetrics, addonResizer];

{
  new(namespace)::
    deployment.new("kube-state-metrics", 1, c, podLabels) +
    deployment.mixin.metadata.withNamespace(namespace) +
    deployment.mixin.metadata.withLabels(podLabels) +
    deployment.mixin.spec.selector.withMatchLabels(podLabels) +
    deployment.mixin.spec.template.spec.securityContext.withRunAsNonRoot(true) +
    deployment.mixin.spec.template.spec.securityContext.withRunAsUser(65534) +
    deployment.mixin.spec.template.spec.withServiceAccountName("kube-state-metrics")
}
jsonnet/kube-state-metrics/kube-state-metrics-role-binding.libsonnet (new file)

@@ -0,0 +1,13 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local roleBinding = k.rbac.v1.roleBinding;

{
  new(namespace)::
    roleBinding.new() +
    roleBinding.mixin.metadata.withName("kube-state-metrics") +
    roleBinding.mixin.metadata.withNamespace(namespace) +
    roleBinding.mixin.roleRef.withApiGroup("rbac.authorization.k8s.io") +
    roleBinding.mixin.roleRef.withName("kube-state-metrics-addon-resizer") +
    roleBinding.mixin.roleRef.mixinInstance({kind: "Role"}) +
    roleBinding.withSubjects([{kind: "ServiceAccount", name: "kube-state-metrics"}])
}
jsonnet/kube-state-metrics/kube-state-metrics-role.libsonnet (new file)

@@ -0,0 +1,28 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local role = k.rbac.v1.role;
local policyRule = role.rulesType;

local coreRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "pods",
  ]) +
  policyRule.withVerbs(["get"]);

local extensionsRule = policyRule.new() +
  policyRule.withApiGroups(["extensions"]) +
  policyRule.withResources([
    "deployments",
  ]) +
  policyRule.withVerbs(["get", "update"]) +
  policyRule.withResourceNames(["kube-state-metrics"]);

local rules = [coreRule, extensionsRule];

{
  new(namespace)::
    role.new() +
    role.mixin.metadata.withName("kube-state-metrics") +
    role.mixin.metadata.withNamespace(namespace) +
    role.withRules(rules)
}
jsonnet/kube-state-metrics/kube-state-metrics-service-account.libsonnet (new file)

@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local serviceAccount = k.core.v1.serviceAccount;

{
  new(namespace)::
    serviceAccount.new("kube-state-metrics") +
    serviceAccount.mixin.metadata.withNamespace(namespace)
}
jsonnet/kube-state-metrics/kube-state-metrics-service.libsonnet (new file)

@@ -0,0 +1,15 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;

local ksmDeployment = import "kube-state-metrics-deployment.libsonnet";

local ksmServicePortMain = servicePort.newNamed("https-main", 8443, "https-main");
local ksmServicePortSelf = servicePort.newNamed("https-self", 9443, "https-self");

{
  new(namespace)::
    service.new("kube-state-metrics", ksmDeployment.new(namespace).spec.selector.matchLabels, [ksmServicePortMain, ksmServicePortSelf]) +
    service.mixin.metadata.withNamespace(namespace) +
    service.mixin.metadata.withLabels({"k8s-app": "kube-state-metrics"})
}
jsonnet/kube-state-metrics/kube-state-metrics.libsonnet (new file)

@@ -0,0 +1,9 @@
{
  clusterRoleBinding:: import "kube-state-metrics-cluster-role-binding.libsonnet",
  clusterRole:: import "kube-state-metrics-cluster-role.libsonnet",
  deployment:: import "kube-state-metrics-deployment.libsonnet",
  roleBinding:: import "kube-state-metrics-role-binding.libsonnet",
  role:: import "kube-state-metrics-role.libsonnet",
  serviceAccount:: import "kube-state-metrics-service-account.libsonnet",
  service:: import "kube-state-metrics-service.libsonnet",
}
jsonnet/node-exporter/node-exporter-cluster-role-binding.libsonnet (new file)

@@ -0,0 +1,12 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRoleBinding = k.rbac.v1.clusterRoleBinding;

{
  new(namespace)::
    clusterRoleBinding.new() +
    clusterRoleBinding.mixin.metadata.withName("node-exporter") +
    clusterRoleBinding.mixin.roleRef.withApiGroup("rbac.authorization.k8s.io") +
    clusterRoleBinding.mixin.roleRef.withName("node-exporter") +
    clusterRoleBinding.mixin.roleRef.mixinInstance({kind: "ClusterRole"}) +
    clusterRoleBinding.withSubjects([{kind: "ServiceAccount", name: "node-exporter", namespace: namespace}])
}
jsonnet/node-exporter/node-exporter-cluster-role.libsonnet (new file)

@@ -0,0 +1,26 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRole = k.rbac.v1.clusterRole;
local policyRule = clusterRole.rulesType;

local authenticationRole = policyRule.new() +
  policyRule.withApiGroups(["authentication.k8s.io"]) +
  policyRule.withResources([
    "tokenreviews",
  ]) +
  policyRule.withVerbs(["create"]);

local authorizationRole = policyRule.new() +
  policyRule.withApiGroups(["authorization.k8s.io"]) +
  policyRule.withResources([
    "subjectaccessreviews",
  ]) +
  policyRule.withVerbs(["create"]);

local rules = [authenticationRole, authorizationRole];

{
  new()::
    clusterRole.new() +
    clusterRole.mixin.metadata.withName("node-exporter") +
    clusterRole.withRules(rules)
}
jsonnet/node-exporter/node-exporter-daemonset.libsonnet (new file)

@@ -0,0 +1,58 @@
local k = import "ksonnet.beta.3/k.libsonnet";

local daemonset = k.apps.v1beta2.daemonSet;
local container = daemonset.mixin.spec.template.spec.containersType;
local volume = daemonset.mixin.spec.template.spec.volumesType;
local containerPort = container.portsType;
local containerVolumeMount = container.volumeMountsType;
local podSelector = daemonset.mixin.spec.template.spec.selectorType;

local nodeExporterVersion = "v0.15.2";
local kubeRbacProxyVersion = "v0.3.0";
local podLabels = {"app": "node-exporter"};

local procVolumeName = "proc";
local procVolume = volume.fromHostPath(procVolumeName, "/proc");
local procVolumeMount = containerVolumeMount.new(procVolumeName, "/host/proc");

local sysVolumeName = "sys";
local sysVolume = volume.fromHostPath(sysVolumeName, "/sys");
local sysVolumeMount = containerVolumeMount.new(sysVolumeName, "/host/sys");

local nodeExporter =
  container.new("node-exporter", "quay.io/prometheus/node-exporter:" + nodeExporterVersion) +
  container.withArgs([
    "--web.listen-address=127.0.0.1:9101",
    "--path.procfs=/host/proc",
    "--path.sysfs=/host/sys",
  ]) +
  container.withVolumeMounts([procVolumeMount, sysVolumeMount]) +
  container.mixin.resources.withRequests({cpu: "102m", memory: "180Mi"}) +
  container.mixin.resources.withLimits({cpu: "102m", memory: "180Mi"});

local proxy =
  container.new("kube-rbac-proxy", "quay.io/coreos/kube-rbac-proxy:" + kubeRbacProxyVersion) +
  container.withArgs([
    "--secure-listen-address=:9100",
    "--upstream=http://127.0.0.1:9101/",
  ]) +
  container.withPorts(containerPort.newNamed("https", 9100)) +
  container.mixin.resources.withRequests({cpu: "10m", memory: "20Mi"}) +
  container.mixin.resources.withLimits({cpu: "20m", memory: "40Mi"});

local c = [nodeExporter, proxy];

{
  new(namespace)::
    daemonset.new() +
    daemonset.mixin.metadata.withName("node-exporter") +
    daemonset.mixin.metadata.withNamespace(namespace) +
    daemonset.mixin.metadata.withLabels(podLabels) +
    daemonset.mixin.spec.selector.withMatchLabels(podLabels) +
    daemonset.mixin.spec.template.metadata.withLabels(podLabels) +
    daemonset.mixin.spec.template.spec.withContainers(c) +
    daemonset.mixin.spec.template.spec.withVolumes([procVolume, sysVolume]) +
    daemonset.mixin.spec.template.spec.securityContext.withRunAsNonRoot(true) +
    daemonset.mixin.spec.template.spec.securityContext.withRunAsUser(65534) +
    daemonset.mixin.spec.template.spec.withServiceAccountName("node-exporter")
}
jsonnet/node-exporter/node-exporter-service-account.libsonnet (new file)

@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local serviceAccount = k.core.v1.serviceAccount;

{
  new(namespace)::
    serviceAccount.new("node-exporter") +
    serviceAccount.mixin.metadata.withNamespace(namespace)
}
jsonnet/node-exporter/node-exporter-service.libsonnet (new file)

@@ -0,0 +1,14 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;

local nodeExporterDaemonset = import "node-exporter-daemonset.libsonnet";

local nodeExporterPort = servicePort.newNamed("https", 9100, "https");

{
  new(namespace)::
    service.new("node-exporter", nodeExporterDaemonset.new(namespace).spec.selector.matchLabels, nodeExporterPort) +
    service.mixin.metadata.withNamespace(namespace) +
    service.mixin.metadata.withLabels({"k8s-app": "node-exporter"})
}
jsonnet/node-exporter/node-exporter.libsonnet (new file)

@@ -0,0 +1,7 @@
{
  clusterRoleBinding:: import "node-exporter-cluster-role-binding.libsonnet",
  clusterRole:: import "node-exporter-cluster-role.libsonnet",
  daemonset:: import "node-exporter-daemonset.libsonnet",
  serviceAccount:: import "node-exporter-service-account.libsonnet",
  service:: import "node-exporter-service.libsonnet",
}
jsonnet/prometheus-operator/prometheus-operator-cluster-role-binding.libsonnet (new file)

@@ -0,0 +1,12 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRoleBinding = k.rbac.v1.clusterRoleBinding;

{
  new(namespace)::
    clusterRoleBinding.new() +
    clusterRoleBinding.mixin.metadata.withName("prometheus-operator") +
    clusterRoleBinding.mixin.roleRef.withApiGroup("rbac.authorization.k8s.io") +
    clusterRoleBinding.mixin.roleRef.withName("prometheus-operator") +
    clusterRoleBinding.mixin.roleRef.mixinInstance({kind: "ClusterRole"}) +
    clusterRoleBinding.withSubjects([{kind: "ServiceAccount", name: "prometheus-operator", namespace: namespace}])
}
jsonnet/prometheus-operator/prometheus-operator-cluster-role.libsonnet (new file)

@@ -0,0 +1,80 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRole = k.rbac.v1.clusterRole;
local policyRule = clusterRole.rulesType;

local extensionsRule = policyRule.new() +
  policyRule.withApiGroups(["extensions"]) +
  policyRule.withResources([
    "thirdpartyresources",
  ]) +
  policyRule.withVerbs(["*"]);

local apiExtensionsRule = policyRule.new() +
  policyRule.withApiGroups(["apiextensions.k8s.io"]) +
  policyRule.withResources([
    "customresourcedefinitions",
  ]) +
  policyRule.withVerbs(["*"]);

local monitoringRule = policyRule.new() +
  policyRule.withApiGroups(["monitoring.coreos.com"]) +
  policyRule.withResources([
    "alertmanagers",
    "prometheuses",
    "prometheuses/finalizers",
    "alertmanagers/finalizers",
    "servicemonitors",
  ]) +
  policyRule.withVerbs(["*"]);

local appsRule = policyRule.new() +
  policyRule.withApiGroups(["apps"]) +
  policyRule.withResources([
    "statefulsets",
  ]) +
  policyRule.withVerbs(["*"]);

local coreRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "configmaps",
    "secrets",
  ]) +
  policyRule.withVerbs(["*"]);

local podRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "pods",
  ]) +
  policyRule.withVerbs(["list", "delete"]);

local routingRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "services",
  ]) +
  policyRule.withVerbs(["get", "create", "update"]);

local nodeRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "nodes",
  ]) +
  policyRule.withVerbs(["list", "watch"]);

local namespaceRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "namespaces",
  ]) +
  policyRule.withVerbs(["list"]);

local rules = [extensionsRule, apiExtensionsRule, monitoringRule, appsRule, coreRule, podRule, routingRule, nodeRule, namespaceRule];

{
  new()::
    clusterRole.new() +
    clusterRole.mixin.metadata.withName("prometheus-operator") +
    clusterRole.withRules(rules)
}
jsonnet/prometheus-operator/prometheus-operator-deployment.libsonnet (new file)

@@ -0,0 +1,30 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local rawVersion = importstr "../../../../VERSION";

local removeLineBreaks = function(str) std.join("", std.filter(function(c) c != "\n", std.stringChars(str)));
local version = "v0.18.1"; // removeLineBreaks(rawVersion);

local deployment = k.apps.v1beta2.deployment;
local container = k.apps.v1beta2.deployment.mixin.spec.template.spec.containersType;
local containerPort = container.portsType;

local targetPort = 8080;
local podLabels = {"k8s-app": "prometheus-operator"};

local operatorContainer =
  container.new("prometheus-operator", "quay.io/coreos/prometheus-operator:" + version) +
  container.withPorts(containerPort.newNamed("http", targetPort)) +
  container.withArgs(["--kubelet-service=kube-system/kubelet", "--config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1"]) +
  container.mixin.resources.withRequests({cpu: "100m", memory: "50Mi"}) +
  container.mixin.resources.withLimits({cpu: "200m", memory: "100Mi"});

{
  new(namespace)::
    deployment.new("prometheus-operator", 1, operatorContainer, podLabels) +
    deployment.mixin.metadata.withNamespace(namespace) +
    deployment.mixin.metadata.withLabels(podLabels) +
    deployment.mixin.spec.selector.withMatchLabels(podLabels) +
    deployment.mixin.spec.template.spec.securityContext.withRunAsNonRoot(true) +
    deployment.mixin.spec.template.spec.securityContext.withRunAsUser(65534) +
    deployment.mixin.spec.template.spec.withServiceAccountName("prometheus-operator")
}
jsonnet/prometheus-operator/prometheus-operator-service-account.libsonnet (new file)

@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local serviceAccount = k.core.v1.serviceAccount;

{
  new(namespace)::
    serviceAccount.new("prometheus-operator") +
    serviceAccount.mixin.metadata.withNamespace(namespace)
}
jsonnet/prometheus-operator/prometheus-operator-service.libsonnet (new file)

@@ -0,0 +1,14 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local service = k.core.v1.service;
local servicePort = k.core.v1.service.mixin.spec.portsType;

local poDeployment = import "prometheus-operator-deployment.libsonnet";

local poServicePort = servicePort.newNamed("http", 8080, "http");

{
  new(namespace)::
    service.new("prometheus-operator", poDeployment.new(namespace).spec.selector.matchLabels, [poServicePort]) +
    service.mixin.metadata.withNamespace(namespace)
}
@@ -0,0 +1,7 @@
{
  clusterRoleBinding:: import "prometheus-operator-cluster-role-binding.libsonnet",
  clusterRole:: import "prometheus-operator-cluster-role.libsonnet",
  deployment:: import "prometheus-operator-deployment.libsonnet",
  serviceAccount:: import "prometheus-operator-service-account.libsonnet",
  service:: import "prometheus-operator-service.libsonnet",
}
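This mapping file is the intended entry point: it bundles the per-resource libraries into one importable object. An end-to-end consumer sketch follows; note that the `clusterRoleBinding.new(namespace)` signature is assumed, since that file is not shown in this part of the diff.

```jsonnet
local po = import "prometheus-operator.libsonnet";

local namespace = "monitoring";

// One evaluation yields every manifest needed to run the operator; a build
// script can then split the list into per-resource YAML or JSON files.
[
  po.serviceAccount.new(namespace),
  po.clusterRole.new(),
  po.clusterRoleBinding.new(namespace),  // assumed signature
  po.deployment.new(namespace),
  po.service.new(namespace),
]
```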
@@ -0,0 +1,12 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRoleBinding = k.rbac.v1.clusterRoleBinding;

{
  new(namespace)::
    clusterRoleBinding.new() +
    clusterRoleBinding.mixin.metadata.withName("prometheus-k8s") +
    clusterRoleBinding.mixin.roleRef.withApiGroup("rbac.authorization.k8s.io") +
    clusterRoleBinding.mixin.roleRef.withName("prometheus-k8s") +
    clusterRoleBinding.mixin.roleRef.mixinInstance({kind: "ClusterRole"}) +
    clusterRoleBinding.withSubjects([{kind: "ServiceAccount", name: "prometheus-k8s", namespace: namespace}])
}
jsonnet/prometheus/prometheus-k8s-cluster-role.libsonnet (normal file, 21 lines)
@@ -0,0 +1,21 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRole = k.rbac.v1.clusterRole;
local policyRule = clusterRole.rulesType;

// Required to scrape kubelet metrics through the API server proxy.
local nodeMetricsRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources(["nodes/metrics"]) +
  policyRule.withVerbs(["get"]);

// Required for non-resource endpoints such as the API server's own /metrics.
local metricsRule = policyRule.new() +
  policyRule.withNonResourceUrls(["/metrics"]) +
  policyRule.withVerbs(["get"]);

local rules = [nodeMetricsRule, metricsRule];

{
  new()::
    clusterRole.new() +
    clusterRole.mixin.metadata.withName("prometheus-k8s") +
    clusterRole.withRules(rules)
}
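Consumers that need Prometheus to watch additional resources can append rules by composition instead of editing this file. A hedged sketch, assuming ksonnet's generated `withRulesMixin` helper exists for this type (the extra ingress grant is illustrative, not part of this commit):

```jsonnet
local k = import "ksonnet.beta.3/k.libsonnet";
local clusterRole = k.rbac.v1.clusterRole;
local policyRule = clusterRole.rulesType;
local promClusterRole = import "prometheus-k8s-cluster-role.libsonnet";

// Illustrative extra grant, not part of this commit.
local ingressRule = policyRule.new() +
  policyRule.withApiGroups(["extensions"]) +
  policyRule.withResources(["ingresses"]) +
  policyRule.withVerbs(["list", "watch"]);

promClusterRole.new() +
clusterRole.withRulesMixin([ingressRule])
```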
@@ -0,0 +1,5 @@
local prometheusNamespaceRoleBinding = import "prometheus-namespace-role-binding.libsonnet";

{
  new(namespace):: prometheusNamespaceRoleBinding.new(namespace, namespace, "prometheus-k8s-config")
}
@@ -0,0 +1,5 @@
local prometheusNamespaceRoleBinding = import "prometheus-namespace-role-binding.libsonnet";

{
  new(namespace):: prometheusNamespaceRoleBinding.new(namespace, "default", "prometheus-k8s")
}
@@ -0,0 +1,5 @@
local prometheusNamespaceRoleBinding = import "prometheus-namespace-role-binding.libsonnet";

{
  new(namespace):: prometheusNamespaceRoleBinding.new(namespace, "kube-system", "prometheus-k8s")
}
@@ -0,0 +1,5 @@
local prometheusNamespaceRoleBinding = import "prometheus-namespace-role-binding.libsonnet";

{
  new(namespace):: prometheusNamespaceRoleBinding.new(namespace, namespace, "prometheus-k8s")
}
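The four wrappers above all delegate to `prometheus-namespace-role-binding.libsonnet`, which is not shown in this part of the diff. Judging from the call sites, the first argument is the namespace of the `prometheus-k8s` ServiceAccount, the second is the namespace the RoleBinding is created in, and the third is the Role name. A hypothetical reconstruction, mirroring the ClusterRoleBinding file earlier in this diff; the real file may differ:

```jsonnet
// Hypothetical sketch of prometheus-namespace-role-binding.libsonnet,
// inferred only from its call sites; the real implementation may differ.
local k = import "ksonnet.beta.3/k.libsonnet";
local roleBinding = k.rbac.v1.roleBinding;

{
  new(serviceAccountNamespace, namespace, roleName)::
    roleBinding.new() +
    roleBinding.mixin.metadata.withName(roleName) +
    roleBinding.mixin.metadata.withNamespace(namespace) +
    roleBinding.mixin.roleRef.withApiGroup("rbac.authorization.k8s.io") +
    roleBinding.mixin.roleRef.withName(roleName) +
    roleBinding.mixin.roleRef.mixinInstance({kind: "Role"}) +
    roleBinding.withSubjects([{kind: "ServiceAccount", name: "prometheus-k8s", namespace: serviceAccountNamespace}])
}
```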
jsonnet/prometheus/prometheus-k8s-role-config.libsonnet (normal file, 18 lines)
@@ -0,0 +1,18 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local role = k.rbac.v1.role;
local policyRule = role.rulesType;

local configmapRule = policyRule.new() +
  policyRule.withApiGroups([""]) +
  policyRule.withResources([
    "configmaps",
  ]) +
  policyRule.withVerbs(["get"]);

{
  new(namespace)::
    role.new() +
    role.mixin.metadata.withName("prometheus-k8s-config") +
    role.mixin.metadata.withNamespace(namespace) +
    role.withRules([configmapRule]),
}
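This Role grants only `get` on ConfigMaps, presumably so Prometheus can read its generated configuration. Rendering it is a one-liner, as in this hedged sketch:

```jsonnet
// Evaluates to a plain rbac/v1 Role object in the "monitoring" namespace.
(import "prometheus-k8s-role-config.libsonnet").new("monitoring")
```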
jsonnet/prometheus/prometheus-k8s-role-default.libsonnet (normal file, 5 lines)
@@ -0,0 +1,5 @@
local prometheusNamespaceRole = import "prometheus-namespace-role.libsonnet";

{
  new():: prometheusNamespaceRole.new("default")
}
@@ -0,0 +1,5 @@
local prometheusNamespaceRole = import "prometheus-namespace-role.libsonnet";

{
  new():: prometheusNamespaceRole.new("kube-system")
}
@@ -0,0 +1,5 @@
local prometheusNamespaceRole = import "prometheus-namespace-role.libsonnet";

{
  new(namespace):: prometheusNamespaceRole.new(namespace)
}
jsonnet/prometheus/prometheus-k8s-rules.libsonnet (normal file, 8 lines)
@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local configMap = k.core.v1.configMap;

{
  new(namespace, ruleFiles)::
    configMap.new("prometheus-k8s-rules", ruleFiles) +
    configMap.mixin.metadata.withNamespace(namespace)
}
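`ruleFiles` is expected to be an object mapping file names to file contents, which becomes the ConfigMap's `data`. A hedged usage sketch (the rule file path is illustrative):

```jsonnet
local rules = import "prometheus-k8s-rules.libsonnet";

// Each key becomes a rule file mounted into the Prometheus pods.
rules.new("monitoring", {
  // Illustrative path; any mapping of file name to content works.
  "example.rules.yaml": importstr "assets/prometheus/rules/example.rules.yaml",
})
```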
@@ -0,0 +1,8 @@
local k = import "ksonnet.beta.3/k.libsonnet";
local serviceAccount = k.core.v1.serviceAccount;

{
  new(namespace)::
    serviceAccount.new("prometheus-k8s") +
    serviceAccount.mixin.metadata.withNamespace(namespace)
}
@@ -0,0 +1,32 @@
{
  new(namespace)::
    {
      "apiVersion": "monitoring.coreos.com/v1",
      "kind": "ServiceMonitor",
      "metadata": {
        "name": "alertmanager",
        "namespace": namespace,
        "labels": {
          "k8s-app": "alertmanager"
        }
      },
      "spec": {
        "selector": {
          "matchLabels": {
            "alertmanager": "main"
          }
        },
        "namespaceSelector": {
          // Follow the deployment namespace instead of hardcoding "monitoring".
          "matchNames": [
            namespace
          ]
        },
        "endpoints": [
          {
            "port": "web",
            "interval": "30s"
          }
        ]
      }
    }
}
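ServiceMonitors are the Prometheus Operator's CRD for scrape discovery: `selector` picks Services by label, `namespaceSelector` restricts where to look, and each `endpoints` entry maps to a scrape job on a named service port. Teams can monitor their own workloads with the same shape; a sketch with illustrative values throughout:

```jsonnet
{
  // Illustrative ServiceMonitor for a hypothetical in-house application.
  "apiVersion": "monitoring.coreos.com/v1",
  "kind": "ServiceMonitor",
  "metadata": {
    "name": "example-app",
    "namespace": "monitoring",
    "labels": {"k8s-app": "example-app"}
  },
  "spec": {
    "selector": {"matchLabels": {"app": "example-app"}},
    "namespaceSelector": {"matchNames": ["default"]},
    "endpoints": [{"port": "web", "interval": "30s"}]
  }
}
```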
@@ -0,0 +1,40 @@
{
  new(namespace)::
    {
      "apiVersion": "monitoring.coreos.com/v1",
      "kind": "ServiceMonitor",
      "metadata": {
        "name": "kube-apiserver",
        "namespace": namespace,
        "labels": {
          "k8s-app": "apiserver"
        }
      },
      "spec": {
        "jobLabel": "component",
        "selector": {
          "matchLabels": {
            "component": "apiserver",
            "provider": "kubernetes"
          }
        },
        "namespaceSelector": {
          "matchNames": [
            "default"
          ]
        },
        "endpoints": [
          {
            "port": "https",
            "interval": "30s",
            "scheme": "https",
            "tlsConfig": {
              // In-cluster CA bundle and service-account token mounted into
              // every pod; "kubernetes" is the API server's in-cluster name.
              "caFile": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
              "serverName": "kubernetes"
            },
            "bearerTokenFile": "/var/run/secrets/kubernetes.io/serviceaccount/token"
          }
        ]
      }
    }
}
@@ -0,0 +1,35 @@
{
  new(namespace)::
    {
      "apiVersion": "monitoring.coreos.com/v1",
      "kind": "ServiceMonitor",
      "metadata": {
        "name": "coredns",
        "namespace": namespace,
        "labels": {
          "k8s-app": "coredns"
        }
      },
      "spec": {
        "jobLabel": "k8s-app",
        "selector": {
          "matchLabels": {
            "k8s-app": "coredns",
            "component": "metrics"
          }
        },
        "namespaceSelector": {
          "matchNames": [
            "kube-system"
          ]
        },
        "endpoints": [
          {
            "port": "http-metrics",
            "interval": "15s",
            "bearerTokenFile": "/var/run/secrets/kubernetes.io/serviceaccount/token"
          }
        ]
      }
    }
}
@@ -0,0 +1,33 @@
{
  new(namespace)::
    {
      "apiVersion": "monitoring.coreos.com/v1",
      "kind": "ServiceMonitor",
      "metadata": {
        "name": "kube-controller-manager",
        "namespace": namespace,
        "labels": {
          "k8s-app": "kube-controller-manager"
        }
      },
      "spec": {
        "jobLabel": "k8s-app",
        "endpoints": [
          {
            "port": "http-metrics",
            "interval": "30s"
          }
        ],
        "selector": {
          "matchLabels": {
            "k8s-app": "kube-controller-manager"
          }
        },
        "namespaceSelector": {
          "matchNames": [
            "kube-system"
          ]
        }
      }
    }
}
@@ -0,0 +1,33 @@
{
  new(namespace)::
    {
      "apiVersion": "monitoring.coreos.com/v1",
      "kind": "ServiceMonitor",
      "metadata": {
        "name": "kube-scheduler",
        "namespace": namespace,
        "labels": {
          "k8s-app": "kube-scheduler"
        }
      },
      "spec": {
        "jobLabel": "k8s-app",
        "endpoints": [
          {
            "port": "http-metrics",
            "interval": "30s"
          }
        ],
        "selector": {
          "matchLabels": {
            "k8s-app": "kube-scheduler"
          }
        },
        "namespaceSelector": {
          "matchNames": [
            "kube-system"
          ]
        }
      }
    }
}
Some files were not shown because too many files have changed in this diff.