Compare commits: `v0.10.0` ... `release-0.` (20 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 344ec3464f | |
| | 096039236a | |
| | 94e8af35bb | |
| | 6dfe0e2ed6 | |
| | 63a13ae16b | |
| | e7eff18e7e | |
| | ad6e0c2770 | |
| | b76224662e | |
| | 508722d5db | |
| | a38f7012a9 | |
| | 26c8329481 | |
| | 3fa00f11f3 | |
| | b38868e361 | |
| | c4e43dc412 | |
| | b54ad2ea71 | |
| | 125fb56d74 | |
| | 35a61d9a0e | |
| | 5b9aa36169 | |
| | 142434ca2b | |
| | 701e3c91eb | |
`.github/workflows/ci.yaml` (vendored, 2 changes)
```diff
@@ -3,7 +3,7 @@ on:
   - push
   - pull_request
 env:
-  golang-version: '1.15'
+  golang-version: '1.18'
   kind-version: 'v0.11.1'
 jobs:
   generate:
```
```diff
@@ -7,3 +7,7 @@ validators:
   # Ignore release links.
   - regex: 'https:\/\/github\.com\/prometheus-operator\/kube-prometheus\/releases'
     type: "ignore"
+  # the www.weave.works domain returns 404 for many pages.
+  # Ignoring for now but we need remove the related content if it persists.
+  - regex: 'https:\/\/www.weave.works.*'
+    type: "ignore"
```
````diff
@@ -14,11 +14,13 @@ date: "2021-03-08T23:04:32+01:00"
 
 `kube-prometheus` ships with a set of default [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [Grafana](http://grafana.com/) dashboards. At some point one might like to extend them, the purpose of this document is to explain how to do this.
 
-All manifests of kube-prometheus are generated using [jsonnet](https://jsonnet.org/) and Prometheus rules and Grafana dashboards in specific follow the [Prometheus Monitoring Mixins proposal](https://docs.google.com/document/d/1A9xvzwqnFVSOZ5fD3blKODXfsat5fg6ZhnKu9LK3lB4/).
+All manifests of kube-prometheus are generated using [jsonnet](https://jsonnet.org/).
+Prometheus rules and Grafana dashboards in specific follow the
+[Prometheus Monitoring Mixins proposal](https://github.com/monitoring-mixins/docs/blob/master/design.pdf).
 
 For both the Prometheus rules and the Grafana dashboards Kubernetes `ConfigMap`s are generated within kube-prometheus. In order to add additional rules and dashboards simply merge them onto the existing json objects. This document illustrates examples for rules as well as dashboards.
 
-As a basis, all examples in this guide are based on the base example of the kube-prometheus [readme](../../README.md):
+As a basis, all examples in this guide are based on the base example of the kube-prometheus [readme](https://github.com/prometheus-operator/kube-prometheus/blob/main/README.md):
 
 ```jsonnet mdox-exec="cat example.jsonnet"
 local kp =
````
````diff
@@ -61,11 +63,14 @@ local kp =
 
 ### Alerting rules
 
-According to the [Prometheus Monitoring Mixins proposal](https://docs.google.com/document/d/1A9xvzwqnFVSOZ5fD3blKODXfsat5fg6ZhnKu9LK3lB4/) Prometheus alerting rules are under the key `prometheusAlerts` in the top level object, so in order to add an additional alerting rule, we can simply merge an extra rule into the existing object.
+As per the [Prometheus Monitoring Mixins proposal](https://github.com/monitoring-mixins/docs/blob/master/design.pdf)
+Prometheus alerting rules are under the key `prometheusAlerts` in the top level object.
+Additional alerting rules can be added by merging into the existing object.
 
 The format is exactly the Prometheus format, so there should be no changes necessary should you have existing rules that you want to include.
 
-> Note that alerts can just as well be included into this file, using the jsonnet `import` function. In this example it is just inlined in order to demonstrate their use in a single file.
+> Note that alerts can also be included into this file, using the jsonnet `import` function.
+> In this example it is just inlined in order to demonstrate their use in a single file.
 
 ```jsonnet mdox-exec="cat examples/prometheus-additional-alert-rule-example.jsonnet"
 local kp = (import 'kube-prometheus/main.libsonnet') + {
````
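The merge pattern this documentation change describes can be sketched as follows. This is a minimal, hypothetical fragment, not part of the commit: the group name `example-group` and the always-firing `ExampleAlert` are illustrative only, and the `kube-prometheus/main.libsonnet` import path is taken from the example shown in the diff.

```jsonnet
// Hypothetical sketch: merging one extra alerting rule into the
// `prometheusAlerts` key, per the mixins convention described above.
local kp = (import 'kube-prometheus/main.libsonnet') + {
  prometheusAlerts+:: {
    groups+: [
      {
        name: 'example-group',  // illustrative name
        rules: [
          {
            alert: 'ExampleAlert',
            expr: 'vector(1)',  // always fires; placeholder expression
            'for': '5m',
            labels: { severity: 'warning' },
            annotations: { description: 'Always-firing example alert.' },
          },
        ],
      },
    ],
  },
};
```

Because `groups+:` appends rather than replaces, the default rule groups shipped by kube-prometheus are kept alongside the new one.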
````diff
@@ -336,9 +341,14 @@ Dashboards can either be added using jsonnet or simply a pre-rendered json dashb
 
 ### Jsonnet dashboard
 
-We recommend using the [grafonnet](https://github.com/grafana/grafonnet-lib/) library for jsonnet, which gives you a simple DSL to generate Grafana dashboards. Following the [Prometheus Monitoring Mixins proposal](https://docs.google.com/document/d/1A9xvzwqnFVSOZ5fD3blKODXfsat5fg6ZhnKu9LK3lB4/) additional dashboards are added to the `grafanaDashboards` key, located in the top level object. To add new jsonnet dashboards, simply add one.
+We recommend using the [grafonnet](https://github.com/grafana/grafonnet-lib/) library for jsonnet,
+which gives you a simple DSL to generate Grafana dashboards.
+Following the [Prometheus Monitoring Mixins proposal](https://github.com/monitoring-mixins/docs/blob/master/design.pdf)
+additional dashboards are added to the `grafanaDashboards` key, located in the top level object.
+To add new jsonnet dashboards, simply add one.
 
-> Note that dashboards can just as well be included into this file, using the jsonnet `import` function. In this example it is just inlined in order to demonstrate their use in a single file.
+> Note that dashboards can just as well be included into this file, using the jsonnet `import` function.
+> In this example it is just inlined in order to demonstrate their use in a single file.
 
 ```jsonnet mdox-exec="cat examples/grafana-additional-jsonnet-dashboard-example.jsonnet"
 local grafana = import 'grafonnet/grafana.libsonnet';
````
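A dashboard added under `grafanaDashboards` might look like the following hypothetical sketch. The dashboard name, panel title, and query are illustrative; the `dashboard`, `row`, `graphPanel`, and `prometheus` helpers are assumed from grafonnet-lib as referenced in the diff.

```jsonnet
// Hypothetical sketch: one grafonnet-built dashboard merged into the
// `grafanaDashboards` key of the top level object.
local grafana = import 'grafonnet/grafana.libsonnet';
local dashboard = grafana.dashboard;
local row = grafana.row;
local prometheus = grafana.prometheus;
local graphPanel = grafana.graphPanel;

local kp = (import 'kube-prometheus/main.libsonnet') + {
  grafanaDashboards+:: {
    // The key becomes the file name of the rendered dashboard.
    'example-dashboard.json':
      dashboard.new('Example Dashboard')  // illustrative title
      .addRow(
        row.new()
        .addPanel(
          graphPanel.new('Up targets', datasource='prometheus')
          .addTarget(prometheus.target('sum(up)'))  // placeholder query
        )
      ),
  },
};
```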
```diff
@@ -37,6 +37,14 @@ function(params) {
 
   mixin:: (import 'github.com/kubernetes-monitoring/kubernetes-mixin/mixin.libsonnet') {
     _config+:: k8s._config.mixin._config,
+  } + {
+    // Filter-out alerts related to kube-proxy when `kubeProxy: false`
+    [if !(defaults + params).kubeProxy then 'prometheusAlerts']+:: {
+      groups: std.filter(
+        function(g) !std.member(['kubernetes-system-kube-proxy'], g.name),
+        super.groups
+      ),
+    },
   },
 
   prometheusRule: {
```
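With this change the `kubeProxy` parameter gates not only the kube-proxy scrape configuration but also the related alert group. A hypothetical usage sketch, assuming the parameter is surfaced through the `kubernetesControlPlane` values block (the exact path is inferred from the `(defaults + params)` pattern in the hunk, not stated in the diff):

```jsonnet
// Hypothetical sketch: with kubeProxy left at its default (false), the
// 'kubernetes-system-kube-proxy' alert group is filtered out by the
// std.filter call shown above; setting it to true keeps the alerts.
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    kubernetesControlPlane+: {
      kubeProxy: true,  // assumed parameter path
    },
  },
};
```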
```diff
@@ -280,7 +288,6 @@ function(params) {
       },
       podMetricsEndpoints: [{
         honorLabels: true,
-        targetPort: 10249,
         relabelings: [
           {
             action: 'replace',
```
```diff
@@ -289,6 +296,13 @@ function(params) {
             sourceLabels: ['__meta_kubernetes_pod_node_name'],
             targetLabel: 'instance',
           },
+          {
+            action: 'replace',
+            regex: '(.*)',
+            replacement: '$1:10249',
+            targetLabel: '__address__',
+            sourceLabels: ['__meta_kubernetes_pod_ip'],
+          },
         ],
       }],
     },
```
```diff
@@ -35,9 +35,12 @@ local defaults = {
     // GC values,
     // imageGCLowThresholdPercent: 80
     // imageGCHighThresholdPercent: 85
+    // GC kicks in when imageGCHighThresholdPercent is hit and attempts to free upto imageGCLowThresholdPercent.
     // See https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/ for more details.
-    fsSpaceFillingUpWarningThreshold: 20,
-    fsSpaceFillingUpCriticalThreshold: 15,
+    // Warn only after imageGCHighThresholdPercent is hit, but filesystem is not freed up for a prolonged duration.
+    fsSpaceFillingUpWarningThreshold: 15,
+    // Send critical alert only after (imageGCHighThresholdPercent + 5) is hit, but filesystem is not freed up for a prolonged duration.
+    fsSpaceFillingUpCriticalThreshold: 10,
     diskDeviceSelector: 'device=~"mmcblk.p.+|nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+"',
     runbookURLPattern: 'https://runbooks.prometheus-operator.dev/runbooks/node/%s',
   },
```
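These defaults feed the node-exporter filesystem alerts (see the threshold changes in the rendered rules further down). Users who prefer the old, more sensitive thresholds could override them; a hypothetical sketch, assuming the defaults above are exposed through a `nodeExporter` values block (the path is an assumption, only the parameter names appear in the diff):

```jsonnet
// Hypothetical sketch: restoring the pre-change warning/critical
// thresholds (20% / 15% free space) for the filesystem alerts.
local kp = (import 'kube-prometheus/main.libsonnet') + {
  values+:: {
    nodeExporter+: {  // assumed values path
      fsSpaceFillingUpWarningThreshold: 20,
      fsSpaceFillingUpCriticalThreshold: 15,
    },
  },
};
```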
```diff
@@ -220,7 +220,27 @@ function(params) {
         '--tls-cipher-suites=' + std.join(',', pa._config.tlsCipherSuites),
       ],
       resources: pa._config.resources,
-      ports: [{ containerPort: 6443 }],
+      readinessProbe: {
+        httpGet: {
+          path: '/readyz',
+          port: 'https',
+          scheme: 'HTTPS',
+        },
+        initialDelaySeconds: 30,
+        periodSeconds: 5,
+        failureThreshold: 5,
+      },
+      livenessProbe: {
+        httpGet: {
+          path: '/livez',
+          port: 'https',
+          scheme: 'HTTPS',
+        },
+        initialDelaySeconds: 30,
+        periodSeconds: 5,
+        failureThreshold: 5,
+      },
+      ports: [{ containerPort: 6443, name: 'https' }],
       volumeMounts: [
         { name: 'tmpfs', mountPath: '/tmp', readOnly: false },
         { name: 'volume-serving-cert', mountPath: '/var/run/serving-cert', readOnly: false },
```
```diff
@@ -1,6 +1,15 @@
 {
   "version": 1,
   "dependencies": [
+    {
+      "source": {
+        "git": {
+          "remote": "https://github.com/grafana/jsonnet-libs.git",
+          "subdir": "mixin-utils"
+        }
+      },
+      "version": "master"
+    },
     {
       "source": {
         "local": {
```
```diff
@@ -18,7 +18,7 @@
           "subdir": "contrib/mixin"
         }
       },
-      "version": "73080a716634f45d50d0593e0454ed3206a52f5b",
+      "version": "ae3b43a924c688f06560ada76a047d14b3935829",
       "sum": "W/Azptf1PoqjyMwJON96UY69MFugDA4IAYiKURscryc="
     },
     {
```
```diff
@@ -28,8 +28,8 @@
           "subdir": "grafonnet"
         }
       },
-      "version": "3626fc4dc2326931c530861ac5bebe39444f6cbf",
-      "sum": "gF8foHByYcB25jcUOBqP6jxk0OPifQMjPvKY0HaCk6w="
+      "version": "a1d61cce1da59c71409b99b5c7568511fec661ea",
+      "sum": "342u++/7rViR/zj2jeJOjshzglkZ1SY+hFNuyCBFMdc="
     },
     {
       "source": {
```
```diff
@@ -38,8 +38,18 @@
           "subdir": "grafana-builder"
         }
       },
-      "version": "264a5c2078c5930af57fe2d107eff83ab63553af",
-      "sum": "0KkygBQd/AFzUvVzezE4qF/uDYgrwUXVpZfINBti0oc="
+      "version": "02db06f540086fa3f67d487bd01e1b314853fb8f",
+      "sum": "B49EzIY2WZsFxNMJcgRxE/gcZ9ltnS8pkOOV6Q5qioc="
+    },
+    {
+      "source": {
+        "git": {
+          "remote": "https://github.com/grafana/jsonnet-libs.git",
+          "subdir": "mixin-utils"
+        }
+      },
+      "version": "d9ba581fb27aa6689e911f288d4df06948eb8aad",
+      "sum": "LoYq5QxJmUXEtqkEG8CFUBLBhhzDDaNANHc7Gz36ZdM="
     },
     {
       "source": {
```
```diff
@@ -48,8 +58,8 @@
           "subdir": ""
         }
       },
-      "version": "b538a10c89508f8d12885680cca72a134d3127f5",
-      "sum": "GLt5T2k4RKg36Gfcaf9qlTfVumDitqotVD0ipz/bPJ4="
+      "version": "ab104c5c406b91078d676475c14ab18644f84f2d",
+      "sum": "tRpIInEClWUNe5IS6uIjucFN/KqDFgg19+yo78VrLfU="
     },
     {
       "source": {
```
```diff
@@ -58,7 +68,7 @@
           "subdir": "lib/promgrafonnet"
         }
       },
-      "version": "fd913499e956da06f520c3784c59573ee552b152",
+      "version": "c72ac0392998343d53bd73343467f8bf2aa4e333",
       "sum": "zv7hXGui6BfHzE9wPatHI/AGZa4A2WKo6pq7ZdqBsps="
     },
     {
```
```diff
@@ -141,7 +151,7 @@
           "subdir": "mixin"
         }
       },
-      "version": "632032712f12eea0015aaef24ee1e14f38ef3e55",
+      "version": "fb97c9a5ef51849ccb7960abbeb9581ad7f511b9",
       "sum": "X+060DnePPeN/87fgj0SrfxVitywTk8hZA9V4nHxl1g=",
       "name": "thanos-mixin"
     },
```
File diff suppressed because it is too large
```diff
@@ -150,7 +150,7 @@ spec:
             !=
             0
           ) or (
-            kube_daemonset_updated_number_scheduled{job="kube-state-metrics"}
+            kube_daemonset_status_updated_number_scheduled{job="kube-state-metrics"}
             !=
             kube_daemonset_status_desired_number_scheduled{job="kube-state-metrics"}
           ) or (
```
```diff
@@ -159,7 +159,7 @@ spec:
             kube_daemonset_status_desired_number_scheduled{job="kube-state-metrics"}
           )
         ) and (
-          changes(kube_daemonset_updated_number_scheduled{job="kube-state-metrics"}[5m])
+          changes(kube_daemonset_status_updated_number_scheduled{job="kube-state-metrics"}[5m])
           ==
           0
         )
```
```diff
@@ -752,18 +752,6 @@ spec:
       for: 15m
       labels:
         severity: critical
-  - name: kubernetes-system-kube-proxy
-    rules:
-    - alert: KubeProxyDown
-      annotations:
-        description: KubeProxy has disappeared from Prometheus target discovery.
-        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeproxydown
-        summary: Target disappeared from Prometheus target discovery.
-      expr: |
-        absent(up{job="kube-proxy"} == 1)
-      for: 15m
-      labels:
-        severity: critical
   - name: kube-apiserver-burnrate.rules
     rules:
     - expr: |
```
```diff
@@ -23,7 +23,7 @@ spec:
         summary: Filesystem is predicted to run out of space within the next 24 hours.
       expr: |
         (
-          node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 20
+          node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 15
         and
           predict_linear(node_filesystem_avail_bytes{job="node-exporter",fstype!=""}[6h], 24*60*60) < 0
         and
```
```diff
@@ -41,7 +41,7 @@ spec:
         summary: Filesystem is predicted to run out of space within the next 4 hours.
       expr: |
         (
-          node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 15
+          node_filesystem_avail_bytes{job="node-exporter",fstype!=""} / node_filesystem_size_bytes{job="node-exporter",fstype!=""} * 100 < 10
         and
           predict_linear(node_filesystem_avail_bytes{job="node-exporter",fstype!=""}[6h], 4*60*60) < 0
         and
```
```diff
@@ -37,9 +37,26 @@ spec:
         - --secure-port=6443
         - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
         image: k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
+        livenessProbe:
+          failureThreshold: 5
+          httpGet:
+            path: /livez
+            port: https
+            scheme: HTTPS
+          initialDelaySeconds: 30
+          periodSeconds: 5
         name: prometheus-adapter
         ports:
         - containerPort: 6443
+          name: https
+        readinessProbe:
+          failureThreshold: 5
+          httpGet:
+            path: /readyz
+            port: https
+            scheme: HTTPS
+          initialDelaySeconds: 30
+          periodSeconds: 5
         resources:
           limits:
             cpu: 250m
```