kube-prometheus: update etcd info

Resolves issue #1629 in this repository.
Joshua Olson
2018-08-01 09:17:58 -05:00
parent fe923a7239
commit bed6e4865a
5 changed files with 36 additions and 150 deletions

README.md

@@ -343,35 +343,9 @@ In the above example the configuration has been inlined, but can just as well be
```
### Static etcd configuration
In order to configure a static etcd cluster to scrape there is a simple mixin prepared, so only the IPs and certificate information need to be configured. Simply append the `kube-prometheus/kube-prometheus-static-etcd.libsonnet` mixin to the rest of the configuration, and configure the `ips` to be the IPs to scrape, and the `clientCA`, `clientKey` and `clientCert` to values that are valid to scrape etcd metrics with.
Most likely these certificates are generated somewhere in an infrastructure repository, so using the jsonnet `importstr` function can be useful here. All the sensitive information on the certificates will end up in a Kubernetes Secret.
In order to configure a static etcd cluster to scrape there is a simple [kube-prometheus-static-etcd.libsonnet](jsonnet/kube-prometheus/kube-prometheus-static-etcd.libsonnet) mixin prepared - see [etcd.jsonnet](examples/etcd.jsonnet) for an example of how to use that mixin, and [Monitoring external etcd](docs/monitoring-external-etcd.md) for more information.
> Note that monitoring etcd in minikube is currently not possible because of how etcd is set up. (minikube's etcd binds to 127.0.0.1:2379 only, and within the host networking namespace.)
[embedmd]:# (examples/etcd.jsonnet)
```jsonnet
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') + {
_config+:: {
namespace: 'monitoring',
etcd+:: {
ips: ['127.0.0.1'],
clientCA: importstr 'etcd-client-ca.crt',
clientKey: importstr 'etcd-client.key',
clientCert: importstr 'etcd-client.crt',
serverName: 'etcd.my-cluster.local',
},
},
};
{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```
### Customizing Prometheus alerting/recording rules and Grafana dashboards
@@ -385,8 +359,6 @@ See [exposing Prometheus/Alertmanager/Grafana](docs/exposing-prometheus-alertman
To use an easy to reproduce example, see [minikube.jsonnet](examples/minikube.jsonnet), which uses the minikube setup as demonstrated in [Prerequisites](#prerequisites). Because we would like easy access to our Prometheus, Alertmanager and Grafana UIs, `minikube.jsonnet` exposes the services as NodePort type services.
> Note that NodePort type services are likely not a good idea for your production use case; they are only used for demonstration purposes here.
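For example, once the manifests generated from `minikube.jsonnet` have been applied, minikube itself can print the NodePort URLs (a minimal sketch; the `prometheus-k8s`, `alertmanager-main` and `grafana` service names are assumed from the default kube-prometheus naming):

```
# Print the NodePort URLs of the demo UIs exposed by the node-ports mixin.
# Service names follow the default kube-prometheus conventions; adjust if yours differ.
minikube service prometheus-k8s -n monitoring --url
minikube service alertmanager-main -n monitoring --url
minikube service grafana -n monitoring --url
```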
## Troubleshooting
### Error retrieving kubelet metrics

docs/monitoring-external-etcd.md

@@ -2,119 +2,11 @@
This guide will help you monitor an external etcd cluster, i.e. when the etcd cluster is not hosted inside Kubernetes.
This is often the case with Kubernetes setups. This approach has been tested with kube-aws, but the same principles apply to other tools.
# Step 1 - Make the etcd certificates available to the Prometheus pod
Prometheus Operator (and Prometheus) allow us to specify a tlsConfig. This is required as most likely your etcd metrics endpoints are secured.
Note that [etcd.jsonnet](../examples/etcd.jsonnet) & [kube-prometheus-static-etcd.libsonnet](../jsonnet/kube-prometheus/kube-prometheus-static-etcd.libsonnet) (which are described by a section of the [Readme](../README.md#static-etcd-configuration)) do the following:
* Put the three etcd TLS client files (CA, cert & key) into a secret in the namespace, and have Prometheus Operator load the secret.
* Create the following (to expose etcd metrics - port 2379): a Service, Endpoints, & ServiceMonitor. (A quick way to check that these objects exist is sketched below.)
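As a quick sanity check after the generated manifests have been applied, those objects can be listed directly (a minimal sketch; the `kube-etcd-client-certs` secret name comes from kube-prometheus-static-etcd.libsonnet further down in this commit, while the exact Service/ServiceMonitor names may differ in your setup):

```
# Secret holding the three etcd TLS client files, mounted into the Prometheus pods by the operator.
kubectl -n monitoring get secret kube-etcd-client-certs
# Service, Endpoints and ServiceMonitor exposing etcd metrics on port 2379.
kubectl -n monitoring get services,endpoints,servicemonitors | grep etcd
```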
## a - Create the secrets in the namespace
# Step 1: Open the port
Prometheus Operator allows us to mount secrets in the pod. By loading the secrets as files, they can be made available inside the Prometheus pod.
`kubectl -n monitoring create secret generic etcd-certs --from-file=CREDENTIAL_PATH/etcd-client.pem --from-file=CREDENTIAL_PATH/etcd-client-key.pem --from-file=CREDENTIAL_PATH/ca.pem`
where CREDENTIAL_PATH is the path to your etcd client credentials on your work machine.
(Kube-aws stores them inside the credential folder).
## b - Get Prometheus Operator to load the secret
In the previous step we have named the secret 'etcd-certs'.
Edit prometheus-operator/contrib/kube-prometheus/manifests/prometheus/prometheus-k8s.yaml and add the secret under the spec of the Prometheus object manifest:
```
secrets:
- etcd-certs
```
The manifest will look like this:
```
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: k8s
labels:
prometheus: k8s
spec:
replicas: 2
secrets:
- etcd-certs
version: v1.7.1
```
If your Prometheus Operator is already in place, update it:
`kubectl -n monitoring replace -f contrib/kube-prometheus/manifests/prometheus/prometheus-k8s.yaml`
# Step 2 - Create the Service, endpoints and ServiceMonitor
The below manifest creates a Service to expose etcd metrics (port 2379)
* Replace `IP_OF_YOUR_ETCD_NODE_[0/1/2]` with the IP addresses of your etcd nodes. If you have more than one node, add them to the same list.
* Use `#insecureSkipVerify: true` or replace `ETCD_DNS_OR_ALTERNAME_NAME` with a valid name for the certificate.
In case you have generated the etcd certificates with kube-aws, you will need to use insecureSkipVerify as the valid certificate domain will be different for each etcd node (etcd0, etcd1, etcd2). If you only have one etcd node, you can use the value from `etcd.internalDomainName` specified in your kube-aws `cluster.yaml`.
In this example we use insecureSkipVerify: true as kube-aws default certificates are not valid against the IP; they were created for the DNS. Depending on your use case, you might want to remove this flag or set it to false. (true is required for kube-aws if using the default certificate generation method.)
```
apiVersion: v1
kind: Service
metadata:
name: etcd-k8s
labels:
k8s-app: etcd
spec:
type: ClusterIP
clusterIP: None
ports:
- name: api
port: 2379
protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
name: etcd-k8s
labels:
k8s-app: etcd
subsets:
- addresses:
- ip: IP_OF_YOUR_ETCD_NODE_0
nodeName: etcd0
- ip: IP_OF_YOUR_ETCD_NODE_1
nodeName: etcd1
- ip: IP_OF_YOUR_ETCD_NODE_2
nodeName: etcd2
ports:
- name: api
port: 2379
protocol: TCP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: etcd-k8s
labels:
k8s-app: etcd-k8s
spec:
jobLabel: k8s-app
endpoints:
- port: api
interval: 30s
scheme: https
tlsConfig:
caFile: /etc/prometheus/secrets/etcd-certs/ca.pem
certFile: /etc/prometheus/secrets/etcd-certs/etcd-client.pem
keyFile: /etc/prometheus/secrets/etcd-certs/etcd-client-key.pem
#use insecureSkipVerify only if you cannot use a Subject Alternative Name
#insecureSkipVerify: true
serverName: ETCD_DNS_OR_ALTERNAME_NAME
selector:
matchLabels:
k8s-app: etcd
namespaceSelector:
matchNames:
- monitoring
```
# Step 3: Open the port
You now need to allow the nodes Prometheus is running on to talk to etcd on port 2379 (if 2379 is the port used by etcd to expose its metrics).
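How the port is opened depends on your environment; on AWS (e.g. with kube-aws) it typically means adding a security-group rule. A minimal sketch, where the two security-group IDs are placeholders for your etcd-node and worker-node groups:

```
# Allow the Kubernetes worker nodes (where Prometheus runs) to reach etcd metrics on port 2379.
# sg-etcd-nodes and sg-worker-nodes are placeholders for your actual security group IDs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-etcd-nodes \
  --protocol tcp \
  --port 2379 \
  --source-group sg-worker-nodes
```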
@@ -128,11 +20,11 @@ With kube-aws, each etcd node has two IP addresses:
For some reason, some etcd nodes answer to :2379/metrics on the instance IP (eth0), others on the EIP|ENI address (eth1). See issue https://github.com/kubernetes-incubator/kube-aws/issues/923
It would of course be much better if we could hit the EIP/ENI all the time, as they don't change even if the underlying EC2 instance goes down.
If the instance IP (eth0) is specified in the Prometheus Operator ServiceMonitor and the EC2 instance goes down, one would have to update the ServiceMonitor.
Another idea would be to use the DNS entries of etcd, but those are not currently supported for Endpoints objects in Kubernetes.
# Step 4: verify
# Step 2: verify
Go to the Prometheus UI on :9090/config and check that you have an etcd job entry:
```
@@ -142,9 +34,11 @@ Go to the Prometheus UI on :9090/config and check that you have an etcd job entr
...
```
On the :9090/targets page, you should see "etcd" with the UP state. If not, check the Error column for more information.
On the :9090/targets page:
* You should see "etcd" with the UP state. If not, check the Error column for more information.
* If no "etcd" targets are even shown on this page, prometheus isn't attempting to scrape it.
# Step 5: Grafana dashboard
# Step 3: Grafana dashboard
## Find a dashboard you like

examples/etcd.jsonnet

@@ -3,12 +3,29 @@ local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
_config+:: {
namespace: 'monitoring',
// Reference info: https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/README.md#static-etcd-configuration
etcd+:: {
// Configure this to be the IP(s) to scrape - i.e. your etcd node(s) (use commas to separate multiple values).
ips: ['127.0.0.1'],
clientCA: importstr 'etcd-client-ca.crt',
clientKey: importstr 'etcd-client.key',
clientCert: importstr 'etcd-client.crt',
// Set these three variables to values that are valid to scrape etcd metrics with (check the apiserver container).
// Most likely these certificates are generated somewhere in an infrastructure repository, so using the jsonnet `importstr` function can
// be useful here. (Kube-aws stores these three files inside the credential folder.)
// All the sensitive information on the certificates will end up in a Kubernetes Secret.
clientCA: importstr '/path-on-your-work-machine/etcd-client-ca.crt',
clientKey: importstr '/path-on-your-work-machine/etcd-client.key',
clientCert: importstr '/path-on-your-work-machine/etcd-client.crt',
// A valid name for the certificate
serverName: 'etcd.my-cluster.local',
// TODO: enhance kube-prometheus-static-etcd.libsonnet to allow 'insecureSkipVerify: true' to be specified here (as an alternative to specifying a value for 'serverName').
// Note that insecureSkipVerify is only to be used if you cannot use a Subject Alternative Name.
// In case you have generated the etcd certificate with kube-aws:
// * If you only have one etcd node, you can use the value from 'etcd.internalDomainName' (specified in your kube-aws cluster.yaml) as the value for 'serverName'.
// * But if you have multiple etcd nodes, you will need to use 'insecureSkipVerify: true' (if using default certificate generators method), as the valid certificate domain
// will be different for each etcd node. (kube-aws default certificates are not valid against the IP - they were created for the DNS.)
},
},
};
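To compile this example into manifests, the usual kube-prometheus workflow applies (a minimal sketch, assuming `jsonnet`, `gojsontoyaml` and the vendored dependencies are in place; the repository's build script wraps essentially the same steps):

```
# Render the example into one JSON manifest per top-level key, then convert to YAML and apply.
# (Run against a fresh manifests/ directory.)
mkdir -p manifests
jsonnet -J vendor -m manifests examples/etcd.jsonnet
for f in manifests/*; do gojsontoyaml < "$f" > "$f.yaml" && rm "$f"; done
kubectl apply -f manifests/
```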

examples/minikube.jsonnet

@@ -1,6 +1,7 @@
local kp =
(import 'kube-prometheus/kube-prometheus.libsonnet') +
(import 'kube-prometheus/kube-prometheus-kubeadm.libsonnet') +
// Note that NodePort type services are likely not a good idea for your production use case; they are only used for demonstration purposes here.
(import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
{
_config+:: {

jsonnet/kube-prometheus/kube-prometheus-static-etcd.libsonnet

@@ -61,6 +61,7 @@ local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
port: 'metrics',
interval: '30s',
scheme: 'https',
// Prometheus Operator (and Prometheus) allow us to specify a tlsConfig. This is required as most likely your etcd metrics endpoints are secured.
tlsConfig: {
caFile: '/etc/prometheus/secrets/kube-etcd-client-certs/etcd-client-ca.crt',
keyFile: '/etc/prometheus/secrets/kube-etcd-client-certs/etcd-client.key',
@@ -77,8 +78,8 @@ local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
},
},
secretEtcdCerts:
// Prometheus Operator allows us to mount secrets in the pod. By loading the secrets as files, they can be made available inside the Prometheus pod.
local secret = k.core.v1.secret;
secret.new('kube-etcd-client-certs', {
'etcd-client-ca.crt': std.base64($._config.etcd.clientCA),
'etcd-client.key': std.base64($._config.etcd.clientKey),
@@ -87,6 +88,7 @@ local k = import 'ksonnet/ksonnet.beta.3/k.libsonnet';
secret.mixin.metadata.withNamespace($._config.namespace),
prometheus+:
{
// Reference info: https://coreos.com/operators/prometheus/docs/latest/api.html#prometheusspec
spec+: {
secrets+: [$.prometheus.secretEtcdCerts.metadata.name],
},
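For reference, Prometheus Operator mounts every secret listed under `spec.secrets` into the Prometheus pods at `/etc/prometheus/secrets/<secret-name>/`, which is why the tlsConfig paths above point there. One way to double-check after deployment (a minimal sketch; the `prometheus-k8s-0` pod and `prometheus` container names assume the default kube-prometheus setup):

```
# The three files from the kube-etcd-client-certs secret should be visible inside the Prometheus container.
kubectl -n monitoring exec prometheus-k8s-0 -c prometheus -- \
  ls /etc/prometheus/secrets/kube-etcd-client-certs/
```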