Merge pull request #5 from brancz/prerequisites-docs

add explanation and guide on prerequisites
Fabian Reinartz (committed by GitHub)
2016-11-02 07:15:13 -07:00


@@ -6,18 +6,62 @@ monitoring setup working.
## Prerequisites

First, you need a running Kubernetes cluster. If you don't have one, follow the
instructions of [bootkube](https://github.com/kubernetes-incubator/bootkube) or
[minikube](https://github.com/kubernetes/minikube). Some sample contents of this
repository are adapted to work with a [multi-node setup](https://github.com/kubernetes-incubator/bootkube/tree/master/hack/multi-node)
using [bootkube](https://github.com/kubernetes-incubator/bootkube).

Prometheus discovers targets via Kubernetes endpoints objects, which are
automatically populated by Kubernetes services. Therefore Prometheus can
automatically find and pick up all services within a cluster. By default there
is a service for the Kubernetes API server. For other Kubernetes core
components to be monitored, headless services must be set up for them to be
discovered by Prometheus, as they may be deployed differently depending on the
cluster.
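
For illustration, a headless service is simply a service with `clusterIP: None`;
Prometheus then sees the individual endpoint addresses behind it rather than a
virtual service IP. A minimal sketch of what such a discovery service might look
like — the label selector is an assumption, and the actual manifests ship in
`manifests/k8s/`:

```yaml
# Sketch only: a headless discovery service. The selector labels are
# assumptions and must match however your scheduler pods are labeled.
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler-prometheus-discovery
  namespace: kube-system
spec:
  clusterIP: None           # headless: endpoint addresses are exposed directly
  selector:
    k8s-app: kube-scheduler # assumption: adapt to your pod labels
  ports:
  - name: http-metrics
    port: 10251             # default kube-scheduler metrics port
```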

For the `kube-scheduler` and `kube-controller-manager` there are headless
services prepared; simply add them to your running cluster:

```bash
kubectl -n kube-system create -f manifests/k8s/
```

> Hint: if you use this for a cluster not created with bootkube, make sure you
> populate an endpoints object with the addresses of your `kube-scheduler` and
> `kube-controller-manager`, or adapt the label selectors to match your setup.
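
To sketch what such a hand-populated endpoints object could look like — the IP
below is a placeholder, and note that manually maintained endpoints are only
stable for a service of the same name *without* a selector, otherwise the
endpoints controller overwrites them:

```yaml
# Sketch only: manually populated endpoints for a cluster where the
# scheduler does not run as a labeled pod. Replace the placeholder IP.
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-scheduler-prometheus-discovery  # must match the service name
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.2.30.4           # placeholder: your kube-scheduler's address
  ports:
  - name: http-metrics
    port: 10251
```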

Aside from Kubernetes-specific components, etcd is an important part of a
working cluster, but it is typically deployed outside of it. This monitoring
setup assumes that it is made visible from within the cluster through a
headless service as well.
An example for bootkube's multi-node vagrant setup is [here](/manifests/etcd/etcd-bootkube-vagrant-multi.yaml).

> Hint: this is merely an example for a local setup. The addresses will have to
> be adapted for any setup that is not a single-etcd cluster created with bootkube.
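
Exposing an out-of-cluster etcd typically pairs a selector-less headless
service with a manually maintained endpoints object. A sketch under that
assumption — the address matches the vagrant example output shown below and
must be replaced with those of your own etcd members:

```yaml
# Sketch: expose an out-of-cluster etcd to Prometheus via a selector-less
# headless service plus hand-maintained endpoints.
apiVersion: v1
kind: Service
metadata:
  name: etcd-k8s
  namespace: monitoring
spec:
  clusterIP: None   # headless; no selector, so endpoints are managed by hand
  ports:
  - name: api
    port: 2379      # etcd client port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd-k8s
  namespace: monitoring
subsets:
- addresses:
  - ip: 172.17.4.51 # etcd member in the bootkube vagrant multi-node setup
  ports:
  - name: api
    port: 2379
```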

Before you continue, you should have endpoints objects for:

* `apiserver` (called `kubernetes` here)
* `kube-controller-manager`
* `kube-scheduler`
* `etcd` (called `etcd-k8s` to make clear this is the etcd used by Kubernetes)

For example:

```bash
$ kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 172.17.4.101:443 2h
kube-system kube-controller-manager-prometheus-discovery 10.2.30.2:10252 1h
kube-system kube-scheduler-prometheus-discovery 10.2.30.4:10251 1h
monitoring etcd-k8s 172.17.4.51:2379 1h
```

## Monitoring Kubernetes

The manifests used here use the [Prometheus Operator](https://github.com/coreos/prometheus-operator),
which manages Prometheus servers and their configuration in your cluster. To install the
operator, the [node_exporter](https://github.com/prometheus/node_exporter),
[Grafana](https://grafana.org) including default dashboards, and the Prometheus server, run:
@@ -38,9 +82,9 @@ To tear it all down again, run:
```bash
hack/cluster-monitoring/teardown
```

> All services in the manifest still contain the `prometheus.io/scrape = true`
> annotation. It is not used by the Prometheus controller; it remains for
> pre-v1.3.0 Prometheus deployments as in [this example configuration](https://github.com/prometheus/prometheus/blob/6703404cb431f57ca4c5097bc2762438d3c1968e/documentation/examples/prometheus-kubernetes.yml).
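
For reference, that annotation-based convention marks a service roughly like
this — a sketch with a hypothetical service name; only configurations like the
linked example honor the annotation, the controller ignores it:

```yaml
# Sketch of the legacy annotation convention; `my-service` is hypothetical.
# The Prometheus controller ignores this annotation entirely.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    prometheus.io/scrape: "true"  # opt-in marker read by the example config
spec:
  selector:
    app: my-service
  ports:
  - name: web
    port: 8080
```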

## Monitoring custom services
@@ -82,8 +126,6 @@ Grafana data sources.
* Incorporate [Alertmanager controller](https://github.com/coreos/kube-alertmanager-controller)
* Grafana controller that dynamically discovers and deploys dashboards from ConfigMaps * Grafana controller that dynamically discovers and deploys dashboards from ConfigMaps
* Collection of base alerting for cluster monitoring
* KPM/Helm packages to easily provide production-ready cluster-monitoring setup (essentially contents of `hack/cluster-monitoring`)
* Add meta-monitoring to default cluster monitoring setup * Add meta-monitoring to default cluster monitoring setup