Better ephemeral workspace support with Codespaces

Author: Arthur Silva Sens
Date: 2021-09-30 20:53:15 +00:00
Committed by: GitHub
Signed-off-by: GitHub <noreply@github.com>
Parent: b65faa6a55
Commit: 6239bc017a
11 changed files with 64 additions and 5 deletions


@@ -0,0 +1,29 @@
# Ephemeral developer workspaces
Aiming to provide a better developer experience when contributing to kube-prometheus, whether by actively developing new features and bug fixes or by reviewing pull requests, we want to provide ephemeral developer workspaces with everything already configured (as far as tooling makes it possible).

These developer workspaces should provide a brand-new Kubernetes cluster where kube-prometheus can be easily deployed, so the contributor can easily see the impact of the changes a pull request proposes.

Today there are two providers on the market:
* [Github Codespaces](https://github.com/features/codespaces)
* [Gitpod](https://www.gitpod.io/)
## Codespaces
Unfortunately, Codespaces is not available to everyone. If you are fortunate enough to have access to it, you can open a new workspace from a specific branch, or even from a pull request.
![image](https://user-images.githubusercontent.com/24193764/135522435-44b177b4-00d4-4863-b45b-2db47c8c70d0.png)
![image](https://user-images.githubusercontent.com/24193764/135522560-c64968ab-3b4e-4639-893a-c4d0a14421aa.png)
After your workspace starts, you can deploy kube-prometheus inside a Kind cluster by running `make deploy`.

If you are reviewing a PR, you'll have a fully functional Kubernetes cluster generating real monitoring data, which can be used to verify that the proposed changes work as described.

If you are working on new features or bug fixes, you can regenerate kube-prometheus's YAML manifests with `make generate` and deploy them again with `make deploy`.
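The typical inner loop in the workspace then looks like the sketch below; `make generate` and `make deploy` come from the repository's Makefile, and the `kubectl` line is just a generic sanity check:

```bash
make generate                   # regenerate the YAML manifests from the jsonnet sources
make deploy                     # (re)apply the generated manifests to the Kind cluster
kubectl -n monitoring get pods  # sanity check: the monitoring stack should come up
```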
## Gitpod
Gitpod is already available to everyone for free. It can also run commands that we specify in the `.gitpod.yml` file located in the root directory of the git repository, so even cluster creation can be fully automated.

You can use the same workflow as mentioned in the [Codespaces](#codespaces) section; however, Gitpod doesn't natively support any Kubernetes distribution. The workaround is to create a full QEMU virtual machine and deploy [k3s](https://github.com/k3s-io/k3s) inside this VM. Don't worry, this whole process is already fully automated, but due to the workaround the workspace may be quite slow. A rough sketch of this kind of `.gitpod.yml` automation is shown below.
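For illustration, such automation could look roughly like this (the task names and script paths are hypothetical; the actual `.gitpod.yml` in this commit may differ):

```yaml
tasks:
  - name: vm   # hypothetical task: build the VM rootfs, then boot QEMU
    init: ./prepare-rootfs.sh
    command: ./run-qemu.sh
  - name: k3s  # hypothetical task: wait for the VM, then install k3s over ssh
    command: ./setup-k3s.sh
```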


@@ -0,0 +1,20 @@
#!/bin/bash
# Install the latest kind release if it is not already available.
if command -v kind > /dev/null; then
    kind_bin="$(command -v kind)"
else
    echo 'kind not available in $PATH, installing latest kind'
    curl -s https://api.github.com/repos/kubernetes-sigs/kind/releases/latest \
        | grep "browser_download_url.*kind-linux-amd64" \
        | cut -d : -f 2,3 \
        | tr -d \" \
        | wget -qi -
    mv kind-linux-amd64 kind && chmod +x kind
    kind_bin="$PWD/kind"
fi
# Create a cluster only if none exists yet.
cluster_created=$("${kind_bin}" get clusters 2>&1)
if [[ "$cluster_created" == "No kind clusters found." ]]; then
    "${kind_bin}" create cluster
else
    echo "Cluster '$cluster_created' already present"
fi


@@ -0,0 +1,20 @@
#!/bin/bash
# Create namespaces and CRDs first.
kubectl apply -f manifests/setup
# Safety wait for the CRDs to become usable
sleep 30
kubectl apply -f manifests/
# Safety wait for the resources to be created
sleep 30
kubectl rollout status -n monitoring daemonset node-exporter
kubectl rollout status -n monitoring statefulset alertmanager-main
kubectl rollout status -n monitoring statefulset prometheus-k8s
kubectl rollout status -n monitoring deployment grafana
kubectl rollout status -n monitoring deployment kube-state-metrics
# Expose the UIs on localhost in the background.
kubectl port-forward -n monitoring svc/grafana 3000 > /dev/null 2>&1 &
kubectl port-forward -n monitoring svc/alertmanager-main 9093 > /dev/null 2>&1 &
kubectl port-forward -n monitoring svc/prometheus-k8s 9090 > /dev/null 2>&1 &
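
With those port-forwards running, the UIs are reachable on localhost. A quick smoke test, assuming only the ports from the script above (`/-/ready` is the standard readiness endpoint of Prometheus and Alertmanager):

```bash
curl -s http://localhost:9090/-/ready  # Prometheus
curl -s http://localhost:9093/-/ready  # Alertmanager
curl -sI http://localhost:3000         # Grafana login page
```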


@@ -0,0 +1,49 @@
#!/bin/bash
script_dirname="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
rootfslock="${script_dirname}/_output/rootfs/rootfs-ready.lock"
k3sreadylock="${script_dirname}/_output/rootfs/k3s-ready.lock"
# Nothing to do if k3s was already installed in a previous run.
if test -f "${k3sreadylock}"; then
    exit 0
fi
cd "${script_dirname}"
# Retry until the VM's ssh server accepts connections and can run commands.
function waitssh() {
    while ! nc -z 127.0.0.1 2222; do
        sleep 0.1
    done
    if ! ./ssh.sh "whoami" &> /dev/null; then
        sleep 1
        waitssh
    fi
}
# Block until the rootfs build script signals completion via its lock file.
function waitrootfs() {
    while ! test -f "${rootfslock}"; do
        sleep 0.1
    done
}
echo "🔥 Installing everything, this will be done only one time per workspace."
echo "Waiting for the rootfs to become available, it can take a while, open terminal #2 for progress"
waitrootfs
echo "✅ rootfs available"
echo "Waiting for the ssh server to become available, it can take a while, after this k3s gets installed"
waitssh
echo "✅ ssh server available"
# Install k3s inside the VM and copy its kubeconfig to the host.
./ssh.sh "curl -sfL https://get.k3s.io | sh -"
mkdir -p ~/.kube
./scp.sh root@127.0.0.1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
echo "✅ k3s server is ready"
touch "${k3sreadylock}"
# Safety wait for cluster availability
sleep 30s
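
Once the lock file exists, the copied kubeconfig points at the k3s API server, which the QEMU script below forwards to localhost:6443, so a plain kubectl call confirms the cluster is up:

```bash
kubectl get nodes -o wide  # the single k3s node inside the VM should report Ready
```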


@@ -0,0 +1,48 @@
#!/bin/bash
set -euo pipefail
img_url="https://cloud-images.ubuntu.com/hirsute/current/hirsute-server-cloudimg-amd64.tar.gz"
script_dirname="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
outdir="${script_dirname}/_output/rootfs"
# Start from a clean output directory and fetch the Ubuntu cloud image.
rm -Rf "${outdir}"
mkdir -p "${outdir}"
curl -L -o "${outdir}/rootfs.tar.gz" "${img_url}"
cd "${outdir}"
tar -xvf rootfs.tar.gz
# Grow the image so there is room for k3s and its workloads.
qemu-img resize hirsute-server-cloudimg-amd64.img +20G
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --run-command 'resize2fs /dev/sda'
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --root-password password:root
netconf="
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: yes
"
# networking setup
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --run-command "echo '${netconf}' > /etc/netplan/01-net.yaml"
# copy the host's kernel modules into the guest (the VM boots the host kernel)
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --copy-in /lib/modules/$(uname -r):/lib/modules
# reinstall openssh-server and allow root login with password auth
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --run-command 'apt remove openssh-server -y && apt install openssh-server -y'
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --run-command "sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config"
sudo virt-customize -a hirsute-server-cloudimg-amd64.img --run-command "sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config"
# mark as ready for the k3s setup script waiting on this lock file
touch rootfs-ready.lock
echo "k3s development environment is ready"


@@ -0,0 +1,14 @@
#!/bin/bash
set -xeuo pipefail
script_dirname="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
outdir="${script_dirname}/_output"
# Boot the prepared rootfs with the host kernel; forward ssh (host port 2222)
# and the k3s API server (host port 6443) to the guest.
sudo qemu-system-x86_64 -kernel "/boot/vmlinuz" \
    -boot c -m 3073M -hda "${outdir}/rootfs/hirsute-server-cloudimg-amd64.img" \
    -net user \
    -smp 8 \
    -append "root=/dev/sda rw console=ttyS0,115200 acpi=off nokaslr" \
    -nic user,hostfwd=tcp::2222-:22,hostfwd=tcp::6443-:6443 \
    -serial mon:stdio -display none


@@ -0,0 +1,3 @@
#!/bin/bash
# Copy files to/from the VM, e.g.: ./scp.sh root@127.0.0.1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sshpass -p 'root' scp -o StrictHostKeyChecking=no -P 2222 "$@"


@@ -0,0 +1,3 @@
#!/bin/bash
# Run a command inside the VM as root, e.g.: ./ssh.sh "whoami"
sshpass -p 'root' ssh -o StrictHostKeyChecking=no -p 2222 root@127.0.0.1 "$@"