A complete Kubernetes cluster on Hetzner Cloud.
Using Ubuntu, containerd, kubeadm, a floating IP and a load balancer. 3 nodes. Monthly cost: around 50 EUR.
The goal: adapt a setup script and get a functioning cluster with a single call.
Let's look at all the necessary steps before introducing the script.
See https://community.hetzner.com/tutorials/install-kubernetes-cluster
First, install hcloud on the mac with brew – it is the CLI used to create the cloud resources.
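On macOS this is a one-liner:
brew install hcloud
hcloud version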
chrisp:~ sandorm$ hcloud
A command-line interface for Hetzner Cloud
Usage:
hcloud [command]
Available Commands:
all Commands that apply to all resources
certificate Manage certificates
completion Output shell completion code for the specified shell
context Manage contexts
datacenter Manage datacenters
firewall Manage Firewalls
floating-ip Manage Floating IPs
help Help about any command
image Manage images
iso Manage ISOs
load-balancer Manage Load Balancers
load-balancer-type Manage Load Balancer types
location Manage locations
network Manage networks
placement-group Manage Placement Groups
primary-ip Manage Primary IPs
server Manage servers
server-type Manage server types
ssh-key Manage SSH keys
version Print version information
volume Manage Volumes
hcloud needs an active API token. Create one in the Hetzner Cloud UI.

chrisp:Jobs sandorm$ hcloud context create testk8s # input token
Context testk8s created and activated
# make sure token is read/write (a UI option)
chrisp:Jobs sandorm$ hcloud network create --name kubernetes --ip-range 10.98.0.0/16
hcloud: not allowed because token is readonly (token_readonly)
chrisp:Jobs sandorm$ hcloud network create --name kubernetes --ip-range 10.98.0.0/16
Network 3835091 created
# create server - oops, need subnet first
hcloud server create --type cx11 --name master-1 --image ubuntu-22.04 --ssh-key sandorm@chrisp --network kubernetes
hcloud: network 3835091 has no free IP available or is in a different network zone (invalid_input)
chrisp:Jobs sandorm$ hcloud network add-subnet kubernetes --network-zone eu-central --type server --ip-range 10.98.0.0/16
600ms [==================================] 100.00%
Subnet added to network 3835091
chrisp:Jobs sandorm$ hcloud server create --type cx11 --name master-1 --image ubuntu-22.04 --ssh-key sandorm@chrisp --network kubernetes
5.7s [===================================] 100.00%
Waiting for server 42775911 to have started
... done
Waiting for action attach_to_network to have finished
... done
Server 42775911 created
IPv4: 37.27.3.135
IPv6: 2a01:4f9:c012:c094::1
IPv6 Network: 2a01:4f9:c012:c094::/64
Private Networks:
- 10.98.0.2 (kubernetes)
For easy access, on my mac I add 37.27.3.135 master-1 to /etc/hosts and make an entry in ~/.ssh/config to automatically log in as root. I also added an alias hm (hetzner master) to my shell aliases.
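For reference, the entries look roughly like this (the key path is an example):
# /etc/hosts
37.27.3.135 master-1
# ~/.ssh/config
Host master-1
  User root
  IdentityFile ~/.ssh/id_ed25519
# shell alias
alias hm='ssh master-1'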
Create a floating IP, location Nürnberg
hcloud floating-ip create --type ipv4 --home-location nbg1
Floating IP 50572292 created
IPv4: 116.203.13.35
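Hetzner routes floating IP traffic only to the server it is currently assigned to, so assign it via the CLI as well (to the master for now; ID from the output above):
hcloud floating-ip assign 50572292 master-1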

Log in to the server, then apt update; apt-get dist-upgrade; reboot => VERSION="22.04.3 LTS (Jammy Jellyfish)"
For now, we have a private (10.98.0.2) and a public IP (37.27.3.135)
root@master-1:~# ifconfig
ens10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.98.0.2 netmask 255.255.255.255 broadcast 10.98.0.2
inet6 fe80::8400:ff:fe75:ac5e prefixlen 64 scopeid 0x20<link>
ether 86:00:00:75:ac:5e txqueuelen 1000 (Ethernet)
RX packets 1 bytes 350 (350.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12 bytes 1208 (1.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 37.27.3.135 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 2a01:4f9:c012:c094::1 prefixlen 64 scopeid 0x0<global>
inet6 fe80::9400:2ff:fef8:9fc8 prefixlen 64 scopeid 0x20<link>
ether 96:00:02:f8:9f:c8 txqueuelen 1000 (Ethernet)
RX packets 808 bytes 78464 (78.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 640 bytes 106981 (106.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 84 bytes 6594 (6.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 84 bytes 6594 (6.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Next, configure the floating IP (116.203.13.35) to come up on the master. (Edit: per the tutorial, the floating IP is only needed on each worker node.)
# not working!
mkdir -p /etc/network/interfaces.d
cat > /etc/network/interfaces.d/60-floating-ip.cfg << EOF
auto eth0:1
iface eth0:1 inet static
    address 116.203.13.35
    netmask 32
EOF
Older documentation suggested this, but see https://docs.hetzner.com/de/cloud/floating-ips/persistent-configuration/ – this works:
cat > /etc/netplan/60-floating-ip.yaml <<EOF
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 116.203.13.35/32
EOF
netplan apply
It works – reported at login:
System information as of Wed Jan 31 02:10:06 PM UTC 2024
System load: 0.1513671875
Usage of /: 14.5% of 18.45GB
Memory usage: 8%
Swap usage: 0%
Processes: 98
Users logged in: 0
IPv4 address for ens10: 10.98.0.2
IPv4 address for eth0: 116.203.13.35
IPv4 address for eth0: 37.27.3.135
IPv6 address for eth0: 2a01:4f9:c012:c094::1
Note: the original documentation says to configure the floating IP on all WORKER nodes (only).
Time to create worker-1
chrisp:Jobs sandorm$ hcloud server create --type cx21 --name worker-1 --image ubuntu-22.04 --ssh-key sandorm@chrisp --network kubernetes
6.0s [===================================] 100.00%
Waiting for server 42779172 to have started
... done
Waiting for action attach_to_network to have finished
... done
Server 42779172 created
IPv4: 135.181.42.52
IPv6: 2a01:4f9:c012:a955::1
IPv6 Network: 2a01:4f9:c012:a955::/64
Private Networks:
- 10.98.0.3 (kubernetes)
Do the same for worker-2 (apt upgrade, configure the floating IP). Yes, the SAME floating IP on both workers!? For the setup to actually work, the IP address needs to be configured on all worker nodes:
https://community.hetzner.com/tutorials/install-kubernetes-cluster
Make sure kubelet knows it is running with an external cloud provider. On all 3 nodes (master and workers) do this:
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/20-hetzner-cloud.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
EOF
The kubelet service does not exist yet, hence the mkdir -p.
Download and install containerd:
cd ; wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
mv containerd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl status containerd
# for amd64 ! current is 1.7.12 (2.0.0 is beta)
wget https://github.com/containerd/containerd/releases/download/v1.7.12/containerd-1.7.12-linux-amd64.tar.gz
tar Czxvf /usr/local containerd-1.7.12-linux-amd64.tar.gz
systemctl enable --now containerd
systemctl status containerd
Next, we need runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
From https://github.com/opencontainers/runc: runc is a CLI tool for spawning and running containers on Linux according to the OCI specification.
And the CNI plugins:
wget https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
mkdir -p /opt/cni/bin
tar Czxvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.0.tgz
Containerd setup: use the systemd cgroup driver with runc:
mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd; systemctl status containerd
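A quick sanity check that the cgroup driver change took effect:
containerd config dump | grep SystemdCgroup
# expect: SystemdCgroup = true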
Prepare the Kubernetes package download – note it is NOT the jammy but the xenial repo we need:
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
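The warning is harmless here, but for reference, a keyring-based variant would look like this (a sketch – the keyring path is just the common convention; the sources list then needs a matching signed-by option):
mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /etc/apt/keyrings/kubernetes.gpg
# in kubernetes.list: deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] http://packages.cloud.google.com/apt/ kubernetes-xenial main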
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://packages.cloud.google.com/apt/ kubernetes-xenial main
EOF
cat /etc/apt/sources.list.d/kubernetes.list
# check
root@worker-2:~# apt info kubeadm kubectl kubelet
Package: kubeadm
Version: 1.28.2-00
Priority: optional
Section: misc
Maintainer: Kubernetes Authors <kubernetes-dev+release@googlegroups.com>
Installed-Size: 50.8 MB
Depends: kubelet (>= 1.19.0), kubectl (>= 1.19.0), kubernetes-cni (>= 1.1.1), cri-tools (>= 1.25.0)
Homepage: https://kubernetes.io
Download-Size: 10.3 MB
APT-Sources: http://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages
Description: Kubernetes Cluster Bootstrapping Tool
The Kubernetes command line tool for bootstrapping a Kubernetes cluster.
Package: kubectl
Version: 1.28.2-00
Priority: optional
Section: misc
Maintainer: Kubernetes Authors <kubernetes-dev+release@googlegroups.com>
Installed-Size: 49.9 MB
Homepage: https://kubernetes.io
Download-Size: 10.3 MB
APT-Sources: http://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages
Description: Kubernetes Command Line Tool
The Kubernetes command line tool for interacting with the Kubernetes API.
Package: kubelet
Version: 1.28.2-00
Priority: optional
Section: misc
Maintainer: Kubernetes Authors <kubernetes-dev+release@googlegroups.com>
Installed-Size: 111 MB
Depends: iptables (>= 1.4.21), kubernetes-cni (>= 1.1.1), iproute2, socat, util-linux, mount, ebtables, ethtool, conntrack
Homepage: https://kubernetes.io
Download-Size: 19.5 MB
APT-Sources: http://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages
Description: Kubernetes Node Agent
The node agent of Kubernetes, the container cluster manager
N: There are 1007 additional records. Please use the '-a' switch to see them.
Finally, we are ready to install the kubernetes packages.
apt-get update
apt-get install kubeadm kubectl kubelet
Some more settings on all nodes:
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat <<EOF | tee /etc/sysctl.d/k8s.conf
# Allow IP forwarding for kubernetes
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
EOF
sysctl --system
Now, to be sure, write a small check script that verifies the settings, and run it on every node.
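A minimal sketch of such a script (file name is mine):
cat > check-node.sh <<'EOF'
#!/usr/bin/env bash
# verify kernel modules, sysctl values and containerd on this node
for mod in overlay br_netfilter; do
  lsmod | grep -q "^$mod" && echo "module $mod: ok" || echo "module $mod: MISSING"
done
for key in net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.ipv6.conf.default.forwarding; do
  echo "$key = $(sysctl -n $key)"
done
echo "containerd: $(systemctl is-active containerd)"
EOF
bash check-node.sh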
Go on with https://community.hetzner.com/tutorials/install-kubernetes-cluster – Step 3.3 Setup Control Plane
kubeadm config images pull
# on master only
master$ kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.27.1 \
--ignore-preflight-errors=NumCPU \
--upload-certs \
--apiserver-cert-extra-sans 10.0.0.1
The --apiserver-cert-extra-sans parameter of kubeadm specifies additional Subject Alternative Names (SANs) for the API server's certificate. SANs indicate additional host names or IP addresses for which a TLS certificate is valid; extra SANs are useful whenever the API server must be reachable under different names or addresses.
The --pod-network-cidr setting specifies the IP address range for the Pod network, i.e. the network used for communication between Pods within the cluster. 10.244.0.0/16 is the default that flannel expects.
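Once init has run, the extra SAN can be checked directly on the generated certificate:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'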
Try to start the install – preflight errors:
root@master-1:~# kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.27.1 \
--ignore-preflight-errors=NumCPU \
--upload-certs \
--apiserver-cert-extra-sans 10.0.0.1
[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.28.2" Control plane version: "1.27.1"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Check the available versions of kubelet, then either downgrade kubelet or point kubeadm at the newer version (we end up doing the latter below):
apt list -a kubelet
...
kubelet/kubernetes-xenial 1.27.1-00 amd64
apt remove kubelet
apt install kubelet=1.27.1-00
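Either way, it is a good idea to hold the packages so a later apt upgrade cannot reintroduce the skew:
apt-mark hold kubelet kubeadm kubectl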
Run kubeadm again, this time targeting v1.28.2:
root@master-1:~# bash -x do10.sh
+ kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.28.2 --ignore-preflight-errors=NumCPU --upload-certs --apiserver-cert-extra-sans 10.0.0.1
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0131 17:04:22.278500 10016 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-1] and IPs [10.96.0.1 116.203.13.35 10.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1] and IPs [116.203.13.35 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1] and IPs [116.203.13.35 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.509373 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
00ce7d1702ee599a942b5c02c1587cdbda6bc72464159115e3e5ec01342e0409
[mark-control-plane] Marking the node master-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: aqsff8.dm6q98vpd0askqph
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 116.203.13.35:6443 --token XXXXXX.YYYYYYYYYYY \
--discovery-token-ca-cert-hash sha256:14680655ee0b3624f5da01eb11cbce66995038445de5a4668f043ea6ca0fc848
# token xxx-ed out
The Kubernetes control plane is running! The kubeadm join command uses the floating IP configured on the master node.
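Note that the bootstrap token expires after 24 hours by default; a fresh join command can be printed on the master at any time:
kubeadm token create --print-join-command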
But is everything fine? No, the coredns pods are not running, even though they carry these tolerations:
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 49s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Try removing the taint, then install the Hetzner cloud controller manager.
root@master-1:~# kubectl taint nodes master-1 node.kubernetes.io/not-ready-
node/master-1 untainted
# a different error gives hope
Warning FailedMount 22s (x7 over 53s) kubelet MountVolume.SetUp failed for volume "config-volume" : object "kube-system"/"coredns" not registered
Warning NetworkNotReady 20s (x18 over 54s) kubelet network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
# CNI comes next...
# download the hcloud cloud controller manager manifest, then apply
curl -o ccm-networks.yaml https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/ccm-networks.yaml
kubectl apply -f ccm-networks.yaml
serviceaccount/hcloud-cloud-controller-manager created
clusterrolebinding.rbac.authorization.k8s.io/system:hcloud-cloud-controller-manager created
deployment.apps/hcloud-cloud-controller-manager created
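Note: the deployment reads the hcloud API token and the network from a secret named hcloud in kube-system (key names as referenced by ccm-networks.yaml), so something like this needs to exist first (token redacted):
kubectl -n kube-system create secret generic hcloud \
  --from-literal=token=<read-write hcloud API token> \
  --from-literal=network=kubernetes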
Next is the network plugin (CNI) – here we use flannel; it could also be cilium. Download and apply:
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Hurrah! coredns started running as soon as the CNI was applied to the cluster.
k get pods -n kube-system   # k = alias for kubectl
NAME READY STATUS RESTARTS AGE
coredns-7b5bf49d48-9xxxx 1/1 Running 0 11m
We can continue and join worker-1 now.
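That is the join command from the init output, run as root on the worker; afterwards the new node should appear on the master (a sketch, token elided as above):
# on worker-1
kubeadm join 116.203.13.35:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:14680655ee0b3624f5da01eb11cbce66995038445de5a4668f043ea6ca0fc848
# back on master-1
kubectl get nodes -o wide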
What else? We should look more into floating IP concepts (it's a VIP, really). In another project I used cilium as the CNI – worth checking out here on the next run. Automation should be done with one setup script, terraform, or whatever tool fits. Furthermore, the Hetzner firewall, MetalLB and a load balancer are down the road.
Helm is nice, too:
# download helm binary on master-1
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-controller ingress-nginx/ingress-nginx --debug
helm uninstall ingress-controller
For ease of administration, install k9s on master-1.
mkdir ~/k9s-installation
cd ~/k9s-installation
curl -LO https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_amd64.tar.gz
tar xf k9s_Linux_amd64.tar.gz
sudo mv k9s /usr/local/bin

What next?
- allow kubectl from the mac instead of logging in to the master (see the sketch below)
- put everything into one setup script and do a test run
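For the first item, a minimal sketch (assuming the master-1 ssh alias from above, kubectl installed locally, and the API server reachable on the floating IP):
scp master-1:/etc/kubernetes/admin.conf ~/.kube/hetzner-config
export KUBECONFIG=~/.kube/hetzner-config
kubectl get nodes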