HOW-TO: BUILD KUBERNETES CLUSTERS
=================================
INTRODUCTION
‾‾‾‾‾‾‾‾‾‾‾‾
Do you want to quickly and easily set up a Kubernetes cluster from scratch?
Building a working Kubernetes cluster from scratch can be a frustrating
undertaking. I know this from experience setting up clusters for my homelab.
This how-to takes all of the lessons I learned and wraps them in an easy-to-run
script that will set up control-plane and worker nodes for you. You can run
just a single node (the control-plane node will also be a worker node), or add
additional worker nodes, tainting the control-plane node so that it is not also
a worker node, if desired.
If you examine the script you’ll see it’s just the commands, configuration,
file changes and setup you would normally do from the command line — if you
were to install everything manually. There are no shortcuts, no minikube, no
kind — just kubelet, kubeadm, kubectl and a few other essentials. A proper
Kubernetes set-up.
__________________
+++ THE SCRIPT +++
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
The script I created can be viewed and saved from here: node-setup
Feel free to save the script, modify it, hack on it and make it your own.
The script will create a control-plane node with a Docker registry you can
push to for deployments. The script will install the Kubernetes metrics
server. By default, the Kubernetes dashboard will be installed and exposed
with a long-lived token — unless NO_DASHBOARD is set (See “CUSTOMISING THE
CONTROL-PLANE INSTALL”).
This how-to is about building your own Kubernetes cluster, not a how-to on
using it. That how-to may come later on. For now you are expected to know how
to use a Kubernetes cluster, or to be curious enough to learn on your own.
IMPORTANT! It is NOT recommended that you run this script on your main PC,
turning it into a single-node cluster. You can, but it is ill-advised. Either
use a spare PC or a virtual machine. You have been warned…
A QUICK OVERVIEW
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
To set up a control-plane node:
1. Install Debian using the recommended net install ISO image onto a PC or
virtual machine.
2. Copy the node-setup script to the machine.
3. Run the script.
For a single node running the control-plane and pods, that’s it!
If you want you can stop here and just use the control-plane node to poke
around and experiment with Kubernetes. Or… you could add some worker nodes.
To set up additional worker nodes, for each node:
1. Install Debian using the recommended net install ISO image onto a PC or
virtual machine.
2. Copy the node-setup script to the machine.
3. Run the script specifying the IP address of the control-plane node, the
Docker password, joining token and hash from the control-plane node.
The rest of this how-to describes the gory details step-by-step so that you
can have your own Kubernetes cluster.
ON INSTALLING DEBIAN TRIXIE
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
This how-to assumes a little familiarity with Debian — not a lot, just enough
to install it. Debian has very good manuals available if you need guidance:
https://www.debian.org/releases/trixie/installmanual
To make things as easy and as trouble-free as possible, the script has been
tested to work with a specific Debian Trixie net install ISO image (784 MB):
https://cdimage.debian.org/debian-cd/13.2.0/amd64/iso-cd/debian-13.2.0-amd64-netinst.iso
Having said that, some pointers on the installation are warranted:
• The control-plane node will need at least 2 CPUs, 5 GB disk space, 2 GB RAM.
• Worker nodes will need at least 2 CPUs, 3 GB disk space and 2 GB RAM.
• Make sure to create a normal user when installing Debian for the
control-plane node if you want to use kubectl on the control-plane node
itself. This is not necessary, but can be useful when tinkering.
• Make sure to create a normal user if you want to SSH into the nodes. This
is not necessary, but can be a useful escape hatch when learning.
• When setting up the disk it is advised not to set up a swap partition, to
disable it afterwards, or to know what you are doing[1]. Kubernetes does not
like swapping.
• At the “select and install software” step, select “SSH server” — if you want
to SSH into the node — and “standard system utilities” only. A desktop or
other applications are not required; save the disk space for the node.
The minimum disk space requirements at the start of this list assume a
minimal Debian install.
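If you did end up with active swap after the install, it can be disabled after
the fact. The sketch below is illustrative (the UUIDs are made up) and applies
the /etc/fstab edit to a sample copy so you can try it safely; on a real node
you would run the commented commands as root:

```shell
# On the real node, as root, you would run:
#   swapoff -a                                         # stop swapping now
#   sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /etc/fstab
# Demonstration of the fstab edit on a sample copy (made-up entries):
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-2222 /    ext4 errors=remount-ro 0 1
UUID=3333-4444 none swap sw               0 0
EOF
# Comment out any line that mounts a swap area so it stays off on reboot
sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

After editing the real /etc/fstab, running “free -h” should report zero swap
following a reboot.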
The node-setup script should work with other Debian versions and other apt
based Linux distributions, however this has not been tested. The script should
also work with Raspberry Pi running Raspberry Pi OS Lite, although at the time
of writing the author has been too busy to test this yet… a Raspberry Pi 4 or
later with at least 2 GB RAM should make for a nice node, 3 or 4 of them a nice
little cluster :)
SETTING UP A CONTROL-PLANE NODE
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
The control-plane node is the node that controls the cluster.
First of all, set up a PC or virtual machine with Debian using the recommended
net install ISO image.
After the base Debian install you can take a backup. This is handy if you are
experimenting as you can just restore the backup and get back to a fresh
install quickly.
Once your machine is set up, log in to the console. This can either be as root,
or a normal user who then runs “su -” or “sudo bash” to get a root shell.
Next, get a copy of the node-setup script and save it to the node. This can be
a saved copy of the script or you can download it directly if the node has
internet access:
wget https://www.wolfmud.org/annex/node-setup
Make the script executable:
chmod u+x ./node-setup
Run the script:
./node-setup
When the script finishes, the cluster should be up and running, ready to use.
The script will have created node-setup.log and cluster-create.log in the
current directory. The node-setup.log file contains IMPORTANT information for
accessing the cluster, Docker registry and Kubernetes dashboard. Copy this
file somewhere safe, preferably to your main machine.
CUSTOMISING THE CONTROL-PLANE INSTALL
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
The node-setup script takes a number of environment variables that let you
customise the installation of the control-plane node:
• USERNAME
The name of an EXISTING user (created when installing Debian) to set up
for using kubectl and SSH.
• SSH_PUB
SSH public key to add to USERNAME’s authorized_keys file. SSH_PUB
should contain the content of e.g. .ssh/id_rsa.pub and NOT the name
of the file.
• NO_DASHBOARD
If set to any value, e.g. 1, then the Kubernetes dashboard will not be
installed and exposed. This also means Helm is not required and will not
be installed.
• DOCKER_USER
The name of the Docker registry user to set up (default: admin).
• DOCKER_PASS
The password to use for the DOCKER_USER. If not specified a random
password will be generated and available in the node-setup.log file.
• KUBERNETES_VER
Version of Kubernetes to install (default: 1.34).
• PAUSE_VER
Version of pause image to use (default: 3.10.1). Must be compatible with
the version of Kubernetes chosen.
• HELM_VER
Version of Helm to install (default: 4.0.0).
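As an illustration, a customised control-plane install might set an existing
user up for kubectl and SSH and skip the dashboard. The user name “alice” and
the key path here are made up; substitute your own:

```shell
USERNAME=alice \
SSH_PUB="$(cat /home/alice/.ssh/id_rsa.pub)" \
NO_DASHBOARD=1 \
./node-setup
```

Any variables you do not set fall back to the defaults listed above.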
LOG FILE INFORMATION
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Instead of you needing to run dozens of commands to pick out the useful
information, the script gathers it at the end of the node-setup.log file for
you.
NOTE: In the following example some values have been truncated and your IP
addresses will probably be different:
=== Access Information ===
Dashboard URL............: https://172.16.1.12:30443
Dashboard token..........: eyJhbGciOiJSUzI1NiIsImtpZCI6Im02cV80Q2g3dUg1N2V…
Docker registry URL......: 172.16.1.12:5000
Docker registry user.....: admin
Docker registry password.: K4t8kTR3n3c85inGQp0y
Cluster join token.......: 0qttpe.3ze2vxgxuguqkvmq
Cluster join hash........: ef75d6cbeed2b74e7a54d2b45bde05643e5bf69be2964a41…
=== kubectl admin.conf ===
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR…
    server: https://control-node:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVE…
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQk…
WORKER NODE SETUP DETAILS
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
First of all, set up a PC or virtual machine with Debian using the recommended
net install ISO image. Or clone the control-plane node’s backup. If you clone
the control-plane node’s backup to create additional worker nodes, just
remember to change[2] the hostname and IP address on the clones :)
Worker nodes must be able to ping the control-plane node on the network.
VERY IMPORTANT: If you are using virtual machines, make sure all the network
adapters have unique MAC addresses. This will save you hours of frustration!
Once your machine is set up, log in to the console. This can either be as root,
or a normal user who then runs “su -” or “sudo bash” to get a root shell.
Next, get a copy of the node-setup script and save it to the node. This can be
a saved copy of the script or you can download it directly if the node has
internet access:
wget https://www.wolfmud.org/annex/node-setup
Make the script executable:
chmod u+x ./node-setup
Run the script, substituting the values from the “Access Information” section
of YOUR copy of the control-plane’s node-setup.log file:
CNODE_IP=<control-plane IP> \
DOCKER_PASS=<Docker registry password> \
JOIN_TOKEN=<Cluster join token> \
JOIN_HASH=<Cluster join hash> \
./node-setup
As an example, if we substitute values from the “Access Information” section
of the example node-setup.log above, the command would look like this:
CNODE_IP=172.16.1.12 \
DOCKER_PASS=K4t8kTR3n3c85inGQp0y \
JOIN_TOKEN=0qttpe.3ze2vxgxuguqkvmq \
JOIN_HASH=ef75d6cbeed2b74e7a54d2b45bde05643e5bf69be2964a419e87ca7… \
./node-setup
When the script finishes running, the node should have been added to the
cluster. The script will have created node-setup.log and cluster-join.log in
the current directory.
Log in to the control-plane node (NOT the worker node you just set up!) as the
user you created, to check on your nodes. You can log into the console of the
control-plane node or via SSH if you set it up. It may take a moment for the
worker node to become ready, just re-run the command until the status shows
‘Ready’:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
control-node Ready control-plane 2d21h v1.34.1
worker1-node Ready <none> 2d21h v1.34.1
>
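Rather than re-running the command by hand, you can ask kubectl to watch for
changes; it prints a new line each time a node’s status changes (press Ctrl-C
to stop):

```shell
kubectl get nodes -w
```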
You can also use the section “ACCESSING THE CLUSTER FROM ANOTHER MACHINE”
below, to check on your nodes.
CUSTOMISING THE WORKER INSTALL
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
The node-setup script takes a number of environment variables that let you
customise the installation of worker nodes:
• CNODE_IP
The name or IP address of the control-plane node. This is a required
field.
• USERNAME
The name of an EXISTING user (created when installing Debian) to set up
for using SSH.
• SSH_PUB
SSH public key to add to USERNAME’s authorized_keys file. SSH_PUB
should contain the content of e.g. .ssh/id_rsa.pub and NOT the name
of the file.
• DOCKER_USER
The name of the Docker registry user to set up (default: admin). This
must be the same user as used for the control-plane setup, as shown in
the control-plane’s node-setup.log file.
• DOCKER_PASS
The password to use for the DOCKER_USER. This must be the same password,
specified or generated, as shown in the control-plane’s node-setup.log
file. This is a required field.
ACCESSING THE CLUSTER FROM ANOTHER MACHINE
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
In order to access the cluster from another machine, for example your desktop,
you need to install kubectl and configure the cluster’s context. The version
of kubectl you install should match the version of Kubernetes the cluster is
running, which is currently v1.34 by default. For this reason I highly suggest
you follow the official documentation[3] for Linux, Windows and macOS. You can
stop following the official documentation once you reach the section “Verify
kubectl configuration”.
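For reference, the Linux amd64 steps from the official documentation boil down
to something like the following. This is a condensed sketch: the version is
pinned to the cluster’s v1.34 line rather than the latest stable release, and
the checksum verification step from the official docs is omitted here:

```shell
curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```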
Once you have kubectl installed you need to configure it for your cluster.
First create yourself a .kube directory:
mkdir ~/.kube
Next you want to place a file called ‘config’ in that directory. You can copy
it from the control-plane node if you set up SSH access, replacing
‘control-node’ with the machine’s name or IP address:
scp control-node:~/.kube/config ~/.kube/config
Or as an alternative, you can copy and paste the config from the last section
of the control-node’s node-setup.log file.
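Whichever way you copy it across, it is worth restricting the permissions on
the file, since recent versions of kubectl warn when the config is group- or
world-readable. The “touch” below merely stands in for the scp or
copy-and-paste step:

```shell
mkdir -p "$HOME/.kube"
touch "$HOME/.kube/config"      # stands in for the scp/paste step above
chmod 600 "$HOME/.kube/config"  # owner read/write only
```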
You should then be able to see the nodes in the cluster from your machine.
Note that your node names will probably be different:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
control-node Ready control-plane 2d21h v1.34.1
worker1-node Ready <none> 2d21h v1.34.1
>
TAINTING THE CONTROL-PLANE NODE
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
By default the node-setup script will untaint the control-plane node so that
it is also a worker node. This can be useful for a small cluster, such as a
homelab where you only have a few nodes. However, the control-plane is usually
tainted and runs as its own node with separate worker nodes. Note that a
tainted control-plane node can be much smaller than the worker nodes, with
fewer CPU cores (minimum 2) and less memory (2 GB is usually enough).
These commands can be run on the control-plane node or, if you have set up
kubectl locally (see: “ACCESSING THE CLUSTER FROM ANOTHER MACHINE”), from your
local machine.
To taint the control-plane node so that it is NOT also a worker node:
kubectl taint node <name> node-role.kubernetes.io/control-plane:NoSchedule
If you later want to untaint the control-plane node so that it is also a
worker node:
kubectl taint node <name> node-role.kubernetes.io/control-plane:NoSchedule-
Note the hyphen at the end of the second command. For both commands replace
<name> with either the DNS name of the node or its IP address.
You can check a node for current taints using:
kubectl get node <name> -o=jsonpath='{.spec.taints}'
If the node is tainted (not a worker node) you will see something like:
[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]
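If you want to act on that output in a script, a simple grep for the
control-plane key is enough. The demo below runs against the sample output
above rather than a live cluster; on a real cluster you would capture the
output of the kubectl command instead (shown in the comment):

```shell
# Sample taints JSON, as shown above. On a live cluster you would use:
#   taints=$(kubectl get node <name> -o=jsonpath='{.spec.taints}')
taints='[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]'
if printf '%s' "$taints" | grep -q 'node-role.kubernetes.io/control-plane'; then
  echo "tainted (not a worker node)"
else
  echo "schedulable (also a worker node)"
fi
```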
FINAL THOUGHTS
‾‾‾‾‾‾‾‾‾‾‾‾‾‾
That’s the end of this how-to. If you followed the instructions you should now
have your own Kubernetes cluster up and running. It doesn’t matter if it is
just a single node or multiple nodes, physical machines or virtual.
If on the other hand you are having difficulties, or have comments or ideas on
improving this how-to or the script, drop me an email! diddymus@wolfmud.org
--
Diddymus
[1] Installing kubeadm, swap configuration, official documentation:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#swap-configuration
[2] Edit the files /etc/hostname, /etc/hosts and /etc/network/interfaces.
[3] Installing kubectl, official documentation:
https://kubernetes.io/docs/tasks/tools/#kubectl