Kubernetes Basics #11 - Kubeadm

Last Edited: 6/23/2025

This blog post introduces kubeadm in Kubernetes.

DevOps

So far, we've discussed how to properly set up a production-ready cluster with appropriate security and monitoring. However, we've always used Minikube, which creates a single-node cluster for local testing and development; for production, we'll most likely need multi-node clusters on remote servers. Therefore, in this article, we'll cover the basics of kubeadm for setting up a minimum viable, production-ready multi-node Kubernetes cluster.

Installing Kubeadm

Kubernetes requires a compatible host (most Linux distributions are compatible), 2 GB or more of RAM per node, and at least 2 CPUs on the control-plane machine (also called the master node). All nodes must also have unique MAC addresses and product UUIDs (verifiable with ifconfig -a and /sys/class/dmi/id/product_uuid, respectively) and have the required ports open (verifiable with nc 127.0.0.1 6443 -zv -w 2).
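In practice, the preflight checks above might look like this on each node (run these on every machine and compare the MAC addresses and UUIDs between nodes):

```shell
# Check MAC address uniqueness across nodes (compare output between machines)
ifconfig -a | grep ether    # or: ip link show

# Check product UUID uniqueness across nodes
sudo cat /sys/class/dmi/id/product_uuid

# Check that the API server port (6443) is reachable;
# prints a message instead of failing if nothing is listening yet
nc 127.0.0.1 6443 -zv -w 2 || echo "port 6443 not open yet"

# Check available memory (>= 2 GB) and CPU count (>= 2 on the control plane)
free -h
nproc
```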

Then, all machines need a container runtime installed to run the containers in pods, such as containerd, which is included when you install Docker (instructions for how to install Docker are accessible via link). To set up a Kubernetes cluster with kubeadm, we need to install kubeadm, kubelet, and kubectl on all machines. You can find instructions on how to install them and their dependencies for Debian and Red Hat-based Linux distributions here.
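As a sketch, the install on a Debian-based distribution follows the official docs roughly like this (the v1.30 in the repository URLs is an example version; substitute the minor version you want):

```shell
# Prerequisites for adding the Kubernetes apt repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes apt repository signing key (v1.30 is an example version)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes apt repository itself
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the three tools, then pin them so automatic
# upgrades don't change versions underneath a live cluster
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```

Repeat this on every machine that will be part of the cluster, control plane and workers alike.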

Setting Up a Cluster

To initialize the control plane, we use the kubeadm init <args> command. Depending on the container runtime and CNI plugin, we must provide the corresponding arguments in <args>. For example, if we choose Calico as the CNI for setting up network policies, we need to set the --pod-network-cidr argument to 192.168.0.0/16 and then install Calico with kubectl apply. By default, the control plane runs on a single node that is tainted so that regular pods cannot be scheduled on it. (You can add more control-plane nodes for high availability by following the instructions here.)
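A minimal sketch of the Calico case, run on the control-plane node (the Calico manifest URL pins an example version; check Calico's docs for the current one):

```shell
# Initialize the control plane with a pod CIDR matching Calico's default
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Make kubectl usable for the current non-root user,
# as printed in the kubeadm init output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install Calico (v3.28.0 is an example version)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
```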

After kubeadm init finishes, it prints a kubeadm join command for joining worker nodes to the control plane. We can copy this command and run it on each worker node (via SSH for remote servers) to join it to the cluster. Once all nodes have joined, we can set up images, configure resources, and apply them with kubectl apply, just as we've been doing with Minikube.
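The join step looks roughly like this; the address, token, and hash below are placeholders, and the real values come from your own kubeadm init output:

```shell
# On each worker node (placeholder address, token, and hash):
sudo kubeadm join 10.0.0.10:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# If you've lost the original output, regenerate a join command
# on the control plane:
kubeadm token create --print-join-command

# Back on the control plane, confirm all nodes have joined and are Ready:
kubectl get nodes
```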

Example Cluster Setup

After initializing a multi-node cluster with kubeadm, we can utilize all the resources and tools regarding security, monitoring, CI/CD, and so on that we've discussed so far to set up a secure, scalable, and maintainable production-ready cluster. Below is a visualization of an example production-ready cluster utilizing Helm, persistent volumes, horizontal pod scaling, Prometheus, ArgoCD, and Gateway API.

Production Cluster Example

The visualization omits pods, deployments, controller managers, schedulers, and other components: Envoy proxies, authorization policies, virtual services, and destination rules for the service mesh; HTTPRoutes for Gateway API; roles, role bindings, cluster roles, and service accounts for RBAC; and taints, tolerations, node affinity, and request/limit configurations for pod scheduling. We can also take a more sophisticated approach, such as using separate namespaces for services and/or stages and using Helm, Kustomize, and ArgoCD to vary configurations across stages and version them.
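As a hypothetical sketch of the namespace-per-stage idea, the same Helm chart can be installed into each stage's namespace with stage-specific values (the release name "myapp", the chart path, and the values files are placeholder names):

```shell
# One namespace per stage
kubectl create namespace staging
kubectl create namespace production

# Install the same chart into each stage with stage-specific overrides
# (release name, chart path, and values files are placeholders)
helm install myapp ./chart -n staging -f values-staging.yaml
helm install myapp ./chart -n production -f values-production.yaml
```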

As we can see from the above, setting up and administering a cluster properly is complicated and demands significant effort and expertise. This is why some prefer Docker Compose or monoliths, especially for smaller systems. However, systems at large scale are inherently complicated and hard to develop and maintain, and Kubernetes and its related tools are specifically designed to offer implementations and abstractions that make working with those systems easier when learned and applied effectively. Hence, even though the initial learning curve may be steep, investing the time and effort to master it is likely to be worthwhile.

Conclusion

In this article, we covered how to set up a production multi-node cluster with kubeadm and an example cluster setup using resources and tools that we've discussed so far. This article wraps up the Kubernetes series (at least temporarily), as we've covered all the fundamentals of Kubernetes, though we might cover related topics in the future. Nonetheless, I recommend trying to use Kubernetes and other related tools as much as possible in various settings for practice.

Resources