Install kind on Oracle Linux
Introduction
kind is an open-source tool for running a locally hosted Kubernetes cluster, using Podman containers as the cluster nodes. It gives both developers and DevOps administrators a way to quickly create a Kubernetes cluster on a single machine, without the complicated and lengthy setup that a full deployment usually entails.
Objectives
In this tutorial, you'll learn how to:
- Install kubectl
- Install kind
- Use kind to start a single-node Kubernetes cluster
Prerequisites
Minimum of a single Oracle Linux 9 or later system
Each system should have Oracle Linux installed and configured with:
- A non-root user account with sudo access
- Podman and cURL packages
- Cgroups v2
Deploy Oracle Linux
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ol
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e use_podman=true -e update_all=true -e os_version="9"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Install the Kubernetes Command Line Tool
kubectl is a command-line tool for interacting with Kubernetes clusters. It can be installed on Linux, macOS, and MS Windows systems. This lab demonstrates installing it on an Oracle Linux x86_64 system.
Note: The kubectl version must be within one minor version of the Kubernetes version deployed on the cluster. The steps provided here install the latest Kubernetes and kubectl versions.
Open a terminal and connect via SSH to the ol-node-01 instance.
ssh oracle@<ip_address_of_instance>
Download the latest version of kubectl.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
Note: If a different version of kubectl is needed to match an older Kubernetes version you are using in kind, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) section of the command shown above with the specific version. For example: curl -LO "https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl".
(Optional) Validate the downloaded binary file.
Download the checksum file.
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
Validate the kubectl download against the checksum file.
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
Example Output:
[oracle@ol-node01 ~]$ echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
kubectl: OK
Install kubectl.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Note: If you do not have sudo privileges, use the steps shown below to install kubectl into your ~/.local/bin directory.
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
Then add ~/.local/bin to your $PATH, for example as shown below.
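A minimal way to do this, assuming a Bash login shell, is to append the export to your shell profile:
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc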
Verify the kubectl installation.
kubectl version --client --output=yaml
Example Output:
[oracle@ol-node01 ~]$ kubectl version --client --output=yaml
clientVersion:
  buildDate: "2024-04-17T17:36:05Z"
  compiler: gc
  gitCommit: 7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a
  gitTreeState: clean
  gitVersion: v1.30.0
  goVersion: go1.22.2
  major: "1"
  minor: "30"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
Configure the Host for Rootless Podman
The host running kind requires cgroup v2 and a few additional settings to work with rootless Podman.
Ensure the kernel version supports cgroup v2.
grep cgroup /proc/filesystems
The output should include the line cgroup2.
Verify That Cgroup v2 is Enabled
Note: Oracle Linux 9 ships with cgroup v2 enabled by default.
Check the cgroup controller list.
cat /sys/fs/cgroup/cgroup.controllers
The output should return similar results:
cpuset cpu io memory hugetlb pids rdma
Check the cgroup2 mounted file system.
mount | grep cgroup2
The output should return similar results:
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
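As an additional sanity check, you can ask Podman itself what it detects (the exact output formatting may vary between Podman versions):
podman info | grep -i cgroup
The output should include a line reporting the cgroup version as v2.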
Enable Cgroup Controller Delegation
Create the configuration file to enable delegation of the CPU controller.
sudo mkdir /etc/systemd/system/user@.service.d
cat << EOF | sudo tee /etc/systemd/system/user@.service.d/delegate.conf > /dev/null
[Service]
Delegate=yes
EOF
sudo systemctl daemon-reload
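After your next login (or the reboot performed later in this lab), you can optionally confirm that delegation took effect. This check assumes the standard systemd cgroup v2 layout:
cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
The list should now include the cpu controller alongside memory and pids.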
Allow Container Access to Iptables
Configure the iptables modules to load on system boot.
cat << EOF | sudo tee /etc/modules-load.d/iptables.conf > /dev/null
ip6_tables
ip6table_nat
ip_tables
iptable_nat
EOF
sudo systemctl restart systemd-modules-load.service
Verify the kernel modules load.
lsmod | grep -E "^ip_tables|^iptable_filter|^iptable_nat|^ip6"
Reboot
Reboot the instance for the changes to take effect.
sudo systemctl reboot
Note: Wait a few minutes for the instance to restart.
Reconnect to the ol-node-01 instance using SSH.
Install the Release Binaries
Download the latest version.
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
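(Optional) kind publishes a checksum alongside each release binary. Assuming the standard asset naming for the v0.22.0 release, a verification sketch similar to the earlier kubectl check looks like this:
curl -LO https://github.com/kubernetes-sigs/kind/releases/download/v0.22.0/kind-linux-amd64.sha256sum
echo "$(cut -d' ' -f1 kind-linux-amd64.sha256sum)  kind" | sha256sum --check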
Install kind.
sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
Confirm the kind version.
kind version
Create a Cluster
Configure Your Shell.
Running kind with rootless Podman requires an additional environment variable.
cat << EOF >> ~/.bashrc
export KIND_EXPERIMENTAL_PROVIDER=podman
EOF
source ~/.bashrc
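You can confirm the variable is set in the current shell:
echo $KIND_EXPERIMENTAL_PROVIDER
The command should print podman.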
Create a single node kind cluster.
kind create cluster
Example Output:
NOTE: If using the free lab environment, the output will look similar to that shown below due to missing font packages on the Luna Desktop. It DOES NOT affect the functionality of kind in any way.
enabling experimental podman provider
Creating cluster "kind" ...
 ��� Ensuring node image (kindest/node:v1.29.2) ������
 ��� Preparing nodes ���� �
 ��� Writing configuration ������
 ��� Starting control-plane ���������
 ��� Installing CNI ������
 ��� Installing StorageClass ������
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! ����
If using your system, then the output should look similar to this:
[oracle@ol-node01 ~]$ kind create cluster
enabling experimental podman provider
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.29.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 😅
Note: It is possible to use a custom name when creating the kind cluster by supplying the name to use, as shown in this example: kind create cluster --name <name-of-cluster-to-create>.
Confirm that kubectl can connect to the kind-based Kubernetes cluster.
kubectl cluster-info --context kind-kind
Example Output:
[oracle@ol-node01 ~]$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:41691
CoreDNS is running at https://127.0.0.1:41691/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You now have a working kind-based Kubernetes cluster.
The following 'Optional' sections show how to look inside the deployed containers to confirm a full Kubernetes implementation exists and how to create additional single-node kind-based Kubernetes clusters.
If you don't require the 'Optional' sections, please click the "Next" button until you reach the final section ('Delete the kind Clusters').
(Optional) Verify the Kubernetes Environment
Run a few commands to confirm the deployment of a fully functional Kubernetes cluster.
Use kind to check for the details of any running kind clusters.
kind get clusters
Example Output:
[oracle@ol-node01 ~]$ kind get clusters
enabling experimental podman provider
kind
This confirms the cluster is running.
Note: Because Podman is running the kind executable, many commands will include a reference to using the 'experimental podman provider', like this: enabling experimental podman provider.
Check what Podman containers are running.
podman ps -a
Example Output:
[oracle@ol-node01 ~]$ podman ps -a
CONTAINER ID  IMAGE                                                                                            COMMAND               CREATED            STATUS                        PORTS                      NAMES
7b27757cb244  quay.io/podman/hello:latest                                                                      /usr/local/bin/po...  About an hour ago  Exited (0) About an hour ago                             inspiring_allen
fb2bb3f1e6d9  docker.io/kindest/node@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72                         17 minutes ago     Up 17 minutes                 127.0.0.1:41691->6443/tcp  kind-control-plane
Note: If more nodes exist in the kind cluster, a separate container is listed for each 'node' of the kind cluster.
Confirm that kubectl knows about the newly created cluster.
Notice in the previous output that the 'PORTS' column shows network traffic on the local machine being redirected from port 41691 to port 6443 inside the container, as shown here: 127.0.0.1:41691->6443/tcp. Before proceeding, check that kubectl knows about the newly created cluster:
grep server ~/.kube/config
Example Output:
[oracle@ol-node01 ~]$ grep server ~/.kube/config
    server: https://127.0.0.1:41691
Note: The actual Port number is dynamically assigned each time a kind cluster starts.
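You can also ask Podman directly which host port maps to the node's API server port (6443) inside the container:
podman port kind-control-plane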
Look at the kind cluster using the kubectl get nodes command.
kubectl get nodes
Example Output:
[oracle@ol-node01 ~]$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   81s   v1.29.2
Confirm that you have a fully functional Kubernetes node by getting a list of its Pods.
kubectl get pods -A
Example Output:
[oracle@ol-node01 ~]$ kubectl get pods -A
NAMESPACE            NAME                                          READY   STATUS    RESTARTS   AGE
kube-system          coredns-5d78c9869d-9xjd5                      1/1     Running   0          53s
kube-system          coredns-5d78c9869d-bmpgs                      1/1     Running   0          53s
kube-system          etcd-kind-control-plane                       1/1     Running   0          69s
kube-system          kindnet-vtxz7                                 1/1     Running   0          54s
kube-system          kube-apiserver-kind-control-plane             1/1     Running   0          69s
kube-system          kube-controller-manager-kind-control-plane    1/1     Running   0          69s
kube-system          kube-proxy-dq4t7                              1/1     Running   0          54s
kube-system          kube-scheduler-kind-control-plane             1/1     Running   0          69s
local-path-storage   local-path-provisioner-6bc4bddd6b-z8z55       1/1     Running   0          53s
Confirm that containerd (and not Podman) is running the kind cluster.
kubectl get nodes -o wide
Example Output:
[oracle@ol-node01 ~]$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   45m   v1.29.2   10.89.0.3     <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-205.149.5.1.el9uek.x86_64   containerd://1.7.13
Get a more in-depth look at the kind cluster.
kubectl get all -A | more
Example Output:
[oracle@ol-node01 ~]$ kubectl get all -A | more
NAMESPACE            NAME                                              READY   STATUS    RESTARTS   AGE
kube-system          pod/coredns-5d78c9869d-5b5ft                      1/1     Running   0          18m
kube-system          pod/coredns-5d78c9869d-h2psz                      1/1     Running   0          18m
kube-system          pod/etcd-kind-control-plane                       1/1     Running   0          18m
kube-system          pod/kindnet-b6x9g                                 1/1     Running   0          18m
kube-system          pod/kube-apiserver-kind-control-plane             1/1     Running   0          18m
kube-system          pod/kube-controller-manager-kind-control-plane    1/1     Running   0          18m
kube-system          pod/kube-proxy-lpjpj                              1/1     Running   0          18m
kube-system          pod/kube-scheduler-kind-control-plane             1/1     Running   0          18m
local-path-storage   pod/local-path-provisioner-6bc4bddd6b-hjs7m       1/1     Running   0          18m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  18m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   18m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kindnet      1         1         1       1            1           kubernetes.io/os=linux   18m
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   18m

NAMESPACE            NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system          deployment.apps/coredns                  2/2     2            2           18m
local-path-storage   deployment.apps/local-path-provisioner   1/1     1            1           18m

NAMESPACE            NAME                                                DESIRED   CURRENT   READY   AGE
kube-system          replicaset.apps/coredns-5d78c9869d                  2         2         2       18m
local-path-storage   replicaset.apps/local-path-provisioner-6bc4bddd6b   1         1         1       18m
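As one further optional check, not part of the lab's required steps, you could schedule a throwaway pod and confirm it reaches the Running state before removing it (the pod name test-nginx here is arbitrary):
kubectl run test-nginx --image=nginx --restart=Never
kubectl get pod test-nginx
kubectl delete pod test-nginx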
(Optional) Check if Kubernetes is Running Inside the Container
The curious user may still be uncertain whether kind has deployed a full-featured Kubernetes cluster inside a container. The following steps outline how to confirm this.
Connect to the container's BASH shell.
podman exec -it kind-control-plane bash
Example Output:
[oracle@ol-node01 ~]$ podman exec -it kind-control-plane bash root@kind-control-plane:/#
crictl is a command-line interface to inspect and debug container runtimes on a Kubernetes node. Use crictl to confirm that all the expected Kubernetes services exist within the kind container.
crictl ps
Example Output:
root@kind-control-plane:/# crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID          POD
c76a6f0931550   ce18e076e9d4b   25 minutes ago   Running   local-path-provisioner    0         4b540b5b2f209   local-path-provisioner-6bc4bddd6b-z8z55
7e87927b14a75   ead0a4a53df89   25 minutes ago   Running   coredns                   0         4f418c623d824   coredns-5d78c9869d-9xjd5
e7d5f3489f084   ead0a4a53df89   25 minutes ago   Running   coredns                   0         7c737a7820ccc   coredns-5d78c9869d-bmpgs
4f4357edf61c1   b0b1fa0f58c6e   25 minutes ago   Running   kindnet-cni               0         9be3ac0e411f8   kindnet-vtxz7
b785e2d63fb8e   9d5429f6d7697   25 minutes ago   Running   kube-proxy                0         fde29e49f009b   kube-proxy-dq4t7
4d90a3a20fc04   205a4d549b94d   25 minutes ago   Running   kube-scheduler            0         ded2754976cd9   kube-scheduler-kind-control-plane
bed955e049597   9f8f3a9f3e8a9   25 minutes ago   Running   kube-controller-manager   0         423ad427221b3   kube-controller-manager-kind-control-plane
5335909a407cb   c604ff157f0cf   25 minutes ago   Running   kube-apiserver            0         51fb09697ae67   kube-apiserver-kind-control-plane
051d8db7eac77   86b6af7dd652c   25 minutes ago   Running   etcd                      0         b9e063633caf6   etcd-kind-control-plane
All the expected Kubernetes file structures also exist, confirming Kubernetes is present.
ls /etc/kubernetes
ls /etc/kubernetes/manifests
Example Output:
root@kind-control-plane:/# ls /etc/kubernetes
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
root@kind-control-plane:/# ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
Exit out of the container.
exit
(Optional) Create Another Single-Node kind Cluster
kind allows creating and running multiple clusters in parallel.
Create another kind cluster.
Note that it is necessary to use the --name parameter this time because a cluster using the default name (kind) is already present.
kind create cluster --name newcluster
Example Output:
[oracle@ol-node01 ~]$ kind create cluster --name newcluster
enabling experimental podman provider
Creating cluster "newcluster" ...
 ✓ Ensuring node image (kindest/node:v1.29.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-newcluster"
You can now use your cluster with:

kubectl cluster-info --context kind-newcluster

Thanks for using kind! 😊
Confirm a new cluster has started.
Notice that two clusters should be listed: kind and newcluster.
kind get clusters
Example Output:
[oracle@ol-node01 ~]$ kind get clusters
enabling experimental podman provider
kind
newcluster
Check how many containers exist using Podman.
podman ps
Example Output:
[oracle@ol-node01 ~]$ podman ps
CONTAINER ID  IMAGE                                                                                            COMMAND  CREATED        STATUS        PORTS                      NAMES
83707b3b7b4c  docker.io/kindest/node@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72            2 minutes ago  Up 2 minutes  127.0.0.1:35735->6443/tcp  kind-control-plane
fa262f08de76  docker.io/kindest/node@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72            9 minutes ago  Up 9 minutes  127.0.0.1:41691->6443/tcp  newcluster-control-plane
Check how many Kubernetes clusters are in the .kube/config file.
grep server ~/.kube/config
Example Output:
[oracle@ol-node01 ~]$ grep server ~/.kube/config
    server: https://127.0.0.1:35735
    server: https://127.0.0.1:41691
(Optional) Connect to Both Clusters Using the Kubernetes Command Line Tool
Confirm that kubectl shows both clusters.
kubectl get nodes
Example Output:
[oracle@ol-node01 ~]$ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
newcluster-control-plane   Ready    control-plane   27m   v1.29.2
Is something amiss? No, this is expected behavior. Creating a cluster sets the current kubectl context to it, which is why only the newcluster node appears. If more than one cluster is present, use the --context flag to indicate which cluster to connect to.
kubectl get nodes --context kind-kind
and
kubectl get nodes --context kind-newcluster
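To see which cluster kubectl targets by default at any time, check the current context:
kubectl config current-context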
(Optional) Change the Default Cluster Context
Changing the default cluster context to one of the other cluster contexts is possible.
Confirm which cluster is the current default.
kubectl config get-contexts
Example Output:
[oracle@ol-node01 ~]$ kubectl config get-contexts
CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
          kind-kind         kind-kind         kind-kind
*         kind-newcluster   kind-newcluster   kind-newcluster
The asterisk (*) indicates the current default context.
Switch the default context to kind-kind.
kubectl config use-context kind-kind
Example Output:
[oracle@ol-node01 ~]$ kubectl config use-context kind-kind
Switched to context "kind-kind".
Confirm the default context now points to the kind-kind cluster.
kubectl config get-contexts
Example Output:
[oracle@ol-node01 ~]$ kubectl config get-contexts
CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
*         kind-kind         kind-kind         kind-kind
          kind-newcluster   kind-newcluster   kind-newcluster
Delete the kind Clusters
Deleting a cluster is as simple as creating it.
Delete the default cluster.
kind delete cluster
Example Output:
[oracle@ol-node01 ~]$ kind delete cluster
enabling experimental podman provider
Deleting cluster "kind" ...
Deleted nodes: ["kind-control-plane"]
(Optional) Delete the second cluster.
Deleting the second cluster requires using the --name option.
kind delete cluster --name newcluster
Confirm there are no kind clusters running.
kind get clusters
Example Output:
[oracle@ol-node01 ~]$ kind get clusters
enabling experimental podman provider
No kind clusters found.
Summary
That completes our demonstration of how to install and run rootless kind on Podman. However, kind has many more features that go beyond the scope of this lab, such as these:
- Use different Kubernetes versions
- Define multiple control plane and worker nodes (see the sketch after this list)
- Set up an Ingress controller
- Define a MetalLB load balancer
- Use IPv4 (the default), IPv6, or dual-stack clusters
- Work with local or private registries (registries requiring authentication)
- Work with an audit policy
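For instance, multiple nodes are declared in a kind configuration file. The following sketch uses kind's v1alpha4 config format to create a hypothetical cluster named multinode with one control plane node and two workers:
cat << EOF > kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --name multinode --config kind-multi-node.yaml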
Locate more details in the upstream kind documentation.
In the meantime, many thanks for taking the time to try this lab.