Install kind Using Rootless Podman on Oracle Linux

Introduction

kind is an open-source tool for running a locally hosted Kubernetes cluster that uses Podman containers as the cluster nodes. It lets developers and DevOps administrators quickly create a Kubernetes cluster on a single machine, without the complicated and lengthy setup that a full deployment would otherwise entail.

Objectives

In this lab, you'll learn how to:

  • Install Podman
  • Install kubectl
  • Install kind
  • Start kind using Rootless Podman
  • Use kind to start a single-node Kubernetes cluster (control plane and worker on the same node)

Prerequisites

  • A running instance of Oracle Linux 9

Verify the Lab Environment

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

  1. Open a terminal and connect via ssh to the ol-node01 instance.

    ssh oracle@<ip_address_of_ol_node01>

Install Podman

  1. Install Podman.

    sudo dnf install -y container-tools
  2. Confirm the Podman version.

    podman version

    Example Output:

    [oracle@ol-node01 ~]$ podman version
    Client:       Podman Engine
    Version:      4.9.4-rhel
    API Version:  4.9.4-rhel
    Go Version:   go1.21.9 (Red Hat 1.21.9-2.el9_4)
    Built:        Fri May  3 09:46:34 2024
    OS/Arch:      linux/amd64

Confirm Podman Works

A quick test confirms that Podman works.

podman run quay.io/podman/hello

Example Output:

[oracle@ol-node01 ~]$ podman run quay.io/podman/hello
Trying to pull quay.io/podman/hello:latest...
Getting image source signatures
Copying blob d62784b1512e done  
Copying config dd579b6b41 done  
Writing manifest to image destination
Storing signatures
!... Hello Podman World ...!

         .--"--.           
       / -     - \         
      / (O)   (O) \        
   ~~~| -=(,Y,)=- |         
    .---. /`  \   |~~      
 ~/  o  o \~~~~.----. ~~   
  | =(X)= |~  / (O (O) \   
   ~~~~~~~  ~| =(Y_)=-  |   
  ~~~~    ~~~|   U      |~~ 

Project:   https://github.com/containers/podman
Website:   https://podman.io
Desktop:   https://podman-desktop.io
Documents: https://docs.podman.io
YouTube:   https://youtube.com/@Podman
X/Twitter: @Podman_io
Mastodon:  @Podman_io@fosstodon.org

Install and Validate the Kubernetes Command Line Tool

kubectl is a command-line tool for interacting with Kubernetes clusters. It installs on Linux, macOS, and Windows systems. This lab demonstrates installing it on an Oracle Linux x86_64 system.

Note: The kubectl version must be within one minor version of the Kubernetes version deployed on the cluster. The steps provided here install the latest versions of both Kubernetes and kubectl.

  1. Download the latest version of kubectl.

    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

    Note: If a different version of kubectl is needed to match an older Kubernetes version you are using in kind, replace the $(curl -L -s https://dl.k8s.io/release/stable.txt) section of the command shown above with the specific version. For example, curl -LO "https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl".

  2. (Optional) Validate the downloaded binary file.

    1. Download the checksum file.

      curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
    2. Validate the kubectl download against the checksum file.

      echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

      Example Output:

      [oracle@ol-node01 ~]$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
      kubectl: OK
  3. Install kubectl.

    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

    Note: If you do not have 'sudo' privileges, please use the steps shown below to install kubectl into your ~/.local/bin directory.

    chmod +x kubectl
    mkdir -p ~/.local/bin
    mv ./kubectl ~/.local/bin/kubectl

    Then add ~/.local/bin to your $PATH
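For example, the PATH change can be made persistent and applied to the current shell like this (a sketch; adjust if your ~/.bashrc already manages PATH):

```shell
# Append ~/.local/bin to PATH for future shells and export it for this one.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
export PATH="$HOME/.local/bin:$PATH"

# Confirm the shell now searches that directory.
echo "$PATH" | grep -q "$HOME/.local/bin" && echo "PATH updated"
```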

  4. Verify the kubectl installation.

    kubectl version --client --output=yaml

    Example Output:

    [oracle@ol-node01 ~]$ kubectl version --client --output=yaml
    clientVersion:
      buildDate: "2024-04-17T17:36:05Z"
      compiler: gc
      gitCommit: 7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a
      gitTreeState: clean
      gitVersion: v1.30.0
      goVersion: go1.22.2
      major: "1"
      minor: "30"
      platform: linux/amd64
    kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
  5. Configure Your Shell.

    Running kind with rootless Podman requires declaring an environment variable. Add it to the Bash environment.

    cat << EOF >> ~/.bashrc
    
    export KIND_EXPERIMENTAL_PROVIDER=podman
    EOF
    
  6. Source the ~/.bashrc file to pick up the change.

    source ~/.bashrc
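To confirm the variable took effect in the current shell, a quick check:

```shell
# Print the provider kind will use; "unset" means ~/.bashrc was not sourced.
echo "provider: ${KIND_EXPERIMENTAL_PROVIDER:-unset}"
```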

Ensure the Host is Running with Control Group v2

kind requires the host to run with cgroup v2 and with controller delegation enabled. The following steps configure this.

  1. Check if cgroup v2 is already configured.

    If /sys/fs/cgroup/cgroup.controllers exists, the system is already running with cgroup v2.

    ls -al /sys/fs/cgroup

    Otherwise, enable cgroup v2 using grubby.

    sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
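The check and the fallback can be combined into one snippet, keyed off the same cgroup.controllers marker file (the grubby line is left commented so nothing changes until you opt in):

```shell
# Report which cgroup version the host is running; only enable v2 when needed.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    echo "cgroup v2 already enabled"
else
    echo "cgroup v1 detected; enable cgroup v2 and reboot"
    # sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
fi
```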
  2. Create a drop-in directory for the systemd user service.

    sudo mkdir /etc/systemd/system/user@.service.d
    
  3. Create a system-wide systemd user unit file to enable cgroup controller delegation.

    cat << EOF | sudo tee /etc/systemd/system/user@.service.d/delegate.conf > /dev/null
    [Service]
    Delegate=yes
    EOF
    
  4. Reload the systemd daemon.

    sudo systemctl daemon-reload
    
  5. Reboot the instance for the changes to take effect.

    sudo systemctl reboot

    Note: Wait a few minutes for the instance to restart.

  6. Reconnect to the ol-node01 instance using ssh.

Enable Access to Iptables for Containers

  1. Configure the iptables modules to load on system boot.

    cat << EOF | sudo tee /etc/modules-load.d/iptables.conf > /dev/null
    ip6_tables
    ip6table_nat
    ip_tables
    iptable_nat
    EOF
    
  2. Reload the kernel modules.

    sudo systemctl restart systemd-modules-load.service
  3. Verify that the kernel modules loaded.

    lsmod|grep -E "^ip_tables|^iptable_filter|^iptable_nat|^ip6"
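A loop over the same module list makes any missing module easy to spot; reading /proc/modules directly is an alternative to lsmod:

```shell
# Report each required iptables module as loaded or missing.
for m in ip_tables iptable_nat ip6_tables ip6table_nat; do
    if grep -q "^${m} " /proc/modules 2>/dev/null; then
        echo "${m}: loaded"
    else
        echo "${m}: missing"
    fi
done
```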
    

Download and Install kind

Install and configure kind.

  1. Download the latest version of kind.

    [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64

    Example Output:

    [oracle@ol-node01 ~]$ [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100    97  100    97    0     0   1010      0 --:--:-- --:--:-- --:--:--  1010
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 6245k  100 6245k    0     0  7691k      0 --:--:-- --:--:-- --:--:-- 17.3M
    
  2. Install kind.

    sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
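If you lack sudo privileges, kind can instead be installed into ~/.local/bin, mirroring the kubectl steps earlier (a sketch; assumes ./kind was just downloaded and ~/.local/bin is on your PATH):

```shell
# Install kind without root by placing it in the per-user bin directory.
chmod +x ./kind
mkdir -p ~/.local/bin
mv ./kind ~/.local/bin/kind
```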
  3. Confirm the kind version.

    kind version

    Example Output:

    [oracle@ol-node01 ~]$ kind version
    kind v0.22.0 go1.20.13 linux/amd64
    

Confirm kind Works

  1. Create a single-node kind cluster.

    kind create cluster

    Example Output:

    NOTE: If using the free lab environment, the output will look similar to that shown below due to missing font packages on the Luna Desktop. It DOES NOT affect the functionality of kind in any way.

    [oracle@ol-node01 ~]$ kind create cluster
    enabling experimental podman provider
    Creating cluster "kind" ...
     ��� Ensuring node image (kindest/node:v1.29.2) ������ 
     ��� Preparing nodes ���� �  
     ��� Writing configuration ������ 
     ��� Starting control-plane ��������� 
     ��� Installing CNI ������ 
     ��� Installing StorageClass ������ 
    Set kubectl context to "kind-kind"
    You can now use your cluster with:
    
    kubectl cluster-info --context kind-kind
    
    Have a nice day! ����

    If using your own system, the output should look similar to this:

    [oracle@ol-node01 ~]$ kind create cluster
    enabling experimental podman provider
    Creating cluster "kind" ...
     ✓ Ensuring node image (kindest/node:v1.29.2) 🖼
     ✓ Preparing nodes 📦  
     ✓ Writing configuration 📜 
     ✓ Starting control-plane 🕹️ 
     ✓ Installing CNI 🔌 
     ✓ Installing StorageClass 💾 
    Set kubectl context to "kind-kind"
    You can now use your cluster with:
    
    kubectl cluster-info --context kind-kind
    
    Have a nice day! 😅

    Note: It is possible to use a custom name to create the kind cluster by supplying the name to use as shown in this example: kind create cluster --name <name-of-cluster-to-create>.
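kind also accepts a declarative configuration file via the --config flag, which is how multi-node clusters are defined. A minimal sketch (the file name and cluster name here are examples):

```shell
# Write a two-node cluster definition (one control plane, one worker).
cat << EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF

# Then create the cluster from it:
# kind create cluster --name demo --config kind-config.yaml
```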

  2. Confirm that kubectl can connect to the kind-based Kubernetes cluster.

    kubectl cluster-info --context kind-kind

    Example Output:

    [oracle@ol-node01 ~]$ kubectl cluster-info --context kind-kind
    Kubernetes control plane is running at https://127.0.0.1:41691
    CoreDNS is running at https://127.0.0.1:41691/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    You now have a working kind-based Kubernetes cluster.

    The following 'Optional' sections show how to look inside the deployed containers to confirm a full Kubernetes implementation exists and how to create additional single-node kind-based Kubernetes clusters.

    If you don't require the 'Optional' sections, please click the "Next" button until you reach the final section ('Delete the kind Clusters').

(Optional) Verify the Kubernetes Environment

Run a few commands to confirm the deployment of a fully functional Kubernetes cluster.

  1. Use kind to check for the details of any running kind clusters.

    kind get clusters

    Example Output:

    [oracle@ol-node01 ~]$ kind get clusters
    enabling experimental podman provider
    kind

    This output confirms the cluster is running.

    Note: Because Podman is running the kind executable, many commands will include a reference to using the 'experimental podman provider', like this: enabling experimental podman provider.

  2. Check what Podman containers are running.

    podman ps -a

    Example Output:

    [oracle@ol-node01 ~]$ podman ps -a
    CONTAINER ID  IMAGE                                                                                           COMMAND               CREATED            STATUS                        PORTS                      NAMES
    7b27757cb244  quay.io/podman/hello:latest                                                                     /usr/local/bin/po...  About an hour ago  Exited (0) About an hour ago                             inspiring_allen
    fb2bb3f1e6d9  docker.io/kindest/node@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72                        17 minutes ago     Up 17 minutes                 127.0.0.1:41691->6443/tcp  kind-control-plane

    Note: If more nodes exist in the kind cluster, a separate container is listed for each 'node' of the kind cluster.

  3. Confirm kubectl knows about the newly created cluster.

    Notice in the previous output that the 'PORTS' heading shows that network traffic on the local machine is being redirected from Port number 41691 to Port number 6443 inside the container, as shown here: 127.0.0.1:41691->6443/tcp. Before proceeding, let's check that kubectl knows about the newly created cluster:

    grep server ~/.kube/config

    Example Output:

    [oracle@ol-node01 ~]$ grep server ~/.kube/config
        server: https://127.0.0.1:41691

    Note: The actual Port number is dynamically assigned each time a kind cluster starts.
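Because the port changes on every cluster start, scripts should read the endpoint from the kubeconfig rather than hard-code it. A sketch:

```shell
# Print the API server endpoint(s) recorded in the kubeconfig.
awk '/server:/ {print $2}' ~/.kube/config
```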

  4. Take a quick look at the kind cluster using the kubectl get nodes command.

    kubectl get nodes

    Example Output:

    [oracle@ol-node01 ~]$ kubectl get nodes
    NAME                 STATUS   ROLES           AGE   VERSION
    kind-control-plane   Ready    control-plane   81s   v1.29.2
  5. Confirm that a fully functional Kubernetes node was started.

    kubectl get pods -A

    Example Output:

    [oracle@ol-node01 ~]$ kubectl get pods -A
    NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
    kube-system          coredns-5d78c9869d-9xjd5                     1/1     Running   0          53s
    kube-system          coredns-5d78c9869d-bmpgs                     1/1     Running   0          53s
    kube-system          etcd-kind-control-plane                      1/1     Running   0          69s
    kube-system          kindnet-vtxz7                                1/1     Running   0          54s
    kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          69s
    kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          69s
    kube-system          kube-proxy-dq4t7                             1/1     Running   0          54s
    kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          69s
    local-path-storage   local-path-provisioner-6bc4bddd6b-z8z55      1/1     Running   0          53s
  6. Confirm that containerd (and not Podman) is running the kind cluster.

    kubectl get nodes -o wide

    Example Output:

    [oracle@ol-node01 ~]$ kubectl get nodes -o wide
    NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
    kind-control-plane   Ready    control-plane   45m   v1.29.2   10.89.0.3     <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-205.149.5.1.el9uek.x86_64   containerd://1.7.13
  7. Get a more in-depth look at the kind cluster.

    kubectl get all -A | more

    Example Output:

    [oracle@ol-node01 ~]$ kubectl get all -A | more
    NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
    kube-system          pod/coredns-5d78c9869d-5b5ft                     1/1     Running   0          18m
    kube-system          pod/coredns-5d78c9869d-h2psz                     1/1     Running   0          18m
    kube-system          pod/etcd-kind-control-plane                      1/1     Running   0          18m
    kube-system          pod/kindnet-b6x9g                                1/1     Running   0          18m
    kube-system          pod/kube-apiserver-kind-control-plane            1/1     Running   0          18m
    kube-system          pod/kube-controller-manager-kind-control-plane   1/1     Running   0          18m
    kube-system          pod/kube-proxy-lpjpj                             1/1     Running   0          18m
    kube-system          pod/kube-scheduler-kind-control-plane            1/1     Running   0          18m
    local-path-storage   pod/local-path-provisioner-6bc4bddd6b-hjs7m      1/1     Running   0          18m
    
    NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  18m
    kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   18m
    
    NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    kube-system   daemonset.apps/kindnet      1         1         1       1            1           kubernetes.io/os=linux   18m
    kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   18m
    
    NAMESPACE            NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    kube-system          deployment.apps/coredns                  2/2     2            2           18m
    local-path-storage   deployment.apps/local-path-provisioner   1/1     1            1           18m
    
    NAMESPACE            NAME                                                DESIRED   CURRENT   READY   AGE
    kube-system          replicaset.apps/coredns-5d78c9869d                  2         2         2       18m
    local-path-storage   replicaset.apps/local-path-provisioner-6bc4bddd6b   1         1         1       18m

(Optional) Check if Kubernetes is Running Inside the Container

The curious user may still be uncertain whether kind has deployed a full-featured Kubernetes cluster inside a container. The following steps outline how to confirm this.

  1. Connect to the container's BASH shell.

    podman exec -it kind-control-plane bash

    Example Output:

    [oracle@ol-node01 ~]$ podman exec -it kind-control-plane bash
    root@kind-control-plane:/#
  2. crictl is a command-line interface to inspect and debug container runtimes on a Kubernetes node. Use crictl to confirm that all the expected Kubernetes services are present inside the kind container.

    crictl ps

    Example Output:

    root@kind-control-plane:/# crictl ps
    CONTAINER           IMAGE               CREATED             STATE     NAME                      ATTEMPT   POD ID              POD
    c76a6f0931550       ce18e076e9d4b       25 minutes ago      Running   local-path-provisioner    0         4b540b5b2f209       local-path-provisioner-6bc4bddd6b-z8z55
    7e87927b14a75       ead0a4a53df89       25 minutes ago      Running   coredns                   0         4f418c623d824       coredns-5d78c9869d-9xjd5
    e7d5f3489f084       ead0a4a53df89       25 minutes ago      Running   coredns                   0         7c737a7820ccc       coredns-5d78c9869d-bmpgs
    4f4357edf61c1       b0b1fa0f58c6e       25 minutes ago      Running   kindnet-cni               0         9be3ac0e411f8       kindnet-vtxz7
    b785e2d63fb8e       9d5429f6d7697       25 minutes ago      Running   kube-proxy                0         fde29e49f009b       kube-proxy-dq4t7
    4d90a3a20fc04       205a4d549b94d       25 minutes ago      Running   kube-scheduler            0         ded2754976cd9       kube-scheduler-kind-control-plane
    bed955e049597       9f8f3a9f3e8a9       25 minutes ago      Running   kube-controller-manager   0         423ad427221b3       kube-controller-manager-kind-control-plane
    5335909a407cb       c604ff157f0cf       25 minutes ago      Running   kube-apiserver            0         51fb09697ae67       kube-apiserver-kind-control-plane
    051d8db7eac77       86b6af7dd652c       25 minutes ago      Running   etcd                      0         b9e063633caf6       etcd-kind-control-plane
  3. Confirm that the expected Kubernetes file structure also exists inside the container.

    ls /etc/kubernetes
    ls /etc/kubernetes/manifests
    

    Example Output:

    root@kind-control-plane:/# ls /etc/kubernetes
    admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
    root@kind-control-plane:/# ls /etc/kubernetes/manifests
    etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
  4. Exit out of the container.

    exit

(Optional) Create Another Single-Node kind Cluster

As indicated earlier, it is possible to run multiple kind clusters concurrently. The following section shows how.

  1. Create another kind cluster.

    Note that it is necessary to use the --name parameter this time because a default cluster using the default (kind-kind) name is present.

    kind create cluster --name newcluster

    Example Output:

    [oracle@ol-node01 ~]$ kind create cluster --name newcluster
    enabling experimental podman provider
    Creating cluster "newcluster" ...
     ✓ Ensuring node image (kindest/node:v1.29.2) 🖼
     ✓ Preparing nodes 📦  
     ✓ Writing configuration 📜 
     ✓ Starting control-plane 🕹️ 
     ✓ Installing CNI 🔌 
     ✓ Installing StorageClass 💾 
    Set kubectl context to "kind-newcluster"
    You can now use your cluster with:
    
    kubectl cluster-info --context kind-newcluster
    
    Thanks for using kind! 😊

    NOTE: This output shows the result if your environment contains the additional emoji font packages. The free lab environment's Luna Desktop does not include these and shows nondescript blocks instead.

  2. Confirm a new cluster has started.

    Notice that two clusters should be listed: kind and newcluster.

    kind get clusters

    Example Output:

    [oracle@ol-node01 ~]$ kind get clusters
    enabling experimental podman provider
    kind
    newcluster
  3. Check how many containers exist using Podman.

    podman ps

    Example Output:

    [oracle@ol-node01 ~]$ podman ps
    CONTAINER ID  IMAGE                                                                                           COMMAND     CREATED        STATUS        PORTS                      NAMES
    83707b3b7b4c  docker.io/kindest/node@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72              2 minutes ago  Up 2 minutes  127.0.0.1:35735->6443/tcp  kind-control-plane
    fa262f08de76  docker.io/kindest/node@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72              9 minutes ago  Up 9 minutes  127.0.0.1:41691->6443/tcp  newcluster-control-plane
  4. Check how many Kubernetes clusters are in the .kube/config file.

    grep server ~/.kube/config

    Example Output:

    [oracle@ol-node01 ~]$ grep server ~/.kube/config
        server: https://127.0.0.1:35735
        server: https://127.0.0.1:41691

(Optional) Connect to Both Clusters Using the Kubernetes Command Line Tool

  1. Confirm that kubectl shows both clusters.

    kubectl get nodes

    Example Output:

    [oracle@ol-node01 ~]$ kubectl get nodes
    NAME                       STATUS   ROLES           AGE   VERSION
    newcluster-control-plane   Ready    control-plane   27m   v1.29.2

    Is something amiss? No, this is expected behavior. If more than one cluster is present, use the --context flag to indicate which cluster to connect to.

    kubectl get nodes --context kind-kind

    and

    kubectl get nodes --context kind-newcluster
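kind prefixes every kubectl context name with kind-, so a loop can derive the context name from each cluster name and query every cluster in turn (a sketch; the cluster list shown is an example, not live output):

```shell
# Derive each cluster's kubectl context name and show how to query it.
clusters="kind newcluster"            # in practice: $(kind get clusters)
for c in $clusters; do
    echo "context: kind-${c}"
    # kubectl get nodes --context "kind-${c}"
done
```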

(Optional) Change the Default Cluster Context

Changing the default cluster context to one of the other cluster contexts is possible.

  1. Confirm which cluster is the current default.

    kubectl config get-contexts

    Example Output:

    [oracle@ol-node01 ~]$ kubectl config get-contexts
    CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
              kind-kind         kind-kind         kind-kind         
    *         kind-newcluster   kind-newcluster   kind-newcluster

    The asterisk (*) indicates the current default context.

  2. Switch the default context to kind-kind.

    kubectl config use-context kind-kind

    Example Output:

    [oracle@ol-node01 ~]$ kubectl config use-context kind-kind
    Switched to context "kind-kind".
  3. Confirm the default context now points to the kind-kind cluster.

    kubectl config get-contexts

    Example Output:

    [oracle@ol-node01 ~]$ kubectl config get-contexts
    CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
    *         kind-kind         kind-kind         kind-kind         
              kind-newcluster   kind-newcluster   kind-newcluster

Delete the kind Clusters

Deleting a cluster is as simple as creating one.

  1. Delete the default cluster.

    kind delete cluster

    Example Output:

    [oracle@ol-node01 ~]$ kind delete cluster
    enabling experimental podman provider
    Deleting cluster "kind" ...
    Deleted nodes: ["kind-control-plane"]
  2. (Optional) Delete the second cluster.

    Deleting the second cluster requires using the --name option.

    kind delete cluster --name newcluster
  3. Confirm there are no kind clusters running.

    kind get clusters

    Example Output:

    [oracle@ol-node01 ~]$ kind get clusters
    enabling experimental podman provider
    No kind clusters found.
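To tear down every remaining cluster in one pass, the cluster list can drive the delete command (a sketch; harmless when no clusters exist):

```shell
# Delete every kind cluster the provider knows about.
for c in $(kind get clusters 2>/dev/null); do
    kind delete cluster --name "$c"
done
```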

Summary

That completes our demonstration of how to install and run kind on rootless Podman. However, kind has many more features that go beyond the scope of this lab, such as these:

  • Use different Kubernetes versions
  • Define multiple control plane and worker nodes
  • Set up an Ingress controller
  • Define a MetalLB load balancer
  • Use IPv4 (the default), IPv6, or dual-stack clusters
  • Work with local or private registries (registries requiring authentication)
  • Work with an audit policy

Find more details in the upstream kind documentation.

In the meantime, many thanks for taking the time to try this lab.
