Use Persistent Volumes and Persistent Volume Claims with Oracle Cloud Native Environment


Introduction

Because container-based applications are stateless by default, any changes to files during the container lifetime are lost. However, if your application is stateful, the ability to persist changes to the filesystem becomes relevant. It is this requirement that has led to Kubernetes supporting many types of volumes. Many volume types that you may have encountered already are ephemeral (they only exist during the pod lifetime). Kubernetes also supports persistent volumes, which retain the data stored on them beyond the pod lifetime. Kubernetes supports several Persistent Volume types, and this lab demonstrates one of the simplest, the hostPath type, which stores data on the local filesystem of one of the cluster nodes.

This tutorial shows how to create and use Persistent Volumes and Persistent Volume Claims with Oracle Cloud Native Environment. Persistent Volumes and Persistent Volume Claims work together to provide persistence to container-based applications deployed onto Oracle Cloud Native Environment. You will start with Persistent Volumes and then move on to Persistent Volume Claims.

Objectives

In this lab, you will learn:

  • The difference between a Persistent Volume (PV) and a Persistent Volume Claim (PVC)
  • How to use Persistent Volumes and Persistent Volume Claims with Oracle Cloud Native Environment

Prerequisites

  • 3 Oracle Linux systems to use as:

    • Operator node (ocne-operator-01)
    • Kubernetes control plane node (ocne-control-01)
    • Kubernetes worker node (ocne-worker-01)
  • Each system should have the latest Oracle Linux 8 (x86_64) installed

  • This environment is pre-configured with:

    • An oracle user account (used during the installation) with sudo access
    • Key-based SSH, also known as password-less SSH, between the hosts
    • Installation of Oracle Cloud Native Environment and Oracle Cloud Infrastructure Cloud Controller Manager (oci-ccm) module

Deploy Oracle Cloud Native Environment

Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
  3. Change into the working directory.

    cd linux-virt-labs/ocne
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yaml
  5. Deploy the lab environment.

    ansible-playbook create_instance.yaml -e ansible_python_interpreter="/usr/bin/python3.6" -e use_oci_ccm=true

    The free lab environment requires the extra variable ansible_python_interpreter because it installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which places its modules under the python3.6 module directory.

    Important: Wait for the playbook to run successfully and reach the pause task. The Oracle Cloud Native Environment installation is complete at this stage of the playbook, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys.

Confirm the Storage Class Used on your Cluster

Different environments provide different storage classes. The free lab environment uses Oracle Cloud storage and has been pre-configured to work with oci-ccm (the Oracle Cloud Infrastructure Cloud Controller Manager).

  1. Open a terminal and connect via SSH to the ocne-control-01 node.

    ssh oracle@<ip_address_of_ocne-control-01>
  2. Confirm the storage class available in the free lab environment.

    kubectl get storageclass

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get storageclass
    NAME               PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    oci                oracle.com/oci                    Delete          Immediate              false                  6m
    oci-bv (default)   blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   true                   6m
    oci-bv-encrypted   blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   true                   6m

    Note: If you are using your own Kubernetes cluster, it is important to confirm which Storage Classes are available and to modify the storageClassName value in your Persistent Volume and Persistent Volume Claim accordingly. In the free lab environment, the value to use is oci-bv.
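
    If you want to identify the default storage class programmatically rather than by reading the table, a jsonpath query like the one below should print its name (this is standard kubectl usage, not specific to this lab):

    kubectl get storageclass -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}{"\n"}'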

Create a Persistent Volume

Enabling a container to persist its data between sessions is a two-stage process. First, the disk resource has to be defined (the Persistent Volume); then it needs to be assigned to the container (the Persistent Volume Claim). This section completes the first of these steps.

  1. Create the Persistent Volume YAML file.

    cat << 'EOF' > ~/pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: test-pv
    spec:
      storageClassName: oci-bv
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/tmp/test-pv"
    EOF
    

    The principal fields to note are:

    • accessModes: the value used here (ReadWriteOnce) means the volume can be mounted as read-write by a single node; multiple Pods running on that node can still share access to the volume (see the Kubernetes documentation on access modes for more information).
    • storage: 1Gi defines the size of the Persistent Volume. This example requests 1Gi (one gibibyte).
    • The storageClassName variable is set to oci-bv.
    • hostPath defines the type of Persistent Volume. The hostPath type only works on a single node, which makes it suitable only for testing; it allows the Persistent Volume to emulate networked storage (see the Kubernetes documentation for more information). In this example, the hostPath variable stores the Persistent Volume data at /tmp/test-pv. (Note: If you are using a different storage driver, you may need a different volume type than hostPath.)
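
    Optionally, before creating anything on the cluster, you can validate the manifest with a client-side dry run (a standard kubectl option, not specific to this lab). No objects are created; kubectl only reports what it would do:

    kubectl apply -f pv.yaml --dry-run=client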
  2. Deploy the Persistent Volume.

    kubectl apply -f pv.yaml

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl apply -f pv.yaml
    persistentvolume/test-pv created
  3. Confirm the Persistent Volume deployed.

    kubectl get pv

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get pv
    NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    test-pv   1Gi        RWO            Retain           Available           oci-bv                  2m2s
  4. (Optional) You can also view more details about the Persistent Volume.

    kubectl describe pv/test-pv

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl describe pv/test-pv
    Name:            test-pv
    Labels:          <none>
    Annotations:     <none>
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    oci-bv
    Status:          Available
    Claim:           
    Reclaim Policy:  Retain
    Access Modes:    RWO
    VolumeMode:      Filesystem
    Capacity:        1Gi
    Node Affinity:   <none>
    Message:         
    Source:
        Type:          HostPath (bare host directory volume)
    Path:          /tmp/test-pv
        HostPathType:  
    Events:            <none>

    Interesting Information: This illustrates how the fields defined in the YAML file map to the Persistent Volume. Did you spot some unused fields, such as Node Affinity? These come into play when your installation has multiple worker nodes, which is out of scope for this lab, as shown in the sketch below.
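
    For reference only, a Persistent Volume of the local type is where the Node Affinity field comes into play: it pins the volume to a specific node. A minimal sketch, written to a file for inspection rather than applied as part of this lab, might look like the following (the class name local-storage, the volume name, and the file name are hypothetical; the node name ocne-worker-01 comes from this lab environment):

    cat << 'EOF' > ~/local-pv-example.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv
    spec:
      storageClassName: local-storage
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      local:
        path: /tmp/example-local-pv
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - ocne-worker-01
    EOF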

Create a Persistent Volume Claim

A Persistent Volume cannot be used on its own; a Persistent Volume Claim must also be defined and created. This section completes the second of the two steps.

  1. Create the Persistent Volume Claim YAML file.

    The Persistent Volume Claim enables the deployed application to request physical storage.

    cat << 'EOF' > ~/pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: oci-bv
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF
    
  2. Create the Persistent Volume Claim.

    kubectl apply -f pvc.yaml

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl apply -f pvc.yaml
    persistentvolumeclaim/test-pvc created

    This command instructs Kubernetes to look for a Persistent Volume that matches the claim's requirements. Kubernetes tries to match the requested storageClass and, if a suitable volume is located, binds the claim to that volume.

  3. Confirm the Persistent Volume Claim created.

    kubectl get pvc/test-pvc

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get pvc/test-pvc
    NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    test-pvc   Pending                                      oci-bv         1m43s
  4. (Optional) Retrieve more detail related to the Persistent Volume Claim.

    kubectl describe pvc/test-pvc

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl describe pvc/test-pvc 
    Name:          test-pvc
    Namespace:     default
    StorageClass:  oci-bv
    Status:        Pending
    Volume:        
    Labels:        <none>
    Annotations:   <none>
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      
    Access Modes:  
    VolumeMode:    Filesystem
    Used By:       <none>
    Events:
      Type    Reason                Age                   From                         Message
      ----    ------                ----                  ----                         -------
  Normal  WaitForFirstConsumer  11s (x22 over 5m22s)  persistentvolume-controller  waiting for first consumer to be created before binding
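
    The Pending status is expected at this point: the oci-bv storage class uses the WaitForFirstConsumer volume binding mode, so the claim does not bind until a Pod that uses it is scheduled. You can confirm the binding mode with a command such as:

    kubectl get storageclass oci-bv -o jsonpath='{.volumeBindingMode}{"\n"}'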

    All that is left now is to test that this delivers persistence by creating a Pod that uses this Persistent Volume Claim.

Create a Stateful Application Deployment

Next, you will create a Pod definition YAML file to deploy a stateful application using the Persistent Volume Claim.

  1. Create a new Kubernetes Pod YAML file to deploy Nginx on Oracle Cloud Native Environment.

    cat << 'EOF' > ~/pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-pod
    spec:
      containers:
        - name: pvc-pod-container
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/tmp/test-pv"
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: test-pvc
    EOF
    

    Where:

    • The volumeMounts variable defines the path mounted inside the deployed Pod. This example mounts the volume at /tmp/test-pv.
    • The volumes variable defines which Persistent Volume Claim the Pod should use. This example uses test-pvc.
  2. Deploy the Pod.

    kubectl apply -f pod.yaml

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl apply -f pod.yaml 
    pod/pvc-pod created
  3. Confirm the Pod deployed and is running.

    kubectl get pod

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get pod
    NAME      READY   STATUS    RESTARTS   AGE
    pvc-pod   1/1     Running   0          1m

    Notice that the STATUS column confirms that the Pod deployed and is running. The next stage is to test that the Persistent Volume is mounted and accessible.
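
    If the Pod still shows a STATUS of ContainerCreating while the volume attaches, you can wait for it to become Ready with a command like the following (a standard kubectl option; adjust the timeout as needed):

    kubectl wait --for=condition=Ready pod/pvc-pod --timeout=120s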

  4. (Optional) If the Pod deploys successfully, the Persistent Volume Claim should have claimed the Persistent Volume.

    kubectl get pv
    kubectl get pvc
    

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get pv
    NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
    test-pv   1Gi        RWO            Retain           Bound    default/test-pvc   oci-bv                  20m
    [oracle@ocne-control-01 ~]$ kubectl get pvc
    NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    test-pvc   Bound    test-pv   1Gi        RWO            oci-bv         20m

    Notice that the STATUS column in the Persistent Volume and the Persistent Volume Claim confirm they are Bound. Additionally, the CLAIM column for the Persistent Volume shows the Persistent Volume Claim (test-pvc) you created earlier has Bound to the Persistent Volume.
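
    If you prefer a scripted check over reading the tables, a jsonpath query can print the volume that the claim bound to; it should output test-pv:

    kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}{"\n"}'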

Log into the deployed Pod and test the Persistent Volume

These steps show how to get a session shell into the running container and confirm that data written to the mounted volume is stored outside of the Pod. Then you will delete the Pod and recreate it. If persistence works, the data remains accessible to the freshly deployed Pod.

  1. Get a shell inside the Pod.

    kubectl exec -it pod/pvc-pod -- /bin/bash

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl exec -it pod/pvc-pod -- /bin/bash
    root@pvc-pod:/# 

    Note: Remember that you are now working in a shell inside the Pod you just deployed.

  2. Check whether there are any files in the mounted volume yet.

    ls /tmp/test-pv

    As expected, nothing returns.

  3. Write a file to the volume.

    echo "bar" > /tmp/test-pv/foo
  4. Confirm the file shows in the volume.

    ls /tmp/test-pv

    Example Output:

    root@pvc-pod:~# ls /tmp/test-pv
    foo
  5. Exit the Pod shell environment.

    exit

    Example Output:

    root@pvc-pod:~# exit
    exit
    [oracle@ocne-control-01 ~]$ 

    The file you just created in the volume is now stored independently of the Pod. Next, you will confirm this by deleting the Pod and recreating it.

  6. Delete the Pod.

    kubectl delete pod/pvc-pod

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl delete pod/pvc-pod 
    pod "pvc-pod" deleted
  7. Deploy the Pod again.

    kubectl apply -f pod.yaml

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl apply -f pod.yaml
    pod/pvc-pod created
  8. Shell into the Pod again.

    kubectl exec -it pod/pvc-pod -- /bin/bash
  9. Confirm the foo file you created earlier is still accessible.

    cat /tmp/test-pv/foo

    Example Output:

    root@pvc-pod:/# cat /tmp/test-pv/foo
    bar
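
    As an aside, once you exit the Pod shell, you can run the same check from the ocne-control-01 node without an interactive session by passing the command directly to kubectl exec:

    kubectl exec pod/pvc-pod -- cat /tmp/test-pv/foo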

Where is the Data Written to?

Important Note: Unless otherwise stated, all the following steps run on the ocne-worker-01 node.

A primary reason to store files outside of any container is that the information they contain is valuable, whether to your business or to you personally. You therefore need to know where the data is located in order to manage it, for example, to ensure backups work. The following steps show how to find the data in the free lab deployment.

Because the Pod deployment executes on the ocne-worker-01 node, any files are saved to the local disk on that node (ocne-worker-01). The next steps show how to confirm they are stored there.
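
Before switching to the worker node, you can confirm from ocne-control-01 which node the Pod was scheduled on; the NODE column of the wide output should show ocne-worker-01:

    kubectl get pod pvc-pod -o wide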

  1. Open a new terminal and connect via SSH to the ocne-worker-01 node.

    ssh oracle@<ip_address_of_ocne-worker-01>
  2. Check the /tmp/test-pv directory.

    ls /tmp/test-pv

    Example Output:

    [oracle@ocne-worker-01 ~]$ ls /tmp/test-pv/
    foo
  3. Confirm the foo file holds the same content you entered from inside the Pod.

    cat /tmp/test-pv/foo

    Example Output:

    [oracle@ocne-worker-01 ~]$ cat /tmp/test-pv/foo 
    bar
  4. Success! This confirms that the foo file you created from within the Pod was saved externally, and that the Persistent Volume and Persistent Volume Claim provided the mechanism for the Pod to do it.
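
    If you want to tidy up the objects created during this lab, you can delete them from the ocne-control-01 node in reverse order. Note that because the Persistent Volume's reclaim policy is Retain, the data under /tmp/test-pv on ocne-worker-01 stays on disk until you remove it yourself:

    kubectl delete pod/pvc-pod
    kubectl delete pvc/test-pvc
    kubectl delete pv/test-pv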

Summary

This concludes the walkthrough, which introduced Persistent Volumes and Persistent Volume Claims and showed how to use them to gain more flexibility over how you manage your data.
