Use Gluster with Oracle Cloud Native Environment
Introduction
The Gluster Container Storage Interface module can be used to set up statically or dynamically provisioned persistent storage for Kubernetes applications using Gluster Storage for Oracle Linux in Oracle Cloud Native Environment.
The Gluster Container Storage Interface module creates a Kubernetes Glusterfs StorageClass provisioner to access existing storage on Gluster. Kubernetes uses the Glusterfs plug-in to provision Gluster volumes for use as Kubernetes PersistentVolumes. The Oracle Cloud Native Environment Platform API Server communicates with the Heketi API to provision and manage Gluster volumes using PersistentVolumeClaims. The Gluster volumes can be automatically destroyed when the PersistentVolumeClaims are deleted.
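Once the module is installed later in this lab, the provisioner is exposed through a Kubernetes StorageClass named hyperconverged. As a quick sketch (run on the control plane node after installation), you can inspect the provisioner object with:
kubectl get storageclass
kubectl describe storageclass hyperconverged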
In this example, we will create an integrated system where Kubernetes worker nodes provide persistent storage using Gluster on Oracle Cloud Native Environment.
Objectives
In this lab you will learn how to install and configure Gluster Storage for Oracle Linux on Oracle Cloud Native Environment to create storage volumes for Kubernetes applications.
Prerequisites
This section lists the host systems used to perform the steps in this tutorial. To be successful, you need:
5 Oracle Linux systems to use as:
- Operator node (ocne-operator)
- Kubernetes control plane node (ocne-control01)
- 3 Kubernetes worker nodes (ocne-worker01, ocne-worker02, ocne-worker03)
Systems should have a minimum of the following:
- Latest Oracle Linux 8 (x86_64) installed and running the Unbreakable Enterprise Kernel Release 6 (UEK R6)
The pre-configured setup on these systems is:
- An oracle user account with sudo privileges
- Passwordless SSH between each node
- Additional block storage attached to each worker node for Gluster
- Oracle Cloud Native Environment installed and configured
Set up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
This lab involves multiple systems, each of which requires different steps to be performed. It is recommended to start by opening five terminal windows or tabs and connecting to each node. This avoids you having to repeatedly log in and out. The nodes are:
- ocne-control01
- ocne-operator
- ocne-worker01
- ocne-worker02
- ocne-worker03
Important: The free lab environment deploys a fully installed Oracle Cloud Native Environment across the provided nodes. This deployment takes approximately 25-30 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
Open a terminal and connect via ssh to each node.
ssh oracle@<ip_address_of_ol_node>
Validate the Kubernetes Environment
(On the ocne-control01 node) Verify kubectl works.
kubectl get nodes
Example Output:
[oracle@ocne-control01 ~]$ kubectl get nodes
NAME             STATUS   ROLES                  AGE   VERSION
ocne-control01   Ready    control-plane,master   16m   v1.22.8+1.el8
ocne-worker01    Ready    <none>                 15m   v1.22.8+1.el8
ocne-worker02    Ready    <none>                 15m   v1.22.8+1.el8
ocne-worker03    Ready    <none>                 15m   v1.22.8+1.el8
Configure the Worker Nodes
Install and configure the Gluster service.
In this section, complete each of the tasks on each worker:
- ocne-worker01
- ocne-worker02
- ocne-worker03
Note: This approach avoids repetition in the documentation because the required actions are identical on each node.
Install the Gluster yum repository configuration.
sudo dnf install -y oracle-gluster-release-el8
Install the Gluster software.
sudo dnf install -y glusterfs-server glusterfs-client
Configure the firewall to allow traffic on the ports that are specifically used by Gluster.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/24
sudo firewall-cmd --permanent --zone=trusted --add-service=glusterfs
sudo firewall-cmd --reload
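Optionally, confirm the trusted zone now includes the glusterfs service and the node subnet:
sudo firewall-cmd --zone=trusted --list-all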
Enable Gluster encryption.
Set up the Gluster environment with TLS to encrypt management traffic between Gluster nodes. Rather than creating new certificates, reuse the X.509 certificates provided by Oracle Cloud Native Environment.
sudo cp /etc/olcne/configs/certificates/production/ca.cert /etc/ssl/glusterfs.ca
sudo cp /etc/olcne/configs/certificates/production/node.key /etc/ssl/glusterfs.key
sudo cp /etc/olcne/configs/certificates/production/node.cert /etc/ssl/glusterfs.pem
sudo touch /var/lib/glusterd/secure-access
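As an optional sanity check, confirm the copied key material is in place and view the certificate's subject and expiry (openssl is available by default on Oracle Linux 8):
ls -l /etc/ssl/glusterfs.* /var/lib/glusterd/secure-access
sudo openssl x509 -in /etc/ssl/glusterfs.pem -noout -subject -enddate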
Enable and start the Gluster service.
sudo systemctl enable --now glusterd.service
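Optionally, confirm the daemon is running on each worker before moving on:
sudo systemctl is-active glusterd.service
sudo gluster --version | head -1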
Configure the Control Plane Node
Configure Heketi, which will use the Gluster nodes to provision storage.
In this section, complete each of the tasks on the control plane node, ocne-control01.
Install the Gluster yum repository configuration.
sudo dnf install -y oracle-gluster-release-el8
Install the Heketi software.
sudo dnf install -y heketi heketi-client
Allow the required port through the firewall for Heketi.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/24
sudo firewall-cmd --permanent --zone=trusted --add-port=8080/tcp
sudo firewall-cmd --reload
Create the SSH authentication key for Heketi to use when communicating with worker nodes.
sudo ssh-keygen -m PEM -t rsa -b 4096 -q -f /etc/heketi/heketi_key -N ''
sudo cat /etc/heketi/heketi_key.pub | ssh -t -o StrictHostKeyChecking=no ocne-worker01 "sudo tee -a /root/.ssh/authorized_keys" > /dev/null 2>&1
sudo cat /etc/heketi/heketi_key.pub | ssh -t -o StrictHostKeyChecking=no ocne-worker02 "sudo tee -a /root/.ssh/authorized_keys" > /dev/null 2>&1
sudo cat /etc/heketi/heketi_key.pub | ssh -t -o StrictHostKeyChecking=no ocne-worker03 "sudo tee -a /root/.ssh/authorized_keys" > /dev/null 2>&1
sudo chown heketi:heketi /etc/heketi/heketi_key*
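Before continuing, you can optionally confirm that the key grants root access to each worker (assuming key-based root logins are allowed, as they are in the lab environment); each command should print the worker's hostname:
for node in ocne-worker01 ocne-worker02 ocne-worker03; do
  sudo ssh -i /etc/heketi/heketi_key -o StrictHostKeyChecking=no root@${node} hostname
done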
Configure the heketi.json file.
Warning: The sed commands below only work the first time they are run against an unmodified heketi.json file.
Make a backup of the heketi.json file.
sudo cp /etc/heketi/heketi.json /etc/heketi/heketi.json.bak
Update the use_auth setting to true.
sudo sed -i 's/"use_auth": false/"use_auth": true/' /etc/heketi/heketi.json
Define a passphrase for the admin and user accounts.
sudo sed -i '0,/"My Secret"/{s/"My Secret"/"Admin Password"/}' /etc/heketi/heketi.json
sudo sed -i '0,/"My Secret"/{s/"My Secret"/"User Password"/}' /etc/heketi/heketi.json
Change the Glusterfs executor from mock to ssh.
sudo sed -i 's/"executor": "mock"/"executor": "ssh"/' /etc/heketi/heketi.json
Define the sshexec properties.
sudo sed -i 's+"path/to/private_key"+"/etc/heketi/heketi_key"+' /etc/heketi/heketi.json sudo sed -i 's+"sshuser"+"root"+' /etc/heketi/heketi.json sudo sed -i 's+"Optional: ssh port. Default is 22"+"22"+' /etc/heketi/heketi.json sudo sed -i 's+"Optional: Specify fstab file on node. Default is /etc/fstab"+"/etc/fstab"+' /etc/heketi/heketi.json
Enable the service.
sudo systemctl enable --now heketi.service
Validate Heketi is working.
curl -w "\n" localhost:8080/hello
Example Output:
[oracle@ocne-control01 ~]$ curl -w "\n" localhost:8080/hello
Hello from Heketi.
Create a Heketi topology file.
This file declares the hostnames to use, the host IP addresses, and the block devices available to Gluster.
cat << 'EOF' | sudo tee /etc/heketi/topology.json > /dev/null
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "ocne-worker01" ],
              "storage": [ "10.0.0.160" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "ocne-worker02" ],
              "storage": [ "10.0.0.161" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "ocne-worker03" ],
              "storage": [ "10.0.0.162" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        }
      ]
    }
  ]
}
EOF
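The storage IP addresses and the /dev/sdb device above match the free lab environment. If your environment differs, you can confirm the correct values on each worker before loading the topology, for example:
lsblk               # identify the spare block device attached for Gluster
ip -br addr show    # confirm each worker's address on the 10.0.0.0/24 network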
Load the Heketi topology file.
Use the admin username and the passphrase defined earlier in the heketi.json file.
heketi-cli --user admin --secret "Admin Password" topology load --json=/etc/heketi/topology.json
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --user admin --secret "Admin Password" topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: 523081a5a77aa16ef0ea98d9be5720fd
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node ocne-worker01 ... ID: a3213791a722b8a8843e595fd5f631f4
        Adding device /dev/sdb ... OK
    Creating node ocne-worker02 ... ID: 403ff6243ac8e3f7f3bf99e9532f18f6
        Adding device /dev/sdb ... OK
    Creating node ocne-worker03 ... ID: 60e28bdc3fa17d76846aa5e8ea7c25e5
        Adding device /dev/sdb ... OK
List the nodes of known clusters.
heketi-cli --secret "Admin Password" --user admin node list
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --secret "Admin Password" --user admin node list
Id:403ff6243ac8e3f7f3bf99e9532f18f6     Cluster:523081a5a77aa16ef0ea98d9be5720fd
Id:60e28bdc3fa17d76846aa5e8ea7c25e5     Cluster:523081a5a77aa16ef0ea98d9be5720fd
Id:a3213791a722b8a8843e595fd5f631f4     Cluster:523081a5a77aa16ef0ea98d9be5720fd
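Optionally, display the full topology that Heketi now tracks (clusters, nodes, and devices), using the same admin passphrase:
heketi-cli --user admin --secret "Admin Password" topology info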
Install the Gluster Container Storage Interface Module
Gluster and Heketi are now set up on Oracle Cloud Native Environment and ready to use with the Gluster Container Storage Interface module. In this section, you install and validate the Gluster Container Storage Interface module.
On the ocne-operator node:
Add a Helm and Gluster module to the Oracle Cloud Native Environment configuration file.
cat << 'EOF' | sudo tee -a ~/myenvironment.yaml > /dev/null
      - module: helm
        name: myhelm
        args:
          helm-kubernetes-module: mycluster
      - module: gluster
        name: mygluster
        args:
          gluster-helm-module: myhelm
          helm-kubernetes-module: mycluster
          gluster-server-url: http://ocne-control01:8080
EOF
Create the modules.
olcnectl module create --config-file myenvironment.yaml
Validate the modules.
olcnectl module validate --config-file myenvironment.yaml
Install the modules.
olcnectl module install --config-file myenvironment.yaml
Note: This may take a few minutes to complete.
Show the installed modules.
olcnectl module instances --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --config-file myenvironment.yaml
INSTANCE              MODULE      STATE
mycluster             kubernetes  installed
mygluster             gluster     installed
myhelm                helm        installed
ocne-control01:8090   node        installed
ocne-worker01:8090    node        installed
ocne-worker02:8090    node        installed
ocne-worker03:8090    node        installed
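At this point the module has also created the hyperconverged StorageClass used for dynamic provisioning. As an optional check from the ocne-control01 node, confirm its provisioner and that its resturl matches the gluster-server-url configured above:
kubectl get storageclass hyperconverged -o yaml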
Create Gluster Volumes
In this section, complete each of the tasks on the control plane node, ocne-control01.
Create some example PersistentVolumeClaims.
for x in {0..5}; do
  cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-${x}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
done
Example Output:
persistentvolumeclaim/gluster-pvc-0 created
persistentvolumeclaim/gluster-pvc-1 created
persistentvolumeclaim/gluster-pvc-2 created
persistentvolumeclaim/gluster-pvc-3 created
persistentvolumeclaim/gluster-pvc-4 created
persistentvolumeclaim/gluster-pvc-5 created
Verify the PersistentVolumeClaims are dynamically filled by Gluster volumes.
Note: It may take a few moments for these to be assigned.
kubectl get pvc
Example Output:
[oracle@ocne-control01 ~]$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
gluster-pvc-0   Bound    pvc-6f38ff62-aea2-41b1-836a-3fc9796f067f   1Gi        RWX            hyperconverged   2m20s
gluster-pvc-1   Bound    pvc-2877b912-f8f0-403a-abc7-5c375d7dcd94   1Gi        RWX            hyperconverged   2m20s
gluster-pvc-2   Bound    pvc-9fd2d0e8-266e-4b7a-a38f-28ffb5a9ce53   1Gi        RWX            hyperconverged   2m20s
gluster-pvc-3   Bound    pvc-f656153c-af56-4eb5-a3c9-2d718ca0c79c   1Gi        RWX            hyperconverged   2m19s
gluster-pvc-4   Bound    pvc-80f7e971-527e-416f-9d72-836c5b831731   1Gi        RWX            hyperconverged   2m19s
gluster-pvc-5   Bound    pvc-864de23d-e030-44db-b145-a6e626090d5a   1Gi        RWX            hyperconverged   2m19s
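Each bound claim is backed by a dynamically created PersistentVolume. To see the cluster-side objects (and their reclaim policy, typically Delete for dynamically provisioned volumes), you can optionally run:
kubectl get pv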
Create an example Deployment that uses a PersistentVolumeClaim defined above.
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: demo-nginx
  name: demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: demo-nginx
  template:
    metadata:
      labels:
        run: demo-nginx
    spec:
      containers:
      - image: nginx
        name: demo-nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: demo-nginx-pvc
          mountPath: /usr/share/nginx/html
      volumes:
      - name: demo-nginx-pvc
        persistentVolumeClaim:
          claimName: gluster-pvc-1
EOF
Ensure that the example Gluster-backed nginx pod has started successfully.
kubectl get pod -l run=demo-nginx
Example Output:
[oracle@ocne-control01 ~]$ kubectl get pod -l run=demo-nginx
NAME                        READY   STATUS    RESTARTS   AGE
demo-nginx-9b86b6cb-hvdcr   1/1     Running   0          13s
Verify that the volume used is Glusterfs.
Important: Change the command to the name of the pod identified in the previous step.
kubectl exec demo-nginx-<replace> -ti -- mount -t fuse.glusterfs
Example Output:
[oracle@ocne-control01 ~]$ kubectl exec demo-nginx-9b86b6cb-hvdcr -ti -- mount -t fuse.glusterfs
10.0.0.162:vol_9d2f56d3d5b7b31dc92d7f60e302dbc0 on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
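To convince yourself the mounted volume is writable, you can optionally write a file through the pod and read it back (replace the pod name as in the previous step; the file lands on the replicated Gluster volume):
kubectl exec demo-nginx-<replace> -ti -- sh -c 'echo "Hello from Gluster" > /usr/share/nginx/html/index.html'
kubectl exec demo-nginx-<replace> -ti -- cat /usr/share/nginx/html/index.html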
At this point, the Kubernetes environment creates a Gluster volume when a PersistentVolumeClaim is created and deletes it when the PersistentVolumeClaim is deleted.
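If you want to watch the dynamic clean-up without touching the claims already in use, a throwaway claim works well. A minimal sketch, where the claim name gluster-pvc-tmp is arbitrary and the heketi-cli passphrase is the one defined earlier:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-tmp
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
heketi-cli --user admin --secret "Admin Password" volume list   # an extra volume appears once the claim is bound
kubectl delete pvc gluster-pvc-tmp
heketi-cli --user admin --secret "Admin Password" volume list   # the extra volume is removed shortly afterwards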
Summary
When a PersistentVolumeClaim is made, the Kubernetes API server running on the control plane node requests a volume from Heketi, which also runs on the control plane node. Heketi creates a Gluster volume on one of the three Gluster nodes (ocne-worker01, ocne-worker02, or ocne-worker03) and responds to the Kubernetes API server with the volume details. When directed to start the pod, the worker mounts the Gluster filesystem and presents it to the pod.
Note: Gluster volumes created by Heketi do not have I/O encryption enabled. The configuration above only enables encryption of management traffic.
(Optional) Enabling TLS in Heketi
When deploying in production, it may be required to encrypt communications between the Kubernetes API server and Heketi.
In this section, complete each of the tasks on the control plane node, ocne-control01.
Copy the Oracle Cloud Native Environment certificates to the Heketi folder.
sudo cp /etc/olcne/configs/certificates/production/node* /etc/heketi/
sudo chown heketi:heketi /etc/heketi/node*
Update the heketi.json file.
Insert the following after the port definition.
cat << EOF | sudo sed -i '/"port": "8080",/ r /dev/stdin' /etc/heketi/heketi.json
  "_enable_tls_comment": "Enable TLS in Heketi Server",
  "enable_tls": true,
  "_cert_file_comment": "Path to a valid certificate file",
  "cert_file": "/etc/heketi/node.cert",
  "_key_file_comment": "Path to a valid private key file",
  "key_file": "/etc/heketi/node.key",
EOF
Example Output:
{ "_port_comment": "Heketi Server Port Number", "port": "8080", "_enable_tls_comment": "Enable TLS in Heketi Server", "enable_tls": true, "_cert_file_comment": "Path to a valid certificate file", "cert_file": "/etc/heketi/node.cert", "_key_file_comment": "Path to a valid private key file", "key_file": "/etc/heketi/node.key", ...
Restart the service.
sudo systemctl restart heketi.service
Trust the example Certificate Authority.
sudo cp /etc/olcne/configs/certificates/production/ca.cert /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
Validate that Heketi is working over HTTPS.
curl -w "\n" https://localhost:8080/hello
Example Output:
Hello from Heketi
(Optional) Delete an Existing StorageClass Object
Note: The Gluster module already created a StorageClass named hyperconverged, and updating the parameters of an existing StorageClass is not permitted. You must delete the StorageClass before continuing.
kubectl delete storageclass hyperconverged
Example Output:
storageclass.storage.k8s.io "hyperconverged" deleted
Create the StorageClass object with an HTTPS resturl.
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperconverged
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "https://ocne-control01:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin"
EOF
Example Output:
storageclass.storage.k8s.io/hyperconverged created
To use further heketi-cli commands, you must first declare the HTTPS URL.
export HEKETI_CLI_SERVER=https://ocne-control01:8080
Heketi communications are now encrypted.
(Optional) Example Gluster Output
(Optional) (On control node) Define the Heketi server URL.
If you previously completed the (Optional) Enabling TLS in Heketi step, it is required to declare the updated URL.
export HEKETI_CLI_SERVER=https://ocne-control01:8080
(On control node) List volumes.
heketi-cli --user admin --secret "Admin Password" volume list
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --user admin --secret "Admin Password" volume list
Id:00c4c28f2e711daa31233d11dc6d7ba2    Cluster:523081a5a77aa16ef0ea98d9be5720fd    Name:vol_00c4c28f2e711daa31233d11dc6d7ba2
Id:10ec635cb65339542d8abc7b1a066b29    Cluster:523081a5a77aa16ef0ea98d9be5720fd    Name:vol_10ec635cb65339542d8abc7b1a066b29
Id:30bc4be77b0550a9df43225b858e8ab7    Cluster:523081a5a77aa16ef0ea98d9be5720fd    Name:vol_30bc4be77b0550a9df43225b858e8ab7
Id:948bbf1668b06424e9b1781a78919bab    Cluster:523081a5a77aa16ef0ea98d9be5720fd    Name:vol_948bbf1668b06424e9b1781a78919bab
Id:9d2f56d3d5b7b31dc92d7f60e302dbc0    Cluster:523081a5a77aa16ef0ea98d9be5720fd    Name:vol_9d2f56d3d5b7b31dc92d7f60e302dbc0
Id:f6d04bb04c26ab1e937f248e5cfe4130    Cluster:523081a5a77aa16ef0ea98d9be5720fd    Name:vol_f6d04bb04c26ab1e937f248e5cfe4130
(On control node) Show volume info.
Note: Change the volume id to the id of one of the volumes identified in the List volumes step.
heketi-cli --user admin --secret "Admin Password" volume info <replace>
Example Output:
[oracle@ocne-control01 ~]$ heketi-cli --user admin --secret "Admin Password" volume info 00c4c28f2e711daa31233d11dc6d7ba2
Name: vol_00c4c28f2e711daa31233d11dc6d7ba2
Size: 1
Volume Id: 00c4c28f2e711daa31233d11dc6d7ba2
Cluster Id: 523081a5a77aa16ef0ea98d9be5720fd
Mount: 10.0.0.161:vol_00c4c28f2e711daa31233d11dc6d7ba2
Mount Options: backup-volfile-servers=10.0.0.162,10.0.0.160
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
Snapshot Factor: 1.00
(On any worker node) Show the state of the Gluster volume from a worker's perspective.
Note: Change the volume name to the name of one of the volumes identified in the List volumes step.
sudo gluster volume status <replace>
Example Output:
[oracle@ocne-worker01 ~]$ sudo gluster volume status vol_10ec635cb65339542d8abc7b1a066b29
Status of volume: vol_10ec635cb65339542d8abc7b1a066b29
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.0.161:/var/lib/heketi/mounts/vg_
d30f47edef2f0fed556c67c8599f519e/brick_d6b6
9ba067b01110acaafd1b37e8b951/brick          49152     0          Y       97976
Brick 10.0.0.160:/var/lib/heketi/mounts/vg_
6526c527b6cc74975878c83bbd538a53/brick_ad22
a4e50a73cdeb11f8a11d5715174f/brick          49153     0          Y       98121
Brick 10.0.0.162:/var/lib/heketi/mounts/vg_
df7540be1a1b309f99f29d4451c6f960/brick_d5b1
6c5a34acc54fe9b7132d81b51935/brick          49154     0          Y       98120
Self-heal Daemon on localhost               N/A       N/A        Y       98111
Self-heal Daemon on ocne-worker02.pub.linux
virt.oraclevcn.com                          N/A       N/A        Y       97993
Self-heal Daemon on 10.0.0.162              N/A       N/A        Y       98109

Task Status of Volume vol_10ec635cb65339542d8abc7b1a066b29
------------------------------------------------------------------------------
There are no active volume tasks
Want to Learn More?
- Glusterfs StorageClass
- Gluster: Setting up Transport Layer Security
- Gluster: Setting up Heketi
- Gluster: Setting up Volumes