Run KubeVirt on Oracle Cloud Native Environment
Introduction
KubeVirt is a virtualization technology for creating and managing virtual machines in Kubernetes. Administrators create these virtual machines using the `kubectl`
command and Kubernetes custom resource definitions (CRDs). As with any workload within Kubernetes, a virtual machine requires persistent storage to maintain its state. Hence our need for Rook and Ceph.
Rook is a cloud-native storage orchestrator platform that enables Ceph storage for our Kubernetes cluster. Rook deploys as a Kubernetes operator inside a Kubernetes cluster and automates the tasks to provision and de-provision Ceph-backed persistent storage using the Kubernetes Container Storage Interface (CSI).
While Ceph allows the creation of block and object storage, there also exists a shared file system storage. This type uses a CephFilesystem (CephFS) to mount a shared POSIX (Portable Operating System Interface) compliant folder into one or more pods. This storage type is similar to NFS (Network File System) shared storage or CIFS (Common Internet File System) shared folders.
This tutorial guides users on deploying KubeVirt with Ceph storage managed by Rook on Oracle Cloud Native Environment.
Objectives
At the end of this tutorial, you should be able to do the following:
- Install the Rook operator
- Configure Ceph storage
- Install KubeVirt
- Create and Deploy a VM
Prerequisites
- A running Oracle Cloud Native Environment cluster consisting of an operator node, two control plane nodes, and three worker nodes.
- Each worker node contains an attached unformatted block volume
- An available container registry to store virtual machine container images.
See the Deploy Oracle Cloud Native Environment tutorial for details on installing Oracle Cloud Native Environment.
Verify the Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
Information: The free lab environment deploys Oracle Cloud Native Environment on the provided nodes. This deployment takes approximately 50-55 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
Open a terminal and connect via ssh to the devops node.
ssh oracle@<ip_address_of_devops_node>
Set the terminal encoding to UTF-8.
On the Terminal menu, click Terminal, Set Encoding, Unicode, UTF-8.
Get a list of Kubernetes nodes.
kubectl get nodes -o wide
Verify the additional block volume exists on the worker nodes.
ssh ocne-worker-01 lsblk -f /dev/sdb
In the free lab environment, the block volume attaches as `sdb`, and the `FSTYPE` column appears empty, confirming no file system exists on the disk. Repeat for ocne-worker-02 and ocne-worker-03.
Deploy the Rook Operator
The Rook operator is responsible for deploying, configuring, provisioning, scaling, upgrading, and monitoring Ceph storage within the Kubernetes cluster.
Install the Module
Open a new terminal and connect via ssh to the ocne-operator node.
ssh oracle@<ip_address_of_ocne-operator_node>
Create the Rook operator.
olcnectl module create --environment-name myenvironment --module rook --name myrook --rook-kubernetes-module mycluster
Install the Rook operator.
olcnectl module install --environment-name myenvironment --name myrook
Verify the Module
Switch to the existing terminal session for the devops node.
Verify the Rook operator is running.
kubectl -n rook get pod
`-n` is the short option for the `--namespace` option.
Example Output:
[oracle@devops-node ~]$ kubectl -n rook get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-69bc6598bb-bqvll   1/1     Running   0          2m41s
Create the Ceph Cluster
A Ceph cluster is a distributed storage system providing file, block, and object storage at scale for our Kubernetes cluster.
View the cluster CRD.
less cluster.yaml
Oracle Cloud Native Environment defaults to placing the Rook operator in the `rook` namespace and pulls the Ceph image from the Oracle Container Registry. The cluster CRD defines three monitor daemons (`mon`) for the Ceph distributed file system to allow for a quorum. These monitor daemons get distributed evenly across the three worker nodes because `allowMultiplePerNode` is set to `false`.
Apply the Ceph cluster configuration.
kubectl apply -f cluster.yaml
Example Output:
[oracle@devops-node ~]$ kubectl apply -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created
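For reference, the cluster CRD applied above might resemble this minimal sketch. The field values here are illustrative only (in particular the Ceph image path and tag are assumptions); the `cluster.yaml` shipped with the lab is authoritative.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook
spec:
  cephVersion:
    # Hypothetical Oracle Container Registry image path; check cluster.yaml for the real one
    image: container-registry.oracle.com/olcne/ceph:v17.2.5
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                       # three monitors for quorum
    allowMultiplePerNode: false    # spreads the mons across the three worker nodes
  storage:
    useAllNodes: true
    useAllDevices: true            # picks up the unformatted /dev/sdb block volumes
```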
Verify the cluster is running.
watch kubectl -n rook get pod
Example Output:
[oracle@devops-node ~]$ kubectl -n rook get pod
NAME                                                       READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-fn69v                                     2/2     Running     0          4m51s
csi-cephfsplugin-p9xw2                                     2/2     Running     0          4m51s
csi-cephfsplugin-provisioner-864d9fd857-65tnz              5/5     Running     0          4m51s
csi-cephfsplugin-provisioner-864d9fd857-mgzct              5/5     Running     0          4m51s
csi-cephfsplugin-xzw9k                                     2/2     Running     0          4m51s
csi-rbdplugin-2nk8n                                        2/2     Running     0          4m51s
csi-rbdplugin-f2nkd                                        2/2     Running     0          4m51s
csi-rbdplugin-ffqkr                                        2/2     Running     0          4m51s
csi-rbdplugin-provisioner-6966cf469c-fjf8h                 5/5     Running     0          4m51s
csi-rbdplugin-provisioner-6966cf469c-zkjsk                 5/5     Running     0          4m51s
rook-ceph-crashcollector-ocne-worker-01-84b886c998-v8774   1/1     Running     0          2m49s
rook-ceph-crashcollector-ocne-worker-02-699dc4b447-77jwb   1/1     Running     0          2m19s
rook-ceph-crashcollector-ocne-worker-03-668dcbc7c6-v6hrs   1/1     Running     0          2m40s
rook-ceph-mgr-a-794c487d99-z65lq                           1/1     Running     0          2m51s
rook-ceph-mon-a-76b99bd5f5-zxk8s                           1/1     Running     0          4m19s
rook-ceph-mon-b-5766869646-vlj4h                           1/1     Running     0          3m24s
rook-ceph-mon-c-669fc577bc-xc6tp                           1/1     Running     0          3m10s
rook-ceph-operator-69bc6598bb-bqvll                        1/1     Running     0          22m
rook-ceph-osd-0-67ffc8c8dd-brtnp                           1/1     Running     0          2m20s
rook-ceph-osd-1-7bdb876b78-t5lw8                           1/1     Running     0          2m20s
rook-ceph-osd-2-8df6d884-c94zl                             1/1     Running     0          2m19s
rook-ceph-osd-prepare-ocne-worker-01-jx749                 0/1     Completed   0          2m29s
rook-ceph-osd-prepare-ocne-worker-02-mzrg2                 0/1     Completed   0          2m29s
rook-ceph-osd-prepare-ocne-worker-03-m7jz7                 0/1     Completed   0          2m29s
Wait for the cluster creation to complete and look like the sample output. This action can take 5-10 minutes or longer in some cases. The `STATUS` for each item shows as `Running` or `Completed`.
Exit the `watch` command using `Ctrl-C`.
Confirm deployment of the Ceph cluster.
kubectl -n rook get cephcluster
Example Output:
[oracle@devops-node ~]$ kubectl -n rook get cephcluster
NAME        DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL   FSID
rook-ceph   /var/lib/rook     3          3m49s   Ready   Cluster created successfully   HEALTH_OK              e14b4ffc-3491-49a5-82b3-fee488fb3838
Check the State of the Ceph Cluster
The Rook toolbox is a container built with utilities to help debug and test Rook.
View the toolbox CRD.
less toolbox.yaml
The toolbox CRD defines a single `replica`, or instance, of the Ceph toolbox container to deploy.
Apply the tools Pod Deployment.
kubectl apply -f toolbox.yaml
Example Output:
[oracle@devops-node ~]$ kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
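A single-replica toolbox Deployment of the kind described above might be sketched as follows. The image path and command are assumptions for illustration; the lab's `toolbox.yaml` is authoritative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook
spec:
  replicas: 1                       # one toolbox instance is enough for debugging
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      containers:
        - name: rook-ceph-tools
          # Hypothetical image reference; use the same Ceph image as the cluster
          image: container-registry.oracle.com/olcne/ceph:v17.2.5
          command: ["/bin/bash", "-c", "sleep infinity"]
```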
Verify the tools Pod successfully deploys.
kubectl -n rook rollout status deploy/rook-ceph-tools
Example Output:
[oracle@devops-node ~]$ kubectl -n rook rollout status deploy/rook-ceph-tools
deployment "rook-ceph-tools" successfully rolled out
View the status of the Ceph cluster.
kubectl -n rook exec -it deploy/rook-ceph-tools -- ceph status
Example Output:
[oracle@devops-node ~]$ kubectl -n rook exec -it deploy/rook-ceph-tools -- ceph status
  cluster:
    id:     8a12ac76-0e2e-48cc-b0cf-1498535a1c3c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 4m)
    mgr: a(active, since 4m)
    osd: 3 osds: 3 up (since 3m), 3 in (since 4m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   65 MiB used, 150 GiB / 150 GiB avail
    pgs:     1 active+clean
The output shows that the Ceph cluster has reached quorum and is active and healthy now that the deployment is complete.
Create the Ceph Filesystem Storage
View the Filesystem CRD.
less filesystem.yaml
The CRD creates the metadata pool and a single data pool, each with a replication factor of three. For more information, see the shared filesystem documentation in the upstream Rook project.
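A CephFilesystem CRD matching that description might look like this sketch. The filesystem and pool names (`myfs`, `replicated`) are assumptions; check the lab's `filesystem.yaml` for the actual values.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs          # hypothetical filesystem name
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3         # metadata replicated three times
  dataPools:
    - name: replicated
      replicated:
        size: 3       # single data pool, also replicated three times
  metadataServer:
    activeCount: 1    # one active mds daemon
    activeStandby: true   # plus one hot-standby mds
```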
Apply the Ceph Filesystem configuration.
kubectl apply -f filesystem.yaml
Confirm the Filesystem Pod is running.
kubectl -n rook get pod -l app=rook-ceph-mds
The `mds` pods monitor the file system namespace and show a `STATUS` of `Running` when done configuring the file system.
Check the status of the Filesystem and the existence of the `mds` service.
kubectl -n rook exec -it deploy/rook-ceph-tools -- ceph status
Example Output:
[oracle@devops-node ~]$ kubectl -n rook exec -it deploy/rook-ceph-tools -- ceph status
  cluster:
    id:     c83b0a5a-30d4-42fd-a28c-fc68a605a23d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 39m)
    mgr: a(active, since 38m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 38m), 3 in (since 38m)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 65 pgs
    objects: 24 objects, 579 KiB
    usage:   68 MiB used, 150 GiB / 150 GiB avail
    pgs:     65 active+clean

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
Notice the `mds` line shows one daemon `up` and another in `hot standby`.
View the StorageClass CRD.
less storageclass.yaml
Note that we set the `provisioner` prefix to match the Rook operator namespace of `rook`.
Provision the Storage.
kubectl apply -f storageclass.yaml
Once we create the storage, it's ready for Kubernetes deployments to consume.
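A CephFS StorageClass of the shape just described might be sketched as below. The class name, filesystem name, and pool name are assumptions, and the CSI secret parameters a real StorageClass also needs are elided here; the lab's `storageclass.yaml` is authoritative.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs            # hypothetical class name
# The provisioner prefix matches the Rook operator namespace ("rook")
provisioner: rook.cephfs.csi.ceph.com
parameters:
  clusterID: rook              # namespace the Rook cluster runs in
  fsName: myfs                 # hypothetical CephFilesystem name
  pool: myfs-replicated        # hypothetical data pool name
  # ...CSI provisioner/node secret name and namespace parameters elided...
reclaimPolicy: Delete
```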
Deploy KubeVirt
Install the Module
Switch to the ocne-operator terminal session.
Create the KubeVirt module.
olcnectl module create --environment-name myenvironment --module kubevirt --name mykubevirt --kubevirt-kubernetes-module mycluster
Install the KubeVirt module.
olcnectl module install --environment-name myenvironment --name mykubevirt
Verify the Module
Switch to the devops node terminal session.
Verify the KubeVirt deployments are running in the `kubevirt` namespace.
watch kubectl get deployments -n kubevirt
Example Output:
[oracle@devops-node ~]$ kubectl get deployments -n kubevirt
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
virt-api          2/2     2            2           5m16s
virt-controller   2/2     2            2           4m50s
virt-operator     2/2     2            2           5m44s
Wait for the kubevirt deployment to complete and look like the sample output.
Exit the `watch` command using `Ctrl-C`.
Install the `virtctl` command-line tool.
This utility provides access to the virtual machine's serial and graphical consoles, as well as convenient shortcuts for these features:
- Starting and stopping the virtual machine
- Live migrations
- Uploading virtual machine disk images
sudo dnf install -y virtctl
Build a Virtual Machine Container Image
KubeVirt can pull a containerized image from a container registry when deploying virtual machine instances. These `containerdisks` should be based on `scratch` and have the QCOW2 disk placed into the `/disk` directory of the container, readable by the `qemu` user, which has a UID of 107. The `scratch` image is the smallest image for containerization and doesn't contain any files or folders.
Download the Oracle Linux cloud image in QCOW format.
curl -JLO https://yum.oracle.com/templates/OracleLinux/OL9/u3/x86_64/OL9U3_x86_64-kvm-b211.qcow
Create a Containerfile to build a Podman image from the QCOW image.
cat << EOF > Containerfile
FROM scratch
ADD --chown=107:107 OL9U3_x86_64-kvm-b211.qcow /disk/
EOF
Build the image with Podman.
podman build . -t oraclelinux-cloud:9.2-terminal
Where:
- `oraclelinux-cloud` is the image name
- `9.2-terminal` is the image tag, where `9.2` is the release version and `terminal` indicates the image is CLI only.
Verify the image exists on the local server.
podman images
Example Output:
[oracle@devops-node ~]$ podman images
REPOSITORY                    TAG            IMAGE ID       CREATED              SIZE
localhost/oraclelinux-cloud   9.2-terminal   0d96b825b3d4   About a minute ago   561 MB
Gather the Oracle Container Registry Repository Credentials
The tables in this section provide example values we'll use in subsequent steps in this lab. The `fra` example is the region key for the Germany Central (Frankfurt) region. If your region is US East (Ashburn), the region key is `iad`. Refer to the Regions and Availability Domains documentation for a complete table listing available region keys.
Registry Data | Lab placeholder | Notes |
---|---|---|
REGISTRY_TYPE | private | Displayed in the repository info panel as "Access" |
REPOSITORY_NAME | demo/oraclelinux-cloud | Displayed in the "Repositories and images" list-of-values |
OCIR_INSTANCE | fra.ocir.io | Use <region>.ocir.io |
In the free lab environment, we configure the repository as private and with the name `demo/oraclelinux-cloud`.
See Pushing Images Using the Docker CLI if needed. This procedure describes the login process required to push images to the Container Registry using the CLI.
Gather your login details.
You'll need a username and authentication token to access the container registry. The free lab environment provides these details on the Luna Lab tab of the Luna Lab page. The table shows examples of these values.
Credential | Lab placeholder |
---|---|
LUNA_USERNAME | luna.user@14ad03fa-49d8-4e1b-b934-bb043f9db4b9 |
OCIR_USERNAME | oracleidentitycloudservice/luna.user@14ad03fa-49d8-4e1b-b934-bb043f9db4b9 |
LUNA_TOKEN | 7Q9jSeNf7gMA:q>pKPh; |
Create environment variables similar to those below using the gathered credentials.
$ export LUNA_USERNAME="<luna_ephemeral_account_username>"
$ export OCIR_USERNAME="oracleidentitycloudservice/$LUNA_USERNAME"
$ export LUNA_TOKEN="<luna_oci_auth_token>"
Gather your Namespace and OCIR instance.
The Resources tab in the Luna Lab page lists the namespace in the free lab environment. The table shows an example of this value.
Credential | Lab placeholder |
---|---|
OCIR_NAMESPACE | frn7gzeg0xzn |
OCIR_INSTANCE | fra.ocir.io |
Create environment variables similar to those below using the gathered items.
$ export OCIR_NAMESPACE="<luna_container_registry_namespace>"
Create environment variables that we'll use in the `podman login` command.
export USER="$OCIR_NAMESPACE/$OCIR_USERNAME"
export TOKEN="$LUNA_TOKEN"
Login to the container registry.
podman login -u $USER -p $TOKEN fra.ocir.io --verbose
The `--verbose` flag shows where podman creates the auth file for this login. We'll use this information later in the lab.
Push the Virtual Machine Image
In this example, Oracle Container Registry stores the final repository URIs as:
docker://OCIR_INSTANCE/REGISTRY_NAMESPACE/REPOSITORY_NAME/IMAGE:TAG
Podman can push local images to remote registries without tagging the image beforehand.
Push the local `oraclelinux-cloud:9.2-terminal` image.
podman push oraclelinux-cloud:9.2-terminal docker://fra.ocir.io/$OCIR_NAMESPACE/demo/oraclelinux-cloud:9.2-terminal
Example Output:
[oracle@devops-node ~]$ podman push oraclelinux-cloud:9.2-terminal docker://fra.ocir.io/frn7gzeg0xzn/demo/oraclelinux-cloud:9.2-terminal
Getting image source signatures
Copying blob ff65b0a12df1 done
Copying config 5891207960 done
Writing manifest to image destination
Storing signatures
Create a Kubernetes Secret Based on the Registry Credentials
Per the Kubernetes upstream documentation, a Secret is an object that contains a small amount of sensitive data, such as a password, a token, or a key. This Secret holds the credentials required to pull the container image from the registry.
Important: The Secret obscures the data using base64 encoding and does not encrypt it. Therefore, anyone with API access or the ability to create a Pod in a namespace can access and decode the credentials.
See Information security for Secrets in the upstream documentation for more details.
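The warning above can be demonstrated directly: base64 is an encoding, not encryption, so anyone who can read the Secret can recover the original credentials. The credential string below is a made-up example.

```shell
# A Kubernetes Secret only base64-encodes its data. Decoding is trivial
# for anyone with read access. The value below is hypothetical.
creds='myuser:mytoken'
encoded=$(printf '%s' "$creds" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
printf '%s\n' "$decoded"   # prints the original credentials back
```

The same round trip applies to a real Secret: decoding the `.dockerconfigjson` field of `ocirsecret` reveals the registry username and token in plain text.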
Create the OCIR credentials Secret.
kubectl create secret docker-registry ocirsecret --docker-server=fra.ocir.io --docker-username=$USER --docker-password=$TOKEN --docker-email=jdoe@example.com
See Pulling Images from Registry during Deployment in the Oracle Cloud Infrastructure documentation for more details.
Inspect the Secret.
kubectl get secret ocirsecret --output=yaml
Create a Virtual Machine with Persistent Storage
KubeVirt allows associating a PersistentVolumeClaim to a VM disk in either `filesystem` or `block` mode. In the free lab environment, we'll use `filesystem` mode. KubeVirt requires placing a disk named `disk.img` in the root of the PersistentVolumeClaim's filesystem, owned by the user ID `107`. If we do not create this in advance, KubeVirt will make it at deployment time. See KubeVirt's upstream persistentVolumeClaim documentation for more details.
View the PersistentVolumeClaim CRD file.
less pvc.yaml
The PVC CRD defines a read-write-many volume of 1 GiB from our Ceph Filesystem storage.
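That definition might be sketched as the following manifest. The claim name is a hypothetical placeholder, and the StorageClass name assumes the one provisioned earlier; the lab's `pvc.yaml` is authoritative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-cephfs-pvc       # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany           # CephFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi            # the 1 GiB volume described above
  storageClassName: rook-cephfs   # assumed name of the CephFS StorageClass
```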
Apply the PersistentVolumeClaim configuration.
kubectl apply -f pvc.yaml
View the VirtualMachine CRD file.
less vm.yaml
Replace the placeholder `OCIR_NAMESPACE` in the file with the free lab's OCIR_NAMESPACE.
sed -i 's/OCIR_NAMESPACE/'"$OCIR_NAMESPACE"'/g' vm.yaml
Generate the cloud-config's user data.
cat << EOF > cloud-config-script
#cloud-config
system_info:
  default_user:
    name: opc
    ssh_authorized_keys:
      - $(cat /home/oracle/.ssh/id_rsa.pub)
users:
  - default
  - name: oracle
    lock_password: true
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - $(cat /home/oracle/.ssh/id_rsa.pub)
packages:
  - git
EOF
Create a Secret containing the cloud-init user data.
Storing the user data in a Secret allows for easy configuration sharing across multiple virtual machines. The Secret requires using a key with the name `userdata`.
kubectl create secret generic vmi-userdata-secret --from-file=userdata=cloud-config-script
Deploy the VirtualMachine.
kubectl apply -f vm.yaml
Check on the VirtualMachine creation.
kubectl get vm
Repeat the command until you see the `STATUS` change to `Running`.
Verify the Virtual Machine Creation and the Persistent Volume Storage
SSH into the VM.
virtctl ssh oracle@ol9-nocloud
Get a list of block devices within the VM.
lsblk
The 1 GiB PVC appears as the `/dev/vdb` device.
Format and mount the PVC disk.
echo ';' | sudo sfdisk /dev/vdb
sudo mkfs.xfs /dev/vdb1
sudo mkdir /u01
sudo mount /dev/vdb1 /u01
Create a file and confirm it exists on the persistent disk.
sudo touch /u01/SUCCESS
sudo ls -l /u01/
Disconnect from the VM.
exit
Delete the VM and remove its public key fingerprint from the known_hosts file.
kubectl delete vm ol9-nocloud
ssh-keygen -R vmi/ol9-nocloud.default -f ~/.ssh/kubevirt_known_hosts
Using `virtctl` creates a default `kubevirt_known_hosts` file separate from the `known_hosts` file ssh generates. The `ssh-keygen` command's `-R` option removes the public key fingerprint associated with the VM hostname, while the `-f` option points to the custom `known_hosts` file.
Confirm the removal of the VM.
kubectl get vm
The output shows there are no resources found.
Recreate the VM.
kubectl apply -f vm.yaml
Run `kubectl get vm` and wait for the `STATUS` to report as `Running`.
.Mount the block device and confirm the data on the PVC persists.
virtctl ssh oracle@ol9-nocloud -c "sudo mkdir /u01; sudo mount /dev/vdb1 /u01; sudo ls -al /u01"
The output shows the `SUCCESS` file, confirming that the data persists on the disk image stored on the Ceph Filesystem-based PVC.
Summary
That completes the demonstration of creating a VM with KubeVirt that leverages Ceph Filesystem storage provisioned through Oracle Cloud Native Environment's Rook module.