Install Oracle Cloud Native Environment Using the libvirt Provider
Introduction
Oracle Cloud Native Environment includes a Command Line Interface (CLI) that can manage the life cycle of Kubernetes clusters using OSTree-based container images. It provides several provider types for creating a Kubernetes cluster, and this tutorial shows how to use the ocne cluster command to create Kernel-based Virtual Machines (KVM) with the libvirt provider.
The libvirt provider is the default Oracle Cloud Native Environment provider and provisions Kubernetes clusters using KVM. Oracle Cloud Native Environment installs libvirt from the Oracle KVM stack because that version offers many more features for Oracle Linux-based systems.
For more information about Oracle Cloud Native Environment 2, please refer to the current Release Documentation site.
Objectives
In this tutorial, you'll learn to:
- Install the libvirt provider
- Install the Oracle Cloud Native Environment Command Line Interface (CLI)
- Use the libvirt provider to create and manage a Kubernetes cluster
Prerequisites
Minimum of one Oracle Linux instance
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts (a setup sketch follows this list)
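If key-based SSH is not already configured, a minimal setup sketch looks like the following; the oracle user and the ocne host name are placeholders for your own instance details.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id oracle@ocne
The first command generates a key pair without a passphrase, and the second copies the public key to the remote host so later SSH connections do not prompt for a password.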
If you wish to bypass the Install the libvirt Provider step, run the playbook in the Configure the Oracle Cloud Native Environment step with the -e ocne_type=libvirt option rather than -e ocne_type=none, as shown in the example below.
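For example, the deployment command shown in the next section becomes the following, which provisions the libvirt provider as part of the deployment:
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e ocne_type=libvirt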
Configure the Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne2
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e ocne_type=none
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, whose modules are located under python3.6.

The default deployment shape uses an AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Install the libvirt Provider
Set up the host to enable Kubernetes cluster creation using the libvirt provider.
Open a terminal and connect via SSH to the ocne instance.
ssh oracle@<ip_address_of_instance>
Install the Ansible package and dependencies.
sudo dnf install -y ansible-core python3-lxml
Create a requirements file for collections.
cat << EOF | tee ~/requirements.yml > /dev/null
---
collections:
  - ansible.posix
  - community.general
  - community.crypto
  - community.libvirt
EOF
Install the collections.
ansible-galaxy install -r requirements.yml
Create an Ansible configuration file.
cat << EOF | tee ~/ansible.cfg > /dev/null
[defaults]
nocows = 1
host_key_checking = false
interpreter_python = auto_silent
inventory = host
EOF
Create an inventory file.
cat << EOF | tee ~/host > /dev/null
---
server:
  hosts:
    ocne:
EOF
Verify you can connect to each host in the inventory.
ansible all -m ping
The output should list each host with a SUCCESS ping: pong response.
Create a playbook to deploy KVM with libvirt.
cat << EOF | tee ~/deploy_kvm.yml > /dev/null
- name: Gather host facts
  hosts: all
  tasks:
    - name: Run facts module
      ansible.builtin.setup:

- name: Configure VMs
  hosts: server
  become: true
  tasks:
    - name: Install Oracle Linux 8 virtualization packages
      ansible.builtin.dnf:
        name:
          - "@virt"
          - virt-install
          - virt-viewer
          - containers-common
          - cockpit
          - cockpit-machines
        state: present
      when: ansible_distribution == 'OracleLinux' and ansible_distribution_major_version == '8'

    - name: Install Oracle Linux 9 virtualization packages
      ansible.builtin.dnf:
        name:
          - qemu-kvm
          - libvirt
          - virt-install
          - virt-viewer
          - containers-common
          - cockpit
          - cockpit-machines
        state: present
      when: ansible_distribution == 'OracleLinux' and ansible_distribution_major_version == '9'

    - name: Start and enable Oracle Linux 8 monolithic virtualization services
      ansible.builtin.systemd:
        state: started
        name: libvirtd.service
        enabled: true
      when: ansible_distribution == 'OracleLinux' and ansible_distribution_major_version == '8'

    - name: Start and enable Oracle Linux 9 modular 'ro' virtualization services
      ansible.builtin.systemd:
        state: started
        name: "virt{{ item }}d-ro.socket"
        enabled: true
      loop:
        - qemu
        - network
        - nodedev
        - nwfilter
        - secret
        - storage
        - interface
        - proxy
      when: ansible_distribution == 'OracleLinux' and ansible_distribution_major_version == '9'

    - name: Start and enable Oracle Linux 9 modular 'admin' virtualization services
      ansible.builtin.systemd:
        state: started
        name: "virt{{ item }}d-admin.socket"
        enabled: true
      loop:
        - qemu
        - network
        - nodedev
        - nwfilter
        - secret
        - storage
        - interface
        - proxy
      when: ansible_distribution == 'OracleLinux' and ansible_distribution_major_version == '9'

    - name: Start and enable cockpit
      ansible.builtin.systemd:
        state: started
        name: cockpit.socket
        enabled: true

    - name: Open firewall for cockpit and virsh
      ansible.posix.firewalld:
        zone: public
        service: "{{ item }}"
        permanent: true
        state: enabled
        immediate: true
      loop:
        - libvirt
        - libvirt-tls

    - name: Add user to libvirt and qemu group
      ansible.builtin.user:
        name: "{{ username }}"
        groups: libvirt,qemu
        append: true
EOF
Run the deploy_kvm playbook.
ansible-playbook deploy_kvm.yml -e username="oracle"
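If you want to validate the playbook before it makes changes, a syntax check is a quick, safe first step; Ansible's check mode (--check) is another option, though tasks such as package installs only report what they would change.
ansible-playbook deploy_kvm.yml --syntax-check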
Validate the host supports hardware virtualization.
virt-host-validate qemu
Example Output:
[oracle@ocne ~]$ virt-host-validate qemu
  QEMU: Checking for hardware virtualization                 : PASS
  QEMU: Checking if device /dev/kvm exists                   : PASS
  QEMU: Checking if device /dev/kvm is accessible            : PASS
  QEMU: Checking if device /dev/vhost-net exists             : PASS
  QEMU: Checking if device /dev/net/tun exists               : PASS
  QEMU: Checking for cgroup 'cpu' controller support         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support      : PASS
  QEMU: Checking for cgroup 'memory' controller support      : PASS
  QEMU: Checking for cgroup 'devices' controller support     : PASS
  QEMU: Checking for cgroup 'blkio' controller support       : PASS
  QEMU: Checking for device assignment IOMMU support         : WARN (No ACPI IVRS table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
  QEMU: Checking for secure guest support                    : WARN (Unknown if this platform has Secure Guest support)
The hardware virtualization check must return a PASS status. Without this, it is not possible to use the libvirt provider to install the cluster successfully.
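If the check reports a FAIL instead, you can confirm whether the CPU exposes virtualization extensions at all by counting the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo.
grep -cE 'vmx|svm' /proc/cpuinfo
A result of zero suggests virtualization is disabled in the firmware or unavailable on the instance shape.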
Install Oracle Cloud Native Environment Runtime
Install the Oracle Cloud Native Environment Command Line Interface (CLI) onto an Oracle Linux host. The CLI provides the ocne executable that we'll use to configure the environment and install clusters.
List all running virtual machines.
virsh list
As expected, the output returns no running systems.
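Note that virsh list only shows running virtual machines. To also include machines that are defined but shut off, add the --all flag.
virsh list --all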
List the available Oracle Cloud Native Environment packages.
sudo dnf search ocne
Example Output:
[oracle@ocne ~]$ sudo dnf search ocne
Last metadata expiration check: 0:10:51 ago on Mon 23 Sep 2024 12:02:41 PM GMT.
=========================================================== Name Matched: ocne ============================================================
oracle-ocne-release-el8.src : Oracle Cloud Native Environment yum repository configuration
oracle-ocne-release-el8.x86_64 : Oracle Cloud Native Environment yum repository configuration
Install the repository package.
Oracle Linux 8
sudo dnf install -y oracle-ocne-release-el8
Oracle Linux 9
sudo dnf install -y oracle-ocne-release-el9
List the currently enabled repositories.
sudo dnf repolist
Notice that there is no Oracle Cloud Native Environment repository listed.
Enable the repository.
Oracle Linux 8
sudo dnf config-manager --enable ol8_ocne
Oracle Linux 9
sudo dnf config-manager --enable ol9_ocne
Confirm the repository is enabled.
sudo dnf repolist
Install the CLI package.
sudo dnf install -y ocne
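Optionally, confirm the package installed and that the executable is on your PATH before continuing.
rpm -q ocne
ocne help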
Create a Single Node Cluster
Create a single node Oracle Cloud Native Environment cluster.
ocne cluster start
Depending on your machine's available resources, the cluster creation can take several minutes to complete while it downloads the image source and sets it up.
Once completed, enter y to complete the installation and return to the command prompt. Ignore the rest of the post-install steps and proceed to the next step.

Example Output:
Run the following command to create an authentication token to access the UI:
    KUBECONFIG='/home/oracle/.kube/kubeconfig.ocne.local' kubectl create token ui -n ocne-system
Browser window opened, enter 'y' when ready to exit: y
INFO[2024-09-23T12:31:55Z] Post install information:

    To access the cluster from the VM host:
        copy /home/oracle/.kube/kubeconfig.ocne.vm to that host and run kubectl there
    To access the cluster from this system:
        use /home/oracle/.kube/kubeconfig.ocne.local
    To access the UI, first do kubectl port-forward to allow the browser to access the UI.
    Run the following command, then access the UI from the browser using via https://localhost:8443
        kubectl port-forward -n ocne-system service/ui 8443:443
    Run the following command to create an authentication token to access the UI:
        kubectl create token ui -n ocne-system
Install the Kubernetes command line tool (kubectl).
sudo dnf install -y kubectl
Configure kubectl to use the newly created cluster.
export KUBECONFIG=$HOME/.kube/kubeconfig.ocne.local
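Optionally, verify that kubectl can reach the cluster's API server with this kubeconfig before querying resources.
kubectl cluster-info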
Confirm that the cluster consists of only one node.
kubectl get nodes --all-namespaces
Example Output:
[oracle@ocne ~]$ kubectl get nodes --all-namespaces
NAME                   STATUS   ROLES           AGE    VERSION
ocne-control-plane-1   Ready    control-plane   3m4s   v1.30.3+1.el8
Confirm the successful deployment of the cluster.
kubectl get deployments --all-namespaces
Example Output:
[oracle@ocne ~]$ kubectl get deployments --all-namespaces
NAMESPACE     NAME           READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns        2/2     2            2           15m
ocne-system   ocne-catalog   1/1     1            1           15m
ocne-system   ui             1/1     1            1           15m
List all of the pods deployed.
kubectl get pods --all-namespaces
Example Output:
[oracle@ocne ~]$ kubectl get pods --all-namespaces
NAMESPACE      NAME                                           READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-8fbm2                          1/1     Running   1 (16m ago)   16m
kube-system    coredns-f7d444b54-njk46                        1/1     Running   0             16m
kube-system    coredns-f7d444b54-xn975                        1/1     Running   0             16m
kube-system    etcd-ocne-control-plane-1                      1/1     Running   0             16m
kube-system    kube-apiserver-ocne-control-plane-1            1/1     Running   0             16m
kube-system    kube-controller-manager-ocne-control-plane-1   1/1     Running   0             16m
kube-system    kube-proxy-jsfqs                               1/1     Running   0             16m
kube-system    kube-scheduler-ocne-control-plane-1            1/1     Running   0             16m
ocne-system    ocne-catalog-578c959566-75rr5                  1/1     Running   0             16m
ocne-system    ui-84dd57ff69-grxlk                            1/1     Running   0             16m
Create a Multi-Node Cluster
Create a new cluster with one control plane node and one worker node.
ocne cluster start --cluster-name test --control-plane-nodes 1 --worker-nodes 1
Where:
- --cluster-name (-C) : The name to use for the new cluster.
- --control-plane-nodes (-n) : The number of control plane nodes to provision.
- --worker-nodes (-w) : The number of worker nodes to provision.
See the ocne cluster documentation for more details.
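The same cluster can also be described declaratively in a cluster configuration file passed to ocne cluster start with the --config option. The following is a minimal sketch; the field names shown (provider, name, controlPlaneNodes, workerNodes) are based on the cluster configuration format and should be verified against the release documentation for your version.
cat << EOF | tee ~/test-cluster.yaml > /dev/null
provider: libvirt
name: test
controlPlaneNodes: 1
workerNodes: 1
EOF
ocne cluster start --config ~/test-cluster.yaml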
Confirm the new cluster exists.
ocne cluster list
Example Output:
[oracle@ocne ~]$ ocne cluster list
ocne
test
Configure kubectl to use the newly created cluster.
export KUBECONFIG=$HOME/.kube/kubeconfig.test.local
Confirm the creation of the new test cluster nodes.
kubectl get nodes --all-namespaces
Example Output:
[oracle@ocne ~]$ kubectl get nodes --all-namespaces
NAME                   STATUS   ROLES           AGE   VERSION
test-control-plane-1   Ready    control-plane   33m   v1.30.3+1.el8
test-worker-1          Ready    <none>          32m   v1.30.3+1.el8
Delete a Cluster
The ability to remove a cluster is an integral part of maintaining your environment.
Confirm the names of any existing clusters.
ocne cluster list
The output displays a list showing the ocne and test clusters.
Remove the test cluster.
ocne cluster delete --cluster-name test
Where:
- --cluster-name (-C) : The name of the cluster to delete.
Example Output:
[oracle@ocne ~]$ ocne cluster delete --cluster-name test
INFO[2024-09-24T18:36:18Z] Deleting volume test-control-plane-1-init.ign
INFO[2024-09-24T18:36:18Z] Deleting volume test-control-plane-1.qcow2
INFO[2024-09-24T18:36:18Z] Deleting volume test-worker-1-init.ign
INFO[2024-09-24T18:36:18Z] Deleting volume test-worker-1.qcow2
INFO[2024-09-24T18:36:18Z] Deleting file /home/oracle/.kube/kubeconfig.test.local
INFO[2024-09-24T18:36:18Z] Deleting file /home/oracle/.kube/kubeconfig.test.vm
Confirm the removal of the cluster.
ocne cluster list
Only the ocne cluster remains in the list.
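Because the libvirt provider backs each cluster node with a KVM virtual machine, you can also check at the virtualization layer that the test cluster's machines are gone, assuming your shell connects to the same libvirt instance the provider used.
virsh list --all
Only virtual machines belonging to the remaining ocne cluster should appear.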
(Optional) Install Bash Command Line Completion
Installing support for ocne Bash completion helps you type ocne commands faster and more accurately.
Install Bash completion for the ocne executable.
ocne completion bash | sudo tee /etc/bash_completion.d/ocne > /dev/null
Start a new Bash shell for this to take effect.
Note: This requires the bash-completion package to be present on your system. If it is not present, install it with:
sudo dnf install -y bash-completion
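If you prefer not to write to /etc/bash_completion.d, a per-user alternative is to load the completion script from your Bash profile instead; this assumes Bash is your login shell.
echo 'source <(ocne completion bash)' >> ~/.bashrc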
Log off from the terminal.
exit
Re-connect again via SSH to the ocne instance.
ssh oracle@<ip_address_of_instance>
Confirm Bash completion works.
Type the ocne command, add a space, and press the Tab key twice. The Bash prompt returns the available sub-commands for the ocne command.

Example Output:
[oracle@ocne ~]$ ocne
application  (Manage ocne applications)
catalog      (Manage ocne catalogs)
cluster      (Manage ocne clusters)
completion   (Generate the autocompletion script for the specified shell)
help         (Help about any command)
image        (Manage ocne images)
node         (Manage ocne nodes)
Next steps
The libvirt provider is a helpful tool for local testing and development. You now know how to use it to install and manage Oracle Cloud Native Environment 2. However, this is only the start. Check out the Oracle Linux Training Station for additional tutorials and content.