Install Oracle Cloud Native Environment Using the Oracle Cloud Infrastructure Provider
Introduction
Oracle Cloud Native Environment includes a command-line interface (CLI) that can manage the life cycle of Kubernetes clusters using OSTree-based container images. Oracle Cloud Native Environment includes several provider types you can use to create a Kubernetes cluster, and this tutorial introduces how to use the ocne cluster command to deploy a Kubernetes cluster to Oracle Cloud Infrastructure using the oci provider.
Creating a cluster on Oracle Cloud Infrastructure requires you to provide details of an existing tenancy and to have all the necessary privileges to create and destroy compute instances. The oci provider requires a compartment that is available to use. You can specify the compartment by referencing its Oracle Cloud Identifier (OCID) or its path in the compartment hierarchy.
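As a sketch of the two ways to reference the compartment, the providers section of a cluster configuration might look like either of the following. The OCID and compartment path values here are placeholders, not values from this lab:

```yaml
# Option 1: reference the compartment by its OCID (placeholder value)
providers:
  oci:
    compartment: ocid1.compartment.oc1..aaaaexampleuniqueid
```

```yaml
# Option 2: reference the compartment by its path in the
# compartment hierarchy (placeholder parent/child names)
providers:
  oci:
    compartment: mytenancy-root/mychildcompartment
```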
For more information about Oracle Cloud Native Environment 2, see the current Release Documentation.
Objectives
In this tutorial, you'll learn to:
- Install the prerequisites for the oci provider
- Use the oci provider to create and manage a Kubernetes cluster
Prerequisites
Minimum of one Oracle Linux instance
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts
OCI cluster creation requires access to the following resources in an Oracle Cloud Infrastructure tenancy:
- Virtual cloud network with four subnets
- Network load balancer
- Object Storage bucket with minimum 5 GiB available
- Compute Custom Image
- Compute Arm Shape for the control plane node
- VM.Standard.A1.Flex with 2 OCPUs and 12 GB of memory
- Compute for each additional control plane and worker node
- VM.Standard.E4.Flex with 4 OCPUs and 64 GB of memory
Configure the Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.

git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne2
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Increase the Boot volume size.
cat << EOF | tee instances.yml > /dev/null
compute_instances:
  1:
    instance_name: "ocne"
    type: "server"
    boot_volume_size_in_gbs: 128
    install_ocne_rpm: true
EOF
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@instances.yml"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.

The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Set Up the Management System
To create a Kubernetes cluster on Oracle Cloud Infrastructure, you must set up the Oracle Cloud Infrastructure CLI and create an Object Storage bucket.
Open a new terminal.
Set up the Oracle Cloud Infrastructure CLI.
Install and configure the Oracle Cloud Infrastructure CLI. Ensure you set up the key pair and configuration file. For information on setting up the CLI, see the Oracle Cloud Infrastructure documentation.
You can set this up in the free lab environment by copying the information from the provided desktop to the ocne instance.
export IP=<ip_address_of_instance>
scp -r ~/.oci oracle@$IP:~

ssh oracle@$IP '
chmod 600 ~/.oci/oci.key ~/.oci/config
sed -i "s/luna.user/oracle/g" ~/.oci/config
'
ssh oracle@$IP 'bash -s' << EOF
echo "export OCI_COMPARTMENT_OCID=$OCI_COMPARTMENT_OCID" | tee -a /home/oracle/.bashrc > /dev/null
EOF
Connect via SSH to the ocne instance.
ssh oracle@$IP
Install the Oracle Cloud Infrastructure CLI.
Oracle Linux 8:
sudo dnf install -y oraclelinux-developer-release-el8
sudo dnf install -y python36-oci-cli
Oracle Linux 9:
sudo dnf install -y oraclelinux-developer-release-el9
sudo dnf install -y python39-oci-cli
Generate a random string for the Object Storage bucket name.
export OBJ_STR=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 10 | head -n 1)
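The pipeline keeps only alphanumeric bytes from /dev/urandom, folds them into 10-character lines, and takes the first one, so the suffix is always 10 characters from [a-zA-Z0-9]. A quick local sanity check of that behavior (no OCI access needed; this variant reads /dev/urandom by redirection instead of cat, which is equivalent):

```shell
# Generate a bucket-name suffix the same way the tutorial does,
# then confirm it is exactly 10 alphanumeric characters.
OBJ_STR=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 10 | head -n 1)
echo "length: ${#OBJ_STR}"

# Strip alphanumerics; anything left over would indicate a bad character.
leftover=$(printf '%s' "$OBJ_STR" | tr -d 'a-zA-Z0-9')
echo "non-alphanumeric leftovers: '${leftover}'"
```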
Create the ocne-images Object Storage bucket in OCI.
oci os bucket create --compartment-id $OCI_COMPARTMENT_OCID --name ocne-images-$OBJ_STR
During the installation, ocne cluster start checks for a cached copy of the Oracle Container Host for Kubernetes (OCK) image locally. If the image does not exist locally, the installation pulls it and then converts and imports it into Object Storage. Then, the installation copies the image from Object Storage to a Custom Image, which it uses to create the control plane and worker instances.

Verify the Object Storage bucket exists.
oci os bucket list -c $OCI_COMPARTMENT_OCID
Deploy cluster.
ocne cluster start -u false -c <( echo "
provider: oci
name: mycluster
controlPlaneNodes: 1
workerNodes: 3
providers:
  oci:
    imageBucket: ocne-images-$OBJ_STR
    compartment: $OCI_COMPARTMENT_OCID
" )
If you need additional diagnostics, add --log-level debug to the ocne cluster start command. The -u false flag suppresses the prompt to proceed at the end of starting the cluster, because the User Interface (UI) looks for a browser. Additionally, you can add headless: true to the configuration if you do not need the UI.

Monitor the installation.
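For example, the same deployment with the UI disabled in the configuration itself might look like this sketch (identical settings to the command above, with headless added):

```yaml
provider: oci
name: mycluster
headless: true
controlPlaneNodes: 1
workerNodes: 3
providers:
  oci:
    imageBucket: ocne-images-$OBJ_STR
    compartment: $OCI_COMPARTMENT_OCID
```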
The installation takes a while to complete. It proceeds through the various steps and finishes by returning you to the bash prompt without error.
Get the Cluster Nodes
Get a list of known clusters using the CLI.
ocne cluster list
Get the location of the kube configuration.
ocne cluster show -C mycluster
We use the -C option to specify a specific cluster from the cluster list.

Set the KUBECONFIG environment variable.
export KUBECONFIG=$(ocne cluster show -C mycluster)
Wait for the cluster to stabilize and all pods to report in a running state.
watch kubectl get pods -A
Once all the pods show a STATUS of Running, type ctrl-c to exit the watch command.
Create a Small Second Cluster
The Oracle Cloud Native Environment allows you to create multiple clusters from the same CLI location and manage their resources.
Deploy cluster.
ocne cluster start -c <( echo "
provider: oci
headless: true
name: minicluster
workerNodes: 1
providers:
  oci:
    compartment: $OCI_COMPARTMENT_OCID
" )
This creates a cluster with a minimum of one control plane and one worker node. Oracle Cloud Native Environment using the oci provider requires at least one worker node to deploy pods to the cluster, because the control plane nodes have a taint applied that prevents pods from being scheduled on them.
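If you ever need a pod to run on a control plane node anyway, you can add a toleration for that taint. This sketch assumes the standard kubeadm taint key node-role.kubernetes.io/control-plane; the pod name and image are placeholders for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: control-plane-demo   # placeholder name
spec:
  # Tolerate the control plane taint so the scheduler may place
  # this pod on a control plane node.
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  containers:
  - name: demo
    image: container-registry.oracle.com/os/oraclelinux:9-slim   # placeholder image
    command: ["sleep", "infinity"]
```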
Note that we do not define imageBucket: in the configuration for this cluster. The ocne cluster start command automatically looks for the previous OCK Custom Images in the OCI tenancy compartment and ignores the Object Storage bucket. Alternatively, the installer uploads and imports the local cached copy if the images don't exist.

List the available clusters.
ocne cluster list
The output should display both the mycluster and the minicluster.
Remove a Cluster
Check if you have the KUBECONFIG environment variable set.
env | grep -i kubeconfig
You'll need to unset this variable before proceeding if it is set. That is because ocne assumes the cluster that KUBECONFIG refers to is the management node. Since we created our clusters using the defaults, the value of selfManaged is set to false; therefore, the installation uses an ephemeral cluster on the libvirt host as the management node.

unset KUBECONFIG
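If you prefer the new cluster to manage itself instead of relying on the ephemeral management cluster, the configuration accepts a selfManaged setting. A sketch, assuming the field sits in the oci provider section of the cluster configuration (verify the exact placement against the OCNE 2 configuration reference):

```yaml
provider: oci
name: mycluster
controlPlaneNodes: 1
workerNodes: 3
providers:
  oci:
    compartment: $OCI_COMPARTMENT_OCID
    selfManaged: true   # assumption: moves cluster management into the new cluster
```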
Delete the cluster.
ocne cluster delete -C mycluster
This should remove all OCI objects associated with the specific cluster except for the Custom Images and the Object Storage bucket. The Custom Images are reusable when creating a cluster of the same version. If you want to remove these items, you must do so manually using the OCI CLI, an OCI SDK, or the Cloud Console.
Next Steps
Using the oci provider helps you create a Kubernetes cluster within Oracle Cloud Infrastructure via the Cluster API. We only covered the basics in this tutorial; learn more by checking out the Oracle Cloud Native Environment documentation and the Kubernetes Cluster API Provider for Oracle Cloud Infrastructure project.