Deploy an External Load Balancer with Oracle Cloud Native Environment
Introduction
Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud native applications. The Kubernetes module is the core module. It deploys and manages containers and automatically installs and configures CRI-O and RunC. CRI-O manages the container runtime for a Kubernetes cluster, which defaults to RunC.
Objectives
At the end of this tutorial, you should be able to do the following:
- Configure the Kubernetes cluster with the Oracle Cloud Infrastructure load balancer to enable high availability
- Configure Oracle Cloud Native Environment on a 5-node cluster
- Verify Load Balancer failover between the control plane nodes completes successfully
Support Note: We recommend using an external load balancer such as Oracle Cloud Infrastructure Load Balancer for production deployments.
Prerequisites
The free lab environment uses the following host systems:
6 Oracle Linux instances for Oracle Cloud Native Environment:
- An operator node (ocne-operator)
- 3 Kubernetes control plane nodes (ocne-control-01, ocne-control-02, ocne-control-03 )
- 2 Kubernetes worker nodes (ocne-worker-01, ocne-worker-02)
An Oracle Linux system for installing kubectl (devops-node)

Note: We recommend that production environments have a cluster with at least five control plane nodes and three worker nodes.
Configure each system with the following:
- The latest Oracle Linux with the Unbreakable Enterprise Kernel Release 7 (UEK R7)
- An oracle user account with sudo access
- Key-based SSH, also known as passwordless SSH, between the instances
(Optional) Set up Oracle Cloud Infrastructure Load Balancer
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
In the free lab environment, the load balancer setup and configuration steps in this section are not required as the lab deployment completes them. We provide these steps for those wanting to replicate them on their Oracle Cloud Infrastructure tenancy.
Create an Oracle Cloud Infrastructure Load Balancer
Log in to the Cloud Console.
Click on the navigation menu in the page's top-left corner, then Networking and Load Balancers.
The Load Balancers page displays.
Locate your compartment within the Compartments drop-down list.
Click on the Create Load Balancer button.
In the pop-up dialog box, click the Create Load Balancer button to select the default Load Balancer type.
Update Load Balancer Details
Locate the default Visibility type section and click the Private option.
Scroll down the page to the Choose Networking section.
Select a value from the drop-down lists for Virtual Cloud Network and Subnet.
Note: The values shown in the image differ each time the lab is deployed.
Click the Next button to move to the next step.
Set Load Balancer Policy and Protocol
Set the Load Balancing Policy and Health Check Protocol.
- Accept the default Load Balancing Policy which is Weighted Round Robin.
Enter the settings shown in the Specify Health Check Policy section.
- Under Protocol select TCP.
- Change the Port value from 80 to 6443.
Add Backend Nodes
Click the Add Backends button to open its dialog window.
Select the following nodes, and click the Add Selected Backends button.
- ocne-control-01
- ocne-control-02
- ocne-control-03
Update the Port column for each of the newly selected backend servers from the default value of 80 to 6443.
Click the Next button to proceed to the next step.
Configure the Load Balancer Listener
Select the TCP button.
Change the Port used for ingress traffic from 443 to 6443.
The entered values should look like this image.
Click the Next button.
Configure Load Balancer Logging
The final step during the Setup process is the Manage Logging option.
This scenario requires no changes, so click the Submit button to create the load balancer.
Load Balancer Information
An overview page displays the newly created load balancer.
Note: The Overall Health and Backend Sets Health sections may not display as a green OK because we have yet to create the Kubernetes cluster for Oracle Cloud Native Environment.
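Once the Kubernetes module is installed later in this tutorial, you can sanity-check from any host that can reach the load balancer that the listener accepts connections on port 6443. A minimal sketch using bash's built-in /dev/tcp; the host and port in the example call are placeholders, not lab values:

```shell
#!/usr/bin/env bash
# Sketch: probe a TCP endpoint, such as the load balancer listener on 6443.
probe_tcp() {
  local host=$1 port=$2
  # /dev/tcp is a bash built-in path; timeout guards against silent drops
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# A port with no listener reports closed (connection refused)
probe_tcp 127.0.0.1 1
```

Against the real deployment you would call, for example, `probe_tcp 10.0.0.<XXX> 6443` with your load balancer's private IP.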
Create a Platform CLI Configuration File
Administrators can use a configuration file to simplify creating and managing environments and modules. The configuration file, written in valid YAML syntax, includes all information about creating the environments and modules. Using a configuration file saves repeated entries of various Platform CLI command options.
Note: If entering more than one control plane node in the myenvironment.yaml file when configuring Oracle Cloud Native Environment, then olcnectl requires a load balancer entry. For the free lab environment, the load balancer is the Oracle Cloud Infrastructure Load Balancer. The entry is a new argument (load-balancer: <enter-your-OCI-LB-ip-here>) in the myenvironment.yaml file.
Example Configuration File:
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/configs/certificates/production/ca.cert
      olcne-node-cert-path: /etc/olcne/configs/certificates/production/node.cert
      olcne-node-key-path: /etc/olcne/configs/certificates/production/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          load-balancer: 10.0.0.<XXX>:6443
          master-nodes: ocne-control-01:8090,ocne-control-02:8090,ocne-control-03:8090
          worker-nodes: ocne-worker-01:8090,ocne-worker-02:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/configs/certificates/restrict_external_ip/production/node.key
During the free lab deployment, a configuration file is automatically generated and ready to use in the exercise. More information on manually creating a configuration file is in the Using a Configuration File documentation.
Update the Configuration File
Open a terminal and connect via ssh to the operator node.
ssh oracle@<ip_address_of_operator_node>
View the configuration file contents.
cat ~/myenvironment.yaml
Add the load balancer IPv4 address value to the configuration file.
The free lab environment completes the following preparatory steps during deployment on the operator node, which allows the oci command to succeed:

- Install the Oracle Cloud Infrastructure CLI.
- Look up the OCID of the compartment hosting the Oracle Cloud Native Environment.
- Add these environment variables to the user's ~/.bashrc file:

COMPARTMENT_OCID=<compartment_ocid>
LC_ALL=C.UTF-8
LANG=C.UTF-8
LB_IP=$(oci lb load-balancer list --auth instance_principal --compartment-id $COMPARTMENT_OCID | jq -r '.data[]."ip-addresses"[]."ip-address"')
sed -i "14i\ load-balancer: $LB_IP:6443" ~/myenvironment.yaml
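The lookup-and-insert pipeline can be exercised locally against made-up sample data. The JSON shape mirrors what `oci lb load-balancer list` returns in the lab, and line 14 is where the lab's generated `myenvironment.yaml` expects the `load-balancer:` argument; both are assumptions here, and the toy file below only demonstrates the sed syntax:

```shell
# Sketch: the same jq filter run against a trimmed, made-up sample of the
# JSON returned by `oci lb load-balancer list`.
sample='{"data":[{"ip-addresses":[{"ip-address":"10.0.0.99","is-public":false}]}]}'
LB_IP=$(echo "$sample" | jq -r '.data[]."ip-addresses"[]."ip-address"')
echo "$LB_IP"   # 10.0.0.99

# Sketch: the sed insertion against a 15-line stand-in for myenvironment.yaml;
# in the real file, line 14 sits inside the Kubernetes module's args block.
printf 'line %s\n' $(seq 1 15) > /tmp/myenvironment.yaml
sed -i "14i\ load-balancer: $LB_IP:6443" /tmp/myenvironment.yaml
grep 'load-balancer' /tmp/myenvironment.yaml
```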
Confirm the load balancer value in the configuration file.
cat ~/myenvironment.yaml
Create the Environment and Kubernetes Module
Create the environment.
cd ~
olcnectl environment create --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl environment create --config-file myenvironment.yaml
Environment myenvironment created.
Create the Kubernetes module.
olcnectl module create --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module create --config-file myenvironment.yaml
Modules created successfully.
Validate the Kubernetes module.
olcnectl module validate --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module validate --config-file myenvironment.yaml
Validation of module mycluster succeeded.
In the free lab environment, there are no validation errors. The command's output provides the steps required to fix the nodes if there are any errors.
Install the Kubernetes module.
olcnectl module install --config-file myenvironment.yaml
Note: The deployment of Kubernetes to the nodes will take several minutes to complete.
Example Output:
[oracle@ocne-operator ~]$ olcnectl module install --config-file myenvironment.yaml
Modules installed successfully.
Verify the deployment of the Kubernetes module.
olcnectl module instances --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --config-file myenvironment.yaml
INSTANCE                MODULE          STATE
mycluster               kubernetes      installed
ocne-control-01:8090    node            installed
ocne-control-02:8090    node            installed
ocne-control-03:8090    node            installed
ocne-worker-01:8090     node            installed
ocne-worker-02:8090     node            installed
Set up the Kubernetes Command Line Tool
Open a terminal and connect via ssh to the devops node.
ssh oracle@<ip_address_of_devops_node>
Configure the node to run kubectl.

mkdir -p $HOME/.kube
ssh -o StrictHostKeyChecking=no 10.0.0.150 "sudo cat /etc/kubernetes/admin.conf" > $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
source $HOME/.bashrc
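To confirm which endpoint kubectl will talk to, you can read the `server:` entry out of the kubeconfig; with a cluster built behind the load balancer, this should be the load balancer address on port 6443. The sketch below uses a minimal made-up kubeconfig (with a placeholder IP) rather than the real admin.conf:

```shell
# Sketch: read the API server endpoint out of a kubeconfig. The file below is
# a minimal made-up stand-in for the admin.conf copied from a control plane node.
cat > /tmp/kubeconfig-sample <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.0.99:6443
  name: mycluster
EOF

# With a load-balanced cluster, expect the load balancer IP and port 6443 here
awk '/server:/ {print $2}' /tmp/kubeconfig-sample
```

Run the same awk line against `$HOME/.kube/config` to check the real cluster.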
Verify kubectl works.

kubectl get nodes -o wide
Example Output:
[oracle@devops-node ~]$ kubectl get nodes -o wide
NAME              STATUS   ROLES           AGE     VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                   CONTAINER-RUNTIME
ocne-control-01   Ready    control-plane   8m23s   v1.28.3+3.el8   10.0.0.150    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
ocne-control-02   Ready    control-plane   7m49s   v1.28.3+3.el8   10.0.0.151    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
ocne-control-03   Ready    control-plane   5m31s   v1.28.3+3.el8   10.0.0.152    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
ocne-worker-01    Ready    <none>          5m5s    v1.28.3+3.el8   10.0.0.160    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
ocne-worker-02    Ready    <none>          5m2s    v1.28.3+3.el8   10.0.0.161    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
Confirm the Load Balancer Manages a Control Plane Node Outage
The installation of Oracle Cloud Native Environment with three control plane nodes behind an external load balancer is complete.
This section confirms that the external load balancer will detect when a control plane node fails and remove it from the Round Robin traffic distribution policy. The testing steps demonstrate that when the 'missing' node recovers, it automatically rejoins the cluster and becomes available to handle cluster-based traffic again.
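The behavior being tested can be sketched in plain bash as a round-robin selector that skips any backend whose health check is not OK. The backend names and health states below are illustrative only, not read from the real load balancer:

```shell
# Sketch: round robin with health exclusion, modeled in plain bash.
declare -A health=(
  [ocne-control-01]=OK
  [ocne-control-02]=OK
  [ocne-control-03]=CRITICAL   # stands in for the stopped instance
)
backends=(ocne-control-01 ocne-control-02 ocne-control-03)
turn=0

next_backend() {
  # Rotate through the list, skipping any backend not marked OK
  local tries=0 b
  while [ "$tries" -lt "${#backends[@]}" ]; do
    b=${backends[$((turn % ${#backends[@]}))]}
    turn=$((turn + 1))
    tries=$((tries + 1))
    if [ "${health[$b]}" = OK ]; then echo "$b"; return 0; fi
  done
  return 1   # no healthy backend left
}

# Four requests: the CRITICAL node is skipped each time its turn comes up
for request in 1 2 3 4; do next_backend; done
```

Flipping ocne-control-03 back to OK models the recovered node rejoining the rotation.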
Confirm the Load Balancer is Active
If using the free lab environment, use the Oracle Linux Lab Basics instructions to access the Cloud Console and find your compartment.
Access the Load Balancer Details
Connect and log in to the Cloud Console.
Click on the navigation menu in the page's top-left corner, then Networking and Load Balancers.
The Load Balancers page displays.
Locate your compartment within the Compartments drop-down list.
The pre-created Load Balancer summary details should look similar to this image.
Click on the ocne-load-balancer link.
Scroll down to the Backend Sets link on the left-hand side of the browser under the Resources heading.
Click on the Backend Sets link.
A link to the existing Backend Set displays.
Click the ocne-lb-backend-set link.
Click on the Backends link on the left-hand side of the browser under the Resources heading.
The screen confirms there are three healthy nodes present.
Stop One of the Control Plane Node Instances
Click on the navigation menu in the page's top-left corner, and navigate to Compute and then Instances.
The Instances page displays.
Click on one of the control plane nodes listed (for example: ocne-control-01).
This displays the Instance details.
Click on the Stop button.
In the pop-up dialog box, select the Force stop the instance by immediately powering off checkbox, and then click the Force stop instance button.
Note: Do NOT do this on a production system, because forcing a power-off may cause data loss or corruption for the entire system.
Wait until the Instance details page confirms the instance is Stopped.
Verify the Oracle Cloud Infrastructure Load Balancer Registers the Control Plane Node Failure
Navigate back to Networking > Load Balancers.
Click the ocne-load-balancer link.
Under Resources on the lower left panel, click on Backend Sets.
Then click on the displayed name in the Backend Sets table (example: ocne-lb-backend-set).
Click Backends to display the nodes.
The status on this page should update automatically within 2-3 minutes.
The page initially displays a Warning status.
A few minutes later, the status updates to Critical, indicating that the Oracle Cloud Infrastructure Load Balancer health check has confirmed the node is unresponsive. The load balancer therefore no longer forwards incoming requests to the unavailable backend control plane node.
Confirm the Oracle Cloud Native Environment Cluster Responds
Because at least two of the cluster's three control plane nodes remain active, the cluster keeps quorum, and the active control plane nodes should respond to kubectl commands.
Switch to the devops-node terminal session.
Verify kubectl responds and reports the just-stopped control plane node as NotReady (Unavailable).

kubectl get nodes
Example Output:
[oracle@devops-node ~]$ kubectl get nodes
NAME              STATUS     ROLES           AGE     VERSION
ocne-control-01   NotReady   control-plane   10m     v1.28.3+3.el8
ocne-control-02   Ready      control-plane   10m     v1.28.3+3.el8
ocne-control-03   Ready      control-plane   8m6s    v1.28.3+3.el8
ocne-worker-01    Ready      <none>          7m40s   v1.28.3+3.el8
ocne-worker-02    Ready      <none>          7m37s   v1.28.3+3.el8
[oracle@devops-node ~]$
Note: It takes approximately 2-3 minutes before the NotReady status displays. Repeat the kubectl get nodes command as necessary until the status changes from Ready to NotReady.
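Rather than rerunning the command by hand, the wait can be scripted as a small poll loop. The helper below is a generic sketch; against a real cluster you would invoke it as `wait_for_status NotReady kubectl get nodes`, while the demo polls a local file so it runs anywhere:

```shell
# Sketch: poll a command until its output matches a pattern, up to 30 tries.
wait_for_status() {
  local pattern=$1; shift
  local attempt
  for attempt in $(seq 1 30); do
    if "$@" | grep -q "$pattern"; then
      echo "matched after ${attempt} attempt(s)"
      return 0
    fi
    sleep 1
  done
  return 1   # gave up after 30 polls
}

# Demo without a cluster: the "status" flips shortly after polling starts
echo Ready > /tmp/node-status
( sleep 0.2; echo NotReady > /tmp/node-status ) &
wait_for_status NotReady cat /tmp/node-status
```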
Restart the Stopped Control Plane Node
Switch to the Cloud Console within the browser.
Navigate to Compute > Instances.
Click on the stopped control plane node.
Start the instance by clicking the Start button.
Wait until the Status section turns Green and confirms the Instance is Running.
Verify the Control Plane Node Rejoins the Oracle Cloud Infrastructure Load Balancer Cluster
Navigate back to Networking > Load Balancers.
Click the ocne-load-balancer link.
Under Resources on the lower left panel, click on Backend Sets.
Then click on the displayed name in the Backend Sets table (example: ocne-lb-backend-set).
Click Backends to display the nodes.
Note: Status changes may take 2-3 minutes.
The Overall Health shows a Warning status until the node restarts and is detected.
Once detected, the Overall Health reports as a green OK.
The control plane node rejoins the cluster, and all three control plane nodes participate in the round-robin distribution of incoming traffic to the cluster.
Get the Control Plane Node Status
Switch to the devops-node terminal session.
Verify kubectl responds and reports all control plane nodes as Ready (Available).

kubectl get nodes
Example Output:
[oracle@devops-node ~]$ kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
ocne-control-01   Ready    control-plane   10m     v1.28.3+3.el8
ocne-control-02   Ready    control-plane   10m     v1.28.3+3.el8
ocne-control-03   Ready    control-plane   8m6s    v1.28.3+3.el8
ocne-worker-01    Ready    <none>          7m40s   v1.28.3+3.el8
ocne-worker-02    Ready    <none>          7m37s   v1.28.3+3.el8
[oracle@devops-node ~]$
Summary
These steps confirm that Oracle Cloud Infrastructure Load Balancer has been configured correctly and accepts requests successfully for Oracle Cloud Native Environment.
For More Information
Deploy an External Load Balancer with Oracle Cloud Native Environment
Confirm the Load Balancer Manages a Control Plane Node Outage