Deploy an External Load Balancer with Oracle Cloud Native Environment



Introduction

Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud native applications. The Kubernetes module is the core module. It deploys and manages containers and automatically installs and configures CRI-O and RunC. CRI-O manages the container runtime for a Kubernetes cluster, which defaults to RunC.

Objectives

At the end of this tutorial, you should be able to do the following:

  • Configure the Kubernetes cluster with the Oracle Cloud Infrastructure load balancer to enable high availability
  • Configure Oracle Cloud Native Environment on a 5-node cluster
  • Verify Load Balancer failover between the control plane nodes completes successfully

Support Note: We recommend using an external load balancer such as Oracle Cloud Infrastructure Load Balancer for production deployments.

Prerequisites

The free lab environment uses the following host systems:

  • 6 Oracle Linux instances for Oracle Cloud Native Environment:

    • An operator node (ocne-operator)
    • 3 Kubernetes control plane nodes (ocne-control-01, ocne-control-02, ocne-control-03)
    • 2 Kubernetes worker nodes (ocne-worker-01, ocne-worker-02)
  • An Oracle Linux system for installing kubectl (devops-node)

    Note: We recommend that production environments have a cluster with at least five control plane nodes and three worker nodes.

  • Configure each system with the following:

    • The latest Oracle Linux with the Unbreakable Enterprise Kernel Release 7 (UEK R7)
    • An oracle user account with sudo access
    • Key-based SSH, also known as passwordless SSH, between the instances

(Optional) Set up Oracle Cloud Infrastructure Load Balancer

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

In the free lab environment, the load balancer setup and configuration steps in this section are not required because the lab deployment completes them. We provide these steps for anyone who wants to replicate them in their own Oracle Cloud Infrastructure tenancy.

Create an Oracle Cloud Infrastructure Load Balancer

  1. Log in to the Cloud Console.

  2. Click on the navigation menu in the page's top-left corner, then Networking and Load Balancers.


  3. The Load Balancers page displays.


  4. Locate your compartment within the Compartments drop-down list.


  5. Click on the Create Load Balancer button.


  6. In the pop-up dialog box, click the Create Load Balancer button to select the default Load Balancer type.


Update Load Balancer Details

  1. Locate the default Visibility type section and click the Private option.


  2. Scroll down the page to the Choose Networking section.

  3. Select values from the drop-down lists for Virtual Cloud Network and Subnet.

    Note: The displayed values differ each time the lab is initiated.

  4. Click the Next button to move to the next step.

Set Load Balancer Policy and Protocol

  1. Set the Load Balancing Policy and Health Check Protocol.

    • Accept the default Load Balancing Policy, which is Weighted Round Robin.
  2. Enter the settings shown in the Specify Health Check Policy section.

    • Under Protocol select TCP.
    • Change the Port value from 80 to 6443.


Add Backend Nodes

  1. Click the Add Backends button to open its dialog window.


  2. Select the following nodes, and click the Add Selected Backends button.

    • ocne-control-01
    • ocne-control-02
    • ocne-control-03
  3. Update the Port column for each of the newly selected backend servers from the default value of 80 to 6443.


  4. Click the Next button to proceed to the next step.

Configure the Load Balancer Listener

  1. Select the TCP button.

  2. Change the Port used for ingress traffic from 443 to 6443.


  3. Click the Next button.

Configure Load Balancer Logging

  1. The final step in the setup process is the Manage Logging option.

    This scenario requires no changes, so click the Submit button to create the load balancer.

Load Balancer Information

  1. An overview page displays the newly created load balancer.


    Note: The Overall Health and Backend Sets Health sections may not display as a green OK because we have yet to create the Kubernetes cluster for Oracle Cloud Native Environment.

Create a Platform CLI Configuration File

Administrators can use a configuration file to simplify creating and managing environments and modules. The configuration file, written in valid YAML syntax, includes all information about creating the environments and modules. Using a configuration file saves repeated entries of various Platform CLI command options.

Note: If entering more than one control plane node in the myenvironment.yaml when configuring Oracle Cloud Native Environment, olcnectl requires a load balancer entry. In the free lab environment, the load balancer is the Oracle Cloud Infrastructure Load Balancer. The entry is an additional argument (load-balancer: <enter-your-OCI-LB-ip-here>) in the myenvironment.yaml file.

Example Configuration File:

environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/configs/certificates/production/ca.cert
      olcne-node-cert-path: /etc/olcne/configs/certificates/production/node.cert
      olcne-node-key-path: /etc/olcne/configs/certificates/production/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          load-balancer: 10.0.0.<XXX>:6443
          master-nodes: ocne-control-01:8090,ocne-control-02:8090,ocne-control-03:8090
          worker-nodes: ocne-worker-01:8090,ocne-worker-02:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert
          restrict-service-externalip-tls-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert
          restrict-service-externalip-tls-key: /etc/olcne/configs/certificates/restrict_external_ip/production/node.key

During the free lab deployment, a configuration file is automatically generated and ready to use in the exercise. More information on manually creating a configuration file is in the Using a Configuration File documentation.

Update the Configuration File

  1. Open a terminal and connect via ssh to the operator node.

    ssh oracle@<ip_address_of_operator_node>
  2. View the configuration file contents.

    cat ~/myenvironment.yaml
  3. Add the load balancer IPv4 address value to the configuration file.

    The free lab environment completes the following preparatory steps during deployment on the operator node, which allows the oci command to succeed.

    • Install Oracle Cloud Infrastructure CLI.

    • Look up the OCID of the Compartment hosting the Oracle Cloud Native Environment.

    • Add these environment variables to the user's ~/.bashrc file.

      • COMPARTMENT_OCID=<compartment_ocid>
      • LC_ALL=C.UTF-8
      • LANG=C.UTF-8
    LB_IP=$(oci lb load-balancer list --auth instance_principal --compartment-id $COMPARTMENT_OCID | jq -r '.data[]."ip-addresses"[]."ip-address"')
    sed -i "14i\          load-balancer: $LB_IP:6443" ~/myenvironment.yaml
  4. Confirm the load balancer value in the configuration file.

    cat ~/myenvironment.yaml
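The two commands in step 3 can be sketched against throwaway inputs to show what each one does. In this sketch, a sample JSON payload and a scratch file stand in for the live oci output and the real myenvironment.yaml; the 10.0.0.99 address is a placeholder:

```shell
# What the jq filter extracts: the load balancer's IP address from the
# oci CLI's JSON listing (sample payload in place of a live API call).
sample='{"data":[{"ip-addresses":[{"ip-address":"10.0.0.99","is-public":false}]}]}'
echo "$sample" | jq -r '.data[]."ip-addresses"[]."ip-address"'
# prints: 10.0.0.99

# What the sed "14i\" form does: insert a new line before line 14 of
# the file, with indentation matching the module's args block.
printf 'line%s\n' $(seq 1 20) > /tmp/demo.yaml
sed -i '14i\          load-balancer: 10.0.0.99:6443' /tmp/demo.yaml
sed -n '13,15p' /tmp/demo.yaml
```

The hard-coded line number is why step 4 re-checks the file: if the generated myenvironment.yaml ever changes shape, the inserted line could land outside the Kubernetes module's args block.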

Create the Environment and Kubernetes Module

  1. Create the environment.

    cd ~
    olcnectl environment create --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl environment create --config-file myenvironment.yaml
    Environment myenvironment created.
  2. Create the Kubernetes module.

    olcnectl module create --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module create --config-file myenvironment.yaml
    Modules created successfully.
  3. Validate the Kubernetes module.

    olcnectl module validate --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module validate --config-file myenvironment.yaml
    Validation of module mycluster succeeded.

    In the free lab environment, there are no validation errors. The command's output provides the steps required to fix the nodes if there are any errors.

  4. Install the Kubernetes module.

    olcnectl module install --config-file myenvironment.yaml

    Note: The deployment of Kubernetes to the nodes will take several minutes to complete.

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module install --config-file myenvironment.yaml
    Modules installed successfully.
  5. Verify the deployment of the Kubernetes module.

    olcnectl module instances --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module instances --config-file myenvironment.yaml
    INSTANCE              MODULE      STATE    
    mycluster             kubernetes  installed
    ocne-control-01:8090  node        installed
    ocne-control-02:8090  node        installed
    ocne-control-03:8090  node        installed
    ocne-worker-01:8090   node        installed
    ocne-worker-02:8090   node        installed

Set up the Kubernetes Command Line Tool

  1. Open a terminal and connect via ssh to the devops node.

    ssh oracle@<ip_address_of_devops_node>
  2. Configure the node to run kubectl.

    mkdir -p $HOME/.kube
    ssh -o StrictHostKeyChecking=no 10.0.0.150 "sudo cat /etc/kubernetes/admin.conf" > $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
    source $HOME/.bashrc
  3. Verify kubectl works.

    kubectl get nodes -o wide

    Example Output:

    [oracle@devops-node ~]$ kubectl get nodes -o wide
    NAME              STATUS   ROLES           AGE     VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                   CONTAINER-RUNTIME
    ocne-control-01   Ready    control-plane   8m23s   v1.28.3+3.el8   10.0.0.150    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
    ocne-control-02   Ready    control-plane   7m49s   v1.28.3+3.el8   10.0.0.151    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
    ocne-control-03   Ready    control-plane   5m31s   v1.28.3+3.el8   10.0.0.152    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
    ocne-worker-01    Ready    <none>          5m5s    v1.28.3+3.el8   10.0.0.160    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
    ocne-worker-02    Ready    <none>          5m2s    v1.28.3+3.el8   10.0.0.161    <none>        Oracle Linux Server 8.9   5.15.0-202.135.2.el8uek.x86_64   cri-o://1.28.2
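As a quick check that kubectl traffic actually flows through the external load balancer, inspect the server: entry in the copied kubeconfig; with a load balancer configured, it should reference the load balancer address on port 6443 rather than a single control plane node. A sketch against a sample file (on the devops node, grep $HOME/.kube/config directly; the 10.0.0.99 address is a stand-in):

```shell
# Sketch: inspect the kubeconfig's API server endpoint. A sample file
# stands in for $HOME/.kube/config; the IP is a placeholder for your
# load balancer's address.
cat > /tmp/kubeconfig-sample <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://10.0.0.99:6443
  name: kubernetes
EOF
grep 'server:' /tmp/kubeconfig-sample
```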

Confirm the Load Balancer Manages a Control Plane Node Outage

The installation of Oracle Cloud Native Environment with three control plane nodes behind an external load balancer is complete.

This section confirms that the external load balancer detects when a control plane node fails and removes it from the round-robin traffic distribution. The testing steps demonstrate that when the 'missing' node recovers, it automatically rejoins the cluster and becomes available to handle cluster traffic again.

Confirm the Load Balancer is Active

If using the free lab environment, use the Oracle Linux Lab Basics instructions to access the Cloud Console and find your compartment.

Access the Load Balancer Details

  1. Connect and log in to the Cloud Console.


  2. Click on the navigation menu in the page's top-left corner, then Networking and Load Balancers.


  3. The Load Balancers page displays.


  4. Locate your compartment within the Compartments drop-down list.


  5. Review the pre-created Load Balancer summary details.

  6. Click on the ocne-load-balancer link.

  7. Scroll down to the Backend Sets link on the left-hand side of the browser under the Resources heading.

  8. Click on the Backend Sets link.


    A link to the existing Backend Set displays.

  9. Click the ocne-lb-backend-set link.


  10. Click on the Backends link on the left-hand side of the browser under the Resources heading.


  11. The screen confirms there are three healthy nodes present.


Stop One of the Control Plane Node Instances

  1. Click on the navigation menu in the page's top-left corner, and navigate to Compute and then Instances.


  2. The Instances page displays.


  3. Click on one of the control plane nodes listed (for example: ocne-control-01).

  4. This displays the Instance details.


  5. Click on the Stop button.

  6. In the pop-up dialog box, select the Force stop the instance by immediately powering off checkbox, and then click the Force stop instance button.

    Note: Do NOT do this on a production system, as forcing a power-off may cause data loss or corruption.


  7. Wait until the Instance details page confirms the instance is Stopped.


Verify the Oracle Cloud Infrastructure Load Balancer Registers the Control Plane Node Failure

  1. Navigate back to Networking > Load Balancers.

  2. Click the ocne-load-balancer link.

  3. Under Resources on the lower left panel, click on Backend Sets.

  4. Then click on the displayed name in the Backend Sets table (example: ocne-lb-backend-set).

  5. Click Backends to display the nodes.

    The status on this page should update automatically within 2-3 minutes.

  6. The page initially displays a Warning status.


  7. A few minutes later, the status updates to a Critical status. This status indicates that the Oracle Cloud Infrastructure Load Balancer process confirms the node as unresponsive. Therefore the load balancer will no longer forward incoming requests to the unavailable backend control plane node.


Confirm the Oracle Cloud Native Environment Cluster Responds

Given that at least two of the three control plane nodes remain active, the cluster retains etcd quorum (a majority, two of three members), and the active control plane nodes continue to respond to kubectl commands.

  1. Switch to the devops-node terminal session.

  2. Verify kubectl responds and reports the control plane node just stopped as NotReady (Unavailable).

    kubectl get nodes

    Example Output:

    [oracle@devops-node ~]$ kubectl get nodes
    NAME              STATUS   ROLES           AGE     VERSION
    ocne-control-01   NotReady control-plane   10m     v1.28.3+3.el8
    ocne-control-02   Ready    control-plane   10m     v1.28.3+3.el8
    ocne-control-03   Ready    control-plane   8m6s    v1.28.3+3.el8
    ocne-worker-01    Ready    <none>          7m40s   v1.28.3+3.el8
    ocne-worker-02    Ready    <none>          7m37s   v1.28.3+3.el8
    [oracle@devops-node ~]$

    Note: It takes approximately 2-3 minutes before the NotReady status displays. Repeat the kubectl get nodes command if necessary until the status changes from Ready to NotReady.
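Rather than rerunning the command by hand, the wait can be scripted with a small bounded retry helper. This is a sketch, not part of the lab; the kubectl invocation at the end is commented out because it needs the live cluster:

```shell
# Sketch: retry a command until it succeeds or attempts run out.
# Usage: wait_for <attempts> <delay-seconds> <command...>
wait_for() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Against the cluster (uncomment on the devops node): poll up to
# 5 minutes for the stopped node to report NotReady.
# wait_for 30 10 sh -c "kubectl get nodes | grep -q NotReady"
```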

Restart the Stopped Control Plane Node

  1. Switch to the Cloud Console within the browser.

  2. Navigate to Compute > Instances.

  3. Click on the stopped control plane node.

  4. Start the instance by clicking the Start button.

    Wait until the Status section turns Green and confirms the Instance is Running.


Verify the Control Plane Node Rejoins the Oracle Cloud Infrastructure Load Balancer Cluster

  1. Navigate back to Networking > Load Balancers.

  2. Click the ocne-load-balancer link.

  3. Under Resources on the lower left panel, click on Backend Sets.

  4. Then click on the displayed name in the Backend Sets table (example: ocne-lb-backend-set).

  5. Click Backends to display the nodes.

    Note: Status changes may take 2-3 minutes.

  6. The Overall Health shows a Warning status until the node restarts and is detected.


  7. Once detected, the Overall Health reports as a green OK.


The control plane node rejoins the cluster, and all three control plane nodes participate in the round-robin distribution of incoming traffic to the cluster.

Get the Control Plane Node Status

  1. Switch to the devops-node terminal session.

  2. Verify kubectl responds and reports the control plane node as Ready (Available).

    kubectl get nodes

    Example Output:

    [oracle@devops-node ~]$ kubectl get nodes
    NAME              STATUS   ROLES           AGE     VERSION
    ocne-control-01   Ready    control-plane   10m     v1.28.3+3.el8
    ocne-control-02   Ready    control-plane   10m     v1.28.3+3.el8
    ocne-control-03   Ready    control-plane   8m6s    v1.28.3+3.el8
    ocne-worker-01    Ready    <none>          7m40s   v1.28.3+3.el8
    ocne-worker-02    Ready    <none>          7m37s   v1.28.3+3.el8
    [oracle@devops-node ~]$

Summary

These steps confirm that Oracle Cloud Infrastructure Load Balancer has been configured correctly and accepts requests successfully for Oracle Cloud Native Environment.
