How to Upgrade Oracle Cloud Native Environment


Introduction

Administrators of an Oracle Cloud Native Environment deployment know that many of its components, such as Kubernetes, are updated regularly. The upgrade process brings these components up to the most recent release, making any new features available to users.

This tutorial shows how to upgrade an Oracle Cloud Native Environment Release 1.7 installation to Release 1.8.

Note: Oracle Cloud Native Environment upgrades are incremental, not cumulative. This is a limitation of the Kubernetes and Istio modules, both of which require incremental upgrades.
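Before starting, you can confirm which release is currently installed by querying the installed packages (a quick optional check; the package names match those used later in this lab):

    rpm -qa | grep olcne

On a Release 1.7 operator node, this lists packages such as olcnectl-1.7.5-17.el8.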

Objectives

In this lab, you'll learn how to:

  • Upgrade an Oracle Cloud Native Environment install from Release 1.7 to Release 1.8.

Prerequisites

  • 3 Oracle Linux systems to use as:

    • Operator node (ocne-operator)
    • Kubernetes control plane node (ocne-control)
    • Kubernetes worker node (ocne-worker)
  • Each system should have the latest Oracle Linux 8 (x86_64) installed

  • This environment is pre-configured with:

    • An oracle user account (used during the install) with sudo access
    • Key-based SSH, also known as passwordless SSH, between the hosts
    • Installation of Oracle Cloud Native Environment and CCM module

Set Up Lab Environment

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

Information: The free lab deploys a running Oracle Cloud Native Environment with the CCM module. The deployment takes approximately 20 minutes to finish after launch, so you may want to step away while it runs and return once it completes.

  1. Open a terminal and connect via ssh to the ocne-control node.

    ssh oracle@<ip_address_of_ol_node>

    Hint: Although most of the steps are completed on either the ocne-control or ocne-operator node, it may save time to open a terminal session to each of the nodes at this point.

Changing the Software Packages Source

  1. (On all nodes) Subscribe each node to the Oracle Cloud Native Environment Release 1.8 repository and disable the repositories for the previous releases.

    sudo dnf update oracle-olcne-release-el8
    sudo dnf config-manager --enable ol8_olcne18
    sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12
    

    Example Output:

    [oracle@ocne-control-01 ~]$ sudo dnf update oracle-olcne-release-el8
    Last metadata expiration check: 0:10:08 ago on Fri Feb  9 11:13:34 2024.
    Dependencies resolved.
    Nothing to do.
    Complete!
    [oracle@ocne-control-01 ~]$ sudo dnf config-manager --enable ol8_olcne18
    [oracle@ocne-control-01 ~]$ sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12
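
    To verify that only the Release 1.8 repository is now enabled, you can list the enabled olcne repositories (an optional check):

    sudo dnf repolist --enabled | grep olcne

    The output should include ol8_olcne18 and none of the earlier ol8_olcne1x repositories.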

Upgrade the Operator Node

The Operator Node is upgraded with the new Oracle Cloud Native Environment packages.

  1. (Operator Node) Stop the olcne-api-server service.

    sudo systemctl stop olcne-api-server.service
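
    Before updating the packages, you can confirm the service has actually stopped (optional):

    systemctl is-active olcne-api-server.service

    The command prints inactive once the service is stopped.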
  2. (Operator Node) Update the Oracle Cloud Native Environment packages on the Operator Node. Enter y and press Enter when prompted.

    sudo dnf update -y olcnectl olcne-api-server olcne-utils

    Example Output:

    [oracle@ocne-operator ~]$ sudo dnf update -y olcnectl olcne-api-server olcne-utils
    Oracle Cloud Native Environment version 1.8 (x86_64)                        839 kB/s |  45 kB     00:00    
    Dependencies resolved.
    =======================================================================================================================
     Package                          Architecture           Version                     Repository                   Size
    =======================================================================================================================
    Upgrading:
     olcne-api-server                 x86_64                 1.8.0-2.el8                 ol8_olcne18                 9.6 M
     olcne-utils                      x86_64                 1.8.0-2.el8                 ol8_olcne18                  85 k
     olcnectl                         x86_64                 1.8.0-2.el8                 ol8_olcne18                 4.6 M
    
    Transaction Summary
    =======================================================================================================================
    Upgrade  3 Packages
    
    Total download size: 14 M
    Downloading Packages:
    (1/3): olcne-utils-1.8.0-2.el8.x86_64.rpm                                              2.2 MB/s |  85 kB     00:00    
    (2/3): olcnectl-1.8.0-2.el8.x86_64.rpm                                                  37 MB/s | 4.6 MB     00:00    
    (3/3): olcne-api-server-1.8.0-2.el8.x86_64.rpm                                          44 MB/s | 9.6 MB     00:00    
    -----------------------------------------------------------------------------------------------------------------------
    Total                                                                                   64 MB/s |  14 MB     00:00     
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                               1/1 
      Running scriptlet: olcnectl-1.8.0-2.el8.x86_64                                                                   1/1 
      Upgrading        : olcnectl-1.8.0-2.el8.x86_64                                                                   1/6 
      Upgrading        : olcne-utils-1.8.0-2.el8.x86_64                                                                2/6 
      Running scriptlet: olcne-api-server-1.8.0-2.el8.x86_64                                                           3/6 
      Upgrading        : olcne-api-server-1.8.0-2.el8.x86_64                                                           3/6 
      Running scriptlet: olcne-api-server-1.8.0-2.el8.x86_64                                                           3/6 
      Cleanup          : olcne-utils-1.7.5-17.el8.x86_64                                                               4/6 
      Cleanup          : olcnectl-1.7.5-17.el8.x86_64                                                                  5/6 
      Running scriptlet: olcne-api-server-1.7.5-17.el8.x86_64                                                          6/6 
      Cleanup          : olcne-api-server-1.7.5-17.el8.x86_64                                                          6/6 
      Running scriptlet: olcne-api-server-1.7.5-17.el8.x86_64                                                          6/6 
      Verifying        : olcne-api-server-1.8.0-2.el8.x86_64                                                           1/6 
      Verifying        : olcne-api-server-1.7.5-17.el8.x86_64                                                          2/6 
      Verifying        : olcne-utils-1.8.0-2.el8.x86_64                                                                3/6 
      Verifying        : olcne-utils-1.7.5-17.el8.x86_64                                                               4/6 
      Verifying        : olcnectl-1.8.0-2.el8.x86_64                                                                   5/6 
      Verifying        : olcnectl-1.7.5-17.el8.x86_64                                                                  6/6 
    
    Upgraded:
      olcne-api-server-1.8.0-2.el8.x86_64        olcne-utils-1.8.0-2.el8.x86_64        olcnectl-1.8.0-2.el8.x86_64       
    
    Complete!
  3. (Operator Node) Start the olcne-api-server service.

    sudo systemctl start olcne-api-server.service
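
    A quick status check confirms the upgraded service is running again (optional):

    systemctl is-active olcne-api-server.service

    The command prints active once the service is up.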

Upgrade the Kubernetes Nodes

Next, each of the Kubernetes nodes are upgraded with the new Oracle Cloud Native Environment packages.

The olcnectl command upgrades the Platform Agent on each node in the environment. If you are following the steps in the Luna Lab, the nodes to be upgraded are those in the module called mycluster. This module is part of the myenvironment environment, as defined in the myenvironment.yaml file.

  1. (On Operator Node) Review the myenvironment.yaml file to understand the configuration being used and verify which Kubernetes nodes will be upgraded. For the Luna Lab, these are the control-plane-nodes entry (ocne-control-01:8090) and the worker-nodes entry (ocne-worker-01:8090).

    cat myenvironment.yaml

    Example Output:

    [oracle@ocne-operator ~]$ cat myenvironment.yaml
    environments:
      - environment-name: myenvironment
        globals:
          api-server: 127.0.0.1:8091
          secret-manager-type: file
          olcne-ca-path: /etc/olcne/configs/certificates/production/ca.cert
          olcne-node-cert-path: /etc/olcne/configs/certificates/production/node.cert
          olcne-node-key-path:  /etc/olcne/configs/certificates/production/node.key
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes:
                - ocne-control-01.lv.vcn742f8c75.oraclevcn.com:8090
              worker-nodes:
                - ocne-worker-01.lv.vcn742f8c75.oraclevcn.com:8090
              selinux: enforcing
              restrict-service-externalip: true
              restrict-service-externalip-ca-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/ca.cert
              restrict-service-externalip-tls-cert: /etc/olcne/configs/certificates/restrict_external_ip/production/node.cert
              restrict-service-externalip-tls-key: /etc/olcne/configs/certificates/restrict_external_ip/production/node.key
  2. (On Operator Node) Upgrade the Platform Agent on all nodes in the myenvironment environment.

    Note: This will take a couple of minutes to complete.

    olcnectl environment update olcne --environment-name myenvironment

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl environment update olcne --environment-name myenvironment
    Taking backup of modules before update
    Backup of modules succeeded.
    Updating modules
    Update successful
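
    To spot-check that the Platform Agent was updated on a node, you can query its package version over SSH (a sketch; it assumes the agent is packaged as olcne-agent and uses the lab's ocne-control node):

    ssh oracle@ocne-control "rpm -q olcne-agent"

    The reported version should now be a 1.8 release.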

Upgrading the Kubernetes Cluster

The next step is to upgrade the Kubernetes cluster to Kubernetes Release 1.28.3.

  1. (On Operator Node) Use the olcnectl command to upgrade all of the Kubernetes nodes to the latest Kubernetes version available for Oracle Cloud Native Environment. In the Luna Lab, the nodes to be upgraded are defined in the mycluster section of the myenvironment.yaml file (see earlier): ocne-control and ocne-worker. Enter y when prompted and press Enter to initiate the upgrade.

    Note: This may take 3-5 minutes to complete.

    olcnectl module update --environment-name myenvironment --name mycluster --kube-version 1.28.3

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module update --environment-name myenvironment --name mycluster --kube-version 1.28.3 
    ? [WARNING] Update will shift your workload and some pods will lose data if they rely on local storage. Do you want to continue? Yes
    Taking backup of modules before update
    Backup of modules succeeded.
    Updating modules
    Update successful

    The completion of this command indicates that each node in the Kubernetes cluster has been successfully upgraded to the latest Kubernetes release and validated as healthy. This can be confirmed with a few simple tests, shown in the next step.
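
    If you perform this update in your own environment, a second terminal session on a control plane node lets you watch the rolling upgrade as each node is cordoned, drained, and upgraded in turn (optional):

    kubectl get nodes --watch

    Each node's VERSION column changes as its upgrade completes.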

Confirm the Kubernetes Nodes Have Been Upgraded

  1. (On Control Node) Confirm that Kubernetes has been upgraded to 1.28.3 (refer to the VERSION column).

    kubectl get nodes

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get nodes
    NAME              STATUS   ROLES           AGE     VERSION
    ocne-control-01   Ready    control-plane   33m   v1.28.3+3.el8
    ocne-worker-01    Ready    <none>          33m   v1.28.3+3.el8
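
    You can also query the client and server versions directly (optional check):

    kubectl version

    Both the client and the server should report v1.28.3.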
  2. (On Control Node) Confirm that all the Oracle Cloud Native Environment components are running as expected.

    kubectl get pods --all-namespaces

    Example Output:

    [oracle@ocne-control-01 ~]$ kubectl get pods --all-namespaces
    NAMESPACE              NAME                                          READY   STATUS    RESTARTS      AGE
    kube-system            coredns-5bcdd9fbb4-4lqtc                      1/1     Running   1             9m30s
    kube-system            coredns-5bcdd9fbb4-zjplg                      1/1     Running   1             9m29s
    kube-system            etcd-ocne-control-01                          1/1     Running   0             4m24s
    kube-system            kube-apiserver-ocne-control-01                1/1     Running   0             4m24s
    kube-system            kube-controller-manager-ocne-control-01       1/1     Running   0             4m24s
    kube-system            kube-flannel-ds-4l7hr                         1/1     Running   1             8m14s
    kube-system            kube-flannel-ds-bvzbb                         1/1     Running   1             8m21s
    kube-system            kube-proxy-gqkkm                              1/1     Running   1             5m23s
    kube-system            kube-proxy-v4h47                              1/1     Running   1             5m29s
    kube-system            kube-scheduler-ocne-control-01                1/1     Running   0             4m24s
    kubernetes-dashboard   kubernetes-dashboard-544b589cf-htwfn          1/1     Running   0             3m55s
    ocne-modules           verrazzano-module-operator-74f5db6ccd-k9pmf   1/1     Running   1             8m21s
    verrazzano-install     verrazzano-module-operator-5dd6c76f55-2t8cb   1/1     Running   2 (80s ago)   3m55s
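
    A quick way to surface any pods that did not come back cleanly after the upgrade is to filter on pod phase (optional; an empty result means every pod is Running or Succeeded):

    kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded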
  3. (On Operator Node) It is possible to get a more complete report of the Oracle Cloud Native Environment settings using the olcnectl module report command. The report can be returned either in YAML format or as a formatted table at the CLI. However, the formatted table requires that the terminal is configured to use UTF-8 encoding. If following the lab, this can be set as shown in the screenshot below.

    [Screenshot: terminal settings with UTF-8 character encoding selected]
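
    Alternatively, if the terminal emulator's encoding cannot be changed, exporting a UTF-8 locale in the shell session often has the same effect (a sketch; available locales vary by system):

    export LANG=en_US.UTF-8

    Run locale afterwards to confirm the setting took effect.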

  4. (On Operator Node) View the environment as a formatted table.

    olcnectl module report --environment-name myenvironment --name mycluster --children

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children
    ╭─────────────────────────────────────────────────────────────────────┬─────────────────────────╮
    │ myenvironment                                                       │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ mycluster                                                           │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ Property                                                            │ Current Value           │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ extra-node-operations-update                                        │ running                 │
    │ kubectl                                                             │                         │
    │ cloud-provider                                                      │                         │
    │ podnetworking                                                       │ running                 │
    │ module-operator                                                     │ running                 │
    │ externalip-webhook                                                  │ uninstalled             │
    │ extra-node-operations                                               │                         │
    │ status_check                                                        │ healthy                 │
    │ master:ocne-control-01:8090                                         │                         │
    │ worker:ocne-worker-01:8090                                          │                         │
    │ kubecfg                                                             │ file exist              │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ ocne-control-01:8090                                                │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    ...
    ...
    │ 10249/tcp                                                           │ closed                  │
    │ 10250/tcp                                                           │ closed                  │
    │ 10255/tcp                                                           │ closed                  │
    │ 8472/udp                                                            │ closed                  │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ service                                                             │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ containerd.service                                                  │ not enabled/not running │
    │ crio.service                                                        │ enabled/running         │
    │ kubelet.service                                                     │ enabled                 │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ kubecfg                                                             │ file exist              │
    │ kernel                                                              │ 4.12.0                  │
    │ package                                                             │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ kubeadm                                                             │ 1.28.3-3.el8            │
    │ kubectl                                                             │ 1.28.3-3.el8            │
    │ kubelet                                                             │ 1.28.3-3.el8            │
    │ helm                                                                │ 3.12.0-4.el8            │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ swap                                                                │ off                     │
    │ selinux                                                             │ permissive              │
    │ connectivity                                                        │                         │
    ╰─────────────────────────────────────────────────────────────────────┴─────────────────────────╯
    [oracle@ocne-operator ~]$
  5. (On Operator Node) The same information is also available in YAML format.

    olcnectl module report --environment-name myenvironment --name mycluster --children --format yaml

    Example Output:

    [oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children --format yaml
    Environments:
      myenvironment:
        ModuleInstances:
        - Name: mycluster
          Properties:
          - Name: podnetworking
            Value: running
          - Name: extra-node-operations
          - Name: extra-node-operations-update
            Value: running
          - Name: kubectl
          - Name: cloud-provider
          - Name: worker:ocne-worker-01:8090
          - Name: kubecfg
            Value: file exist
          - Name: status_check
            Value: healthy
          - Name: master:ocne-control-01:8090
          - Name: externalip-webhook
            Value: uninstalled
          - Name: module-operator
            Value: running
        - Name: ocne-control-01:8090
          Properties:
          - Name: ocne-control-01:8090
            Value: up
          - Name: swap
            Value: "off"
          - Name: container-images
    ..
    .. 
          - Name: kubecfg
            Value: file exist
          - Name: module
            Properties:
            - Name: br_netfilter
              Value: loaded
            - Name: conntrack
              Value: loaded
          - Name: networking
            Value: active
          - Name: firewall
            Properties:
            - Name: 10249/tcp
              Value: closed
            - Name: 10250/tcp
              Value: closed
            - Name: 10255/tcp
              Value: closed
            - Name: 8472/udp
              Value: closed
            - Name: 9100/tcp
              Value: closed
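
    Because the YAML report is plain text, you can also extract just the cluster health by piping it through grep (a convenience, using the status_check property shown above):

    olcnectl module report --environment-name myenvironment --name mycluster --format yaml | grep -A1 status_check

    This prints the status_check property and its value, healthy, without the rest of the report.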

    At this stage, if your Oracle Cloud Native Environment install has neither the Istio module nor the Oracle Cloud Infrastructure Cloud Controller Manager module installed, then no further steps are required.

    However, because this install has the Oracle Cloud Infrastructure Cloud Controller Manager module installed and configured, the next steps are required to ensure that it continues working correctly.

Summary

This concludes the walkthrough demonstrating how to upgrade an Oracle Cloud Native Environment install from Release 1.7 to Release 1.8.
