Use Quick Install to Deploy Oracle Cloud Native Environment

Introduction

Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud-native applications. The Kubernetes module is the core module. It deploys and manages containers and automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.

Oracle Cloud Native Environment Release 1.5.7 introduced the ability to use the Oracle Cloud Native Environment Platform CLI to perform a quick installation. This installation type uses the olcnectl provision command on the operator node. A single run of this command performs the following operations on the target nodes:

  • Generate CA Certificates
  • Copy the CA Certificates to each node
  • Set up the operating system on each node, including the network ports
  • Install the Oracle Cloud Native Environment software packages on each node
  • Start the Oracle Cloud Native Environment platform services (Platform API Server and Platform Agent)
  • Create an Oracle Cloud Native Environment
  • Create, validate, and install a Kubernetes module, which creates the Kubernetes cluster
  • Set up the Platform certificates under ~/.olcne on the operator node so you can access the environment using the olcnectl command

This tutorial describes how to perform a quick installation using the most straightforward steps to install Oracle Cloud Native Environment and a Kubernetes cluster using private CA Certificates. Oracle recommends you use your own CA Certificates for a production environment.

You can achieve more complex topologies by writing an Oracle Cloud Native Environment configuration file and passing it to the olcnectl provision command using the --config-file option, as sketched below. For more information on the syntax options the olcnectl provision command provides and on how to write a configuration file, refer to the Platform Command-Line Interface guide.
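
For illustration, a minimal configuration file might take roughly the following shape; the hostnames here are placeholders, and the authoritative schema is documented in the Platform Command-Line Interface guide:

    environments:
      - environment-name: myenvironment
        globals:
          api-server: operator.example.com:8091
          selinux: enforcing
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes:
                - control1.example.com:8090
              worker-nodes:
                - worker1.example.com:8090
                - worker2.example.com:8090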

Objectives

This tutorial demonstrates how to:

  • Set up the operator with the Platform CLI
  • Use the olcnectl provision command to perform a quick installation
  • Verify the installation completes successfully

Prerequisites

  • Minimum of 3 Oracle Linux instances

  • Each system should have Oracle Linux installed and configured with:

    • An oracle user account (used during the installation) with sudo access
    • Key-based SSH, also known as password-less SSH, between the hosts (a setup sketch follows this list)
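
If you are preparing hosts yourself rather than using the free lab environment, a minimal sketch for setting up key-based SSH from the operator node looks like the following; the hostnames assume the node names used later in this tutorial:

    # Generate a key pair on the operator node (no passphrase, suitable for lab use only)
    ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa

    # Copy the public key to each node in the deployment
    for host in ocne-control-01 ocne-worker-01 ocne-worker-02; do
      ssh-copy-id oracle@$host
    done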

Deploy Oracle Cloud Native Environment

Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
  3. Change into the working directory.

    cd linux-virt-labs/ocne
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yml
  5. Deploy the lab environment.

    ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e ocne_type=none

    The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which is located under the python3.6 modules.

    Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.

Set Up the Operator Node on Oracle Linux

These steps configure the ocne-operator node to perform a quick installation of Oracle Cloud Native Environment.

  1. Open a terminal and connect via SSH to the ocne-operator node.

    ssh oracle@<ip_address_of_node>
  2. Install the Oracle Cloud Native Environment release package.

    sudo dnf -y install oracle-olcne-release-el8
  3. Enable the current Oracle Cloud Native Environment repository.

    sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
  4. Disable all previous repository versions.

    sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_developer
  5. Install the Platform CLI.

    sudo dnf -y install olcnectl
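
    Optionally, confirm the Platform CLI package is in place before proceeding:

    rpm -q olcnectl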

Perform a Quick Install

The following steps describe the fastest method to set up a basic deployment of Oracle Cloud Native Environment and install a Kubernetes cluster. It requires a minimum of three nodes, one for each of the following roles:

  • Operator: A node that runs the Platform CLI and hosts the Platform API Server
  • Kubernetes Control Plane: At least one node to use as a Kubernetes control plane node
  • Kubernetes Worker: At least one node to use as a Kubernetes worker node

  1. Begin the installation.

    olcnectl provision \
    --api-server ocne-operator \
    --control-plane-nodes ocne-control-01 \
    --worker-nodes ocne-worker-01,ocne-worker-02 \
    --environment-name myenvironment \
    --name mycluster \
    --selinux enforcing \
    --yes

    Important Note: This operation can take 10-15 minutes to complete, and there will be no visible indication of anything occurring until it finishes.

    Where:

    • --api-server: the FQDN of the node where the installation sets up the Platform API Server
    • --control-plane-nodes: a comma-separated list of the FQDNs of the nodes that host the Platform Agent and take the Kubernetes control plane role
    • --worker-nodes: a comma-separated list of the FQDNs of the nodes that host the Platform Agent and take the Kubernetes worker role
    • --environment-name: identifies the environment
    • --name: sets the name of the Kubernetes module
    • --selinux enforcing: sets SELinux to enforcing (default) or permissive mode

    Note: Without the --yes option used above, executing this command displays a prompt that lists the changes to be made to the hosts and asks for confirmation.

    Important: In previous releases, the command syntax used the --master-nodes option rather than --control-plane-nodes. The older option is deprecated and prints the following message if used:

    Flag --master-nodes has been deprecated, Please migrate to --control-plane-nodes.

  2. Confirm the cluster installation.

    olcnectl module instances \
    --api-server ocne-operator:8091 \
    --environment-name myenvironment

    The output shows the nodes and Kubernetes module with a STATE of installed.
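
    For reference, the output resembles the following sketch; the instance names match the nodes you provisioned:

    INSTANCE                   MODULE      STATE
    mycluster                  kubernetes  installed
    ocne-control-01:8090       node        installed
    ocne-worker-01:8090        node        installed
    ocne-worker-02:8090        node        installed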

  3. Add the --update-config option to avoid using the --api-server flag in future Platform CLI commands.

    olcnectl module instances \
    --api-server ocne-operator:8091 \
    --environment-name myenvironment \
    --update-config
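
    With the Platform API Server address saved to your local configuration, subsequent commands can omit the --api-server flag, for example:

    olcnectl module instances \
    --environment-name myenvironment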
  4. Get more detailed information related to the deployment in YAML format.

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children \
    --format yaml

    Example Output:

    Environments:
      myenvironment:
        ModuleInstances:
        - Name: mycluster
          Properties:
          - Name: cloud-provider
          - Name: master:ocne-control:8090
          - Name: worker:ocne-worker:8090
          - Name: module-operator
            Value: running
          - Name: extra-node-operations-update
            Value: running
          - Name: status_check
            Value: healthy
          - Name: kubectl
          - Name: kubecfg
            Value: file exist
          - Name: podnetworking
            Value: running
          - Name: externalip-webhook
            Value: uninstalled
          - Name: extra-node-operations
        - Name: ocne-worker:8090
          Properties:
    
    ...
    
          - Name: module
            Properties:
            - Name: br_netfilter
              Value: loaded
            - Name: conntrack
              Value: loaded
          - Name: networking
            Value: active
          - Name: firewall
            Properties:
            - Name: 10255/tcp
              Value: closed
            - Name: 2381/tcp
              Value: closed
            - Name: 6443/tcp
              Value: closed
            - Name: 10250/tcp
              Value: closed
            - Name: 8472/udp
              Value: closed
            - Name: 10257/tcp
              Value: closed
            - Name: 10259/tcp
              Value: closed
            - Name: 10249/tcp
              Value: closed
            - Name: 9100/tcp
              Value: closed
          - Name: connectivity
          - Name: selinux
            Value: enforcing
  5. Get more detailed information related to the deployment in table format.

    You can also return the output in table format. However, this requires setting the Oracle Linux Terminal application's encoding to UTF-8 through its menu: Terminal -> Set Encoding -> Unicode -> UTF-8. Then, rerun the command without the --format yaml option.

    olcnectl module report \
    --environment-name myenvironment \
    --name mycluster \
    --children

    Example Output:

    ╭─────────────────────────────────────────────────────────────────────┬─────────────────────────╮
    │ myenvironment                                                       │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ mycluster                                                           │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ Property                                                            │ Current Value           │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ kubecfg                                                             │ file exist              │
    │ podnetworking                                                       │ running                 │
    │ module-operator                                                     │ running                 │
    │ extra-node-operations                                               │                         │
    │ extra-node-operations-update                                        │ running                 │
    │ worker:ocne-worker:8090                                             │                         │
    │ externalip-webhook                                                  │ uninstalled             │
    │ status_check                                                        │ healthy                 │
    │ kubectl                                                             │                         │
    │ cloud-provider                                                      │                         │
    │ master:ocne-control:8090                                            │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ ocne-control:8090                                                   │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    ...
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ swap                                                                │ off                     │
    │ package                                                             │                         │
    ├─────────────────────────────────────────────────────────────────────┼─────────────────────────┤
    │ helm                                                                │ 3.12.0-4.el8            │
    │ kubeadm                                                             │ 1.28.3-3.el8            │
    │ kubectl                                                             │ 1.28.3-3.el8            │
    │ kubelet                                                             │ 1.28.3-3.el8            │
    ╰─────────────────────────────────────────────────────────────────────┴─────────────────────────╯

Set up kubectl

  1. Set up the kubectl command on the control plane node. Because the remote command is wrapped in double quotes, $HOME and $(id -u) expand on the local machine before ssh runs; this works here because both systems use the oracle user account.

    ssh ocne-control-01 "mkdir -p $HOME/.kube; \
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; \
    sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
    export KUBECONFIG=$HOME/.kube/config; \
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc"
  2. Verify that kubectl works.

    ssh ocne-control-01 "kubectl get nodes"

    The output shows that each node in the cluster is ready, along with its current role and version.

    or

    ssh ocne-control-01 "kubectl get pods --all-namespaces"

    The output shows all Pods in a running status.
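
    For reference, the kubectl get nodes output should resemble the following sketch; ages and version strings will vary with the installed release:

    NAME              STATUS   ROLES           AGE   VERSION
    ocne-control-01   Ready    control-plane   15m   v1.28.3+3.el8
    ocne-worker-01    Ready    <none>          12m   v1.28.3+3.el8
    ocne-worker-02    Ready    <none>          12m   v1.28.3+3.el8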

Summary

This output from kubectl confirms the successful installation of Oracle Cloud Native Environment using the quick installation method.
