Run a Compact Oracle Cloud Native Environment

Introduction

Oracle Cloud Native Environment Release 1.5.0 introduced the compact deployment, which allows non-system Kubernetes workloads to run on control plane nodes. When you set --compact to true, the Platform API Server does not taint the control plane node(s), allowing Kubernetes to schedule non-system workloads on them.

Important: Production environments require leaving this option at its default value of false.
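
Once the cluster is up, you can observe the difference directly. The following check is a sketch; it assumes kubectl is already configured, which happens later in this tutorial:

    kubectl describe node ocne-compact | grep Taints

A compact node reports Taints: <none>, while a standard control plane node reports node-role.kubernetes.io/control-plane:NoSchedule.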

During this tutorial, you create, validate, and install a compact Oracle Cloud Native Environment deployment.

Objectives

  • Create a compact deployment
  • Verify deployment with a test project

Prerequisites

A single Oracle Linux instance installed and configured with:

  • An Oracle user account (used during the installation) with sudo access
  • Key-based SSH, also known as password-less SSH, between the hosts
  • Installation of Oracle Cloud Native Environment

Deploy Oracle Cloud Native Environment

Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
  3. Change into the working directory.

    cd linux-virt-labs/ocne
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yml
  5. Update the Oracle Cloud Native Environment configuration.

    cat << EOF | tee instances.yml > /dev/null
    compute_instances:
      1:
        instance_name: "ocne-compact"
        type: "operator"
    EOF
  6. Deploy the lab environment.

    ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e ocne_type=compact -e "@instances.yml"

    The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python under the python3.6 modules.

    Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.

Create a Platform CLI Configuration File

Using a YAML-based configuration file simplifies creating and managing environments and modules. The configuration file includes details about the environments and modules you want to create, saving you from repeatedly entering Platform CLI command options.

The --config-file option is a global option you can pass to any Platform CLI command. When you use it, olcnectl ignores all other command-line options except --force and uses only the values in the configuration file.
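
For comparison, creating an environment without a configuration file means repeating the global options on every command. A sketch using the same values as the configuration file created below:

    olcnectl environment create \
      --api-server 127.0.0.1:8091 \
      --environment-name myenvironment \
      --secret-manager-type file \
      --olcne-ca-path /etc/olcne/certificates/ca.cert \
      --olcne-node-cert-path /etc/olcne/certificates/node.cert \
      --olcne-node-key-path /etc/olcne/certificates/node.key \
      --update-config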

For information on manually creating a configuration file, see Using a Configuration File in the Oracle documentation.

  1. Open a terminal and connect via SSH to the ocne-compact node.

    ssh oracle@<ip_address_of_node>

    Note: (Optional) The audit logs on some Oracle Linux instances report the following SELinux policy issue, which may prevent Oracle Cloud Native Environment from deploying correctly in some situations. Examine the system log using sudo journalctl -xe, navigating with the keyboard arrow keys, to determine whether this error occurs in your environment. If you are not experiencing the error, you do not need to apply the workaround.

    Jun 16 19:03:51 ocne-node01 setroubleshoot[125840]: SELinux is preventing iptables from ioctl access on the directory /sys/fs/>

    Workaround:

    echo '(allow iptables_t cgroup_t (dir (ioctl)))' | sudo tee /root/local_iptables.cil
    sudo semodule -i /root/local_iptables.cil
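
    To confirm the policy module loaded, you can list the installed SELinux modules and filter for the name used in the .cil file above:

    sudo semodule -l | grep local_iptables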
  2. Create the configuration file.

    cat << EOF | tee ~/myenvironment.yaml > /dev/null
    environments:
      - environment-name: myenvironment
        globals:
          api-server: 127.0.0.1:8091
          secret-manager-type: file
          olcne-ca-path: /etc/olcne/certificates/ca.cert
          olcne-node-cert-path: /etc/olcne/certificates/node.cert
          olcne-node-key-path: /etc/olcne/certificates/node.key
          update-config: true
        modules:
          - module: kubernetes
            name: mycluster
            args:
              container-registry: container-registry.oracle.com/olcne
              control-plane-nodes: 
                - ocne-compact:8090
              selinux: enforcing
              restrict-service-externalip: false
              restrict-service-externalip-ca-cert: /tmp/certificates/ca/ca.cert
              restrict-service-externalip-tls-cert: /tmp/certificates/restrict_external_ip/node.cert
              restrict-service-externalip-tls-key: /tmp/certificates/restrict_external_ip/node.key
              compact: true
    EOF
    • The environment-name option sets the environment in which to deploy the modules
    • The module: option deploys the kubernetes module with a cluster name: of mycluster
    • The compact: option sets the deployment to run on the control plane node

    Note: The worker-nodes: option does not appear in this configuration file. You must remove it when setting compact: true because the module then deploys only to a single control plane node. For contrast, see the snippet below.
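
    A standard (non-compact) deployment would instead list separate worker nodes under args. A sketch with hypothetical host names:

    worker-nodes:            # hypothetical worker hosts
      - ocne-worker01:8090
      - ocne-worker02:8090
    compact: false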

Create the Environment and Kubernetes Module

  1. Create the environment.

    cd ~
    olcnectl environment create --config-file myenvironment.yaml
  2. Create the Kubernetes module.

    olcnectl module create --config-file myenvironment.yaml
  3. Validate the Kubernetes module.

    olcnectl module validate --config-file myenvironment.yaml

    This example should not report any validation errors. If errors do exist, the command output provides the syntax necessary to fix them and pass the validation check.

  4. Install the Kubernetes module.

    olcnectl module install --config-file myenvironment.yaml

    The deployment of Kubernetes to the nodes may take several minutes to complete.

    Note: If the module installation fails, it may report the error: Module "mycluster" never reached a healthy state. Last state: externalip-webhook is not healthy. This error is most likely due to an incorrect configuration in the environment or certificates, which leaves the pod with an incorrect taint status.

    Explaining taints in detail is beyond the scope of this tutorial. At the simplest level, a taint marks a node so that the scheduler repels any pods that do not tolerate it. Because this deployment has only one node, that node must remain untainted for any Pod to schedule onto it successfully. For more information, refer to Taints and Tolerations in the upstream Kubernetes documentation.
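
    If you need to troubleshoot, you can inspect the node's taints and, in test environments only, remove the control plane taint by hand once kubectl is configured (shown in the next section). A sketch; the trailing - removes the taint:

    kubectl describe node ocne-compact | grep Taints
    kubectl taint nodes ocne-compact node-role.kubernetes.io/control-plane:NoSchedule-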

  5. Validate the deployment of the Kubernetes module.

    olcnectl module instances --config-file myenvironment.yaml

    Example Output:

    INSTANCE       	MODULE    	STATE    
    10.0.0.140:8090	node      	installed
    mycluster      	kubernetes	installed
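
    For a more detailed view of the deployment, the Platform CLI also provides a report subcommand (a sketch; the exact output varies by release):

    olcnectl module report --config-file myenvironment.yaml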

Set Up the Kubernetes Command-Line Tool and Validate the Kubernetes Environment

  1. Set up the kubectl command.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
  2. Verify that kubectl works.

    kubectl get nodes

    Example Output:

    NAME           STATUS   ROLES    AGE     VERSION
    ocne-compact   Ready    <none>   2m38s   v1.28.3+3.el8
  3. Get a list of pods.

    kubectl get pods -A

    Example Output:

    NAMESPACE                      NAME                                             READY   STATUS    RESTARTS       AGE
    externalip-validation-system   externalip-validation-webhook-7f859947f5-cgv94   1/1     Running   0              2m32s
    kube-system                    coredns-5d7b65fffd-dqj6c                         1/1     Running   0              2m32s
    kube-system                    coredns-5d7b65fffd-wqxs7                         1/1     Running   0              2m32s
    kube-system                    etcd-ocne-compact                                1/1     Running   0              3m18s
    kube-system                    kube-apiserver-ocne-compact                      1/1     Running   0              3m16s
    kube-system                    kube-controller-manager-ocne-compact             1/1     Running   1 (3m3s ago)   3m16s
    kube-system                    kube-flannel-ds-lzh9w                            1/1     Running   0              2m32s
    kube-system                    kube-proxy-x62fl                                 1/1     Running   0              2m32s
    kube-system                    kube-scheduler-ocne-compact                      1/1     Running   0              3m16s
    kubernetes-dashboard           kubernetes-dashboard-547d4b479c-7mmcw            1/1     Running   0              2m32s
    ocne-modules                   verrazzano-module-operator-9bb46ff99-7xwbd       1/1     Running   0              2m32s

The output confirms that everything is up and running correctly and ready for you to deploy your locally developed applications to Oracle Cloud Native Environment for testing.
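
As a final smoke test, you can confirm that the untainted control plane node accepts regular workloads by scheduling a small test deployment. A sketch; the nginx image name is an example and assumes the node can pull from a public registry:

    kubectl create deployment test-nginx --image=nginx:stable
    kubectl get pods -o wide
    kubectl delete deployment test-nginx

The -o wide output shows the pod running on the ocne-compact node, confirming that the compact deployment schedules workloads as expected.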

Summary

Getting a compact installation of Oracle Cloud Native Environment up and running is only the start; it's a helpful tool for local testing and development. For more examples, check out Run Kubernetes on Oracle Linux.
