Run a Compact Oracle Cloud Native Environment


Introduction

The 1.5 release of Oracle Cloud Native Environment introduced the compact deployment, which allows non-system Kubernetes workloads to run on control plane nodes. When --compact is set to true, the Platform API Server does not taint the control plane node(s), so non-system Kubernetes workloads can be scheduled and run on them.

Important: This option must be set to false (the default) for production environments.

During this tutorial, you create, validate, and install a compact Oracle Cloud Native Environment deployment.

Objectives

  • Create a compact deployment
  • Verify deployment with a test project

Prerequisites

A single Oracle Linux instance provisioned with the following:

  • a non-root user with sudo permissions

Set up Lab Environment

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

Information: The free lab environment deploys Oracle Cloud Native Environment on the provided node, ready for creating environments. This deployment takes approximately 8-10 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.

  1. If not already connected, open a terminal and connect via ssh to the ocne-node01 system.

    ssh oracle@<ip_address_of_ol_node>

    Note: (Optional) The free lab environment automatically deploys an SELinux policy workaround for the issue shown below, which is reported in the audit logs and can prevent proper Oracle Cloud Native Environment deployments in some situations. You can examine the system log with sudo journalctl -xe, using the arrow keys to navigate, to determine whether this occurs in your environment. Running the workaround manually is not required in the provided free lab environment.

    Jun 16 19:03:51 ocne-node01 setroubleshoot[125840]: SELinux is preventing iptables from ioctl access on the directory /sys/fs/>

    Workaround:

    echo '(allow iptables_t cgroup_t (dir (ioctl)))' | sudo tee /root/local_iptables.cil
    sudo semodule -i /root/local_iptables.cil

Create a Platform CLI Configuration File

Using a YAML-based configuration file simplifies creating and managing environments and modules. The configuration file describes the environments and modules you want to create, saving you from repeatedly entering Platform CLI command options.

When using a configuration file, pass the --config-file option to any Platform CLI command; it is a global command option. When --config-file is present, olcnectl ignores all other command-line options except --force and uses only the values in the configuration file.

The free lab environment generates a configuration file to use in this exercise. For information on manually creating a configuration file, see Using a Configuration File in the Oracle documentation.

  1. View the configuration file contents.

    cat ~/myenvironment.yaml
    • The environment-name option sets the name of the environment in which to deploy modules.
    • The module: option deploys the kubernetes module, with a name: of mycluster for the cluster.
    • The compact: option directs the deployment to run on the control plane node.

    Note: The worker-nodes: option does not appear in this configuration file. You must remove this option when using compact:, because the module then deploys only to a single control plane node.
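For reference, a minimal sketch of what a compact configuration file might look like follows. The layout mirrors the documented configuration file schema, but the API server address, node name, port, and registry shown here are assumptions for illustration, not the lab's actual generated values:

```yaml
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091            # assumed Platform API Server address
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          control-plane-nodes: ocne-node01:8090   # assumed node name and port
          compact: true                     # leaves the control plane node untainted
```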

Create the Environment and Kubernetes Module

  1. Create the environment.

    cd ~
    olcnectl environment create --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-node01 ~]$ olcnectl environment create --config-file myenvironment.yaml
    Environment myenvironment created.
  2. Create the Kubernetes module.

    olcnectl module create --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-node01 ~]$ olcnectl module create --config-file myenvironment.yaml
    Modules created successfully.
  3. Validate the Kubernetes module.

    olcnectl module validate --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-node01 ~]$ olcnectl module validate --config-file myenvironment.yaml
    Validation of module mycluster succeeded.

    There are no validation errors in this example. If any errors exist, the command output provides the syntax needed to fix them and pass the validation check.

  4. Install the Kubernetes module.

    olcnectl module install --config-file myenvironment.yaml

    The deployment of Kubernetes to the nodes may take several minutes to complete.

    Example Output:

    [oracle@ocne-node01 ~]$ olcnectl module install --config-file myenvironment.yaml
    Modules installed successfully.

    Note: If the module install fails, it may report error: Module "mycluster" never reached a healthy state. Last state: externalip-webhook is not healthy. This error is usually due to an incorrect configuration of the environment or certificates, which leaves the node with an incorrect taint status.

    Explaining taints in detail is beyond the scope of this tutorial. At the simplest level, a node's taints determine which pods the node repels; a pod can only schedule onto a tainted node if it has a matching toleration. Because this deployment has only one node, that node must be untainted for any Pod to deploy onto it. For more information, refer to Taints and Tolerations in the upstream Kubernetes documentation.

  5. Validate the deployment of the Kubernetes module.

    olcnectl module instances --config-file myenvironment.yaml

    Example Output:

    [oracle@ocne-node01 ~]$ olcnectl module instances --config-file myenvironment.yaml
    INSTANCE       	MODULE    	STATE    
    10.0.0.140:8090	node      	installed
    mycluster      	kubernetes	installed
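The untainted control plane behavior described above can be checked directly once kubectl is configured (see the next section). This is a sketch, guarded so it degrades gracefully on machines without a reachable cluster; the node name ocne-node01 comes from this lab:

```shell
# Inspect the control plane node's taints (node name from this lab).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get node ocne-node01 -o jsonpath='{.spec.taints}{"\n"}' \
    || echo "no reachable cluster; run this on the lab node"
  # Compact deployment: prints nothing (no taints). A default control plane
  # node would instead show node-role.kubernetes.io/control-plane:NoSchedule.
else
  echo "kubectl not found; run this on the lab node"
fi
```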

Set Up the Kubernetes Command-Line Tool and Validate the Kubernetes Environment

  1. Set up the kubectl command.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
  2. Verify kubectl works.

    kubectl get nodes

    Example Output:

    [oracle@ocne-node01 ~]$ kubectl get nodes
    NAME          STATUS   ROLES    AGE     VERSION
    ocne-node01   Ready    <none>   2m38s   v1.28.3+3.el8
  3. Get a list of pods.

    kubectl get pods -A

    Example Output:

    [oracle@ocne-node01 ~]$ kubectl get pods -A
    NAMESPACE                      NAME                                             READY   STATUS    RESTARTS       AGE
    externalip-validation-system   externalip-validation-webhook-7f859947f5-cgv94   1/1     Running   0              2m32s
    kube-system                    coredns-5d7b65fffd-dqj6c                         1/1     Running   0              2m32s
    kube-system                    coredns-5d7b65fffd-wqxs7                         1/1     Running   0              2m32s
    kube-system                    etcd-ocne-node01                                 1/1     Running   0              3m18s
    kube-system                    kube-apiserver-ocne-node01                       1/1     Running   0              3m16s
    kube-system                    kube-controller-manager-ocne-node01              1/1     Running   1 (3m3s ago)   3m16s
    kube-system                    kube-flannel-ds-lzh9w                            1/1     Running   0              2m32s
    kube-system                    kube-proxy-x62fl                                 1/1     Running   0              2m32s
    kube-system                    kube-scheduler-ocne-node01                       1/1     Running   0              3m16s
    kubernetes-dashboard           kubernetes-dashboard-547d4b479c-7mmcw            1/1     Running   0              2m32s
    ocne-modules                   verrazzano-module-operator-9bb46ff99-7xwbd       1/1     Running   0              2m32s

The output confirms that everything is up and running correctly and that the cluster is ready for you to deploy your locally developed applications to Oracle Cloud Native Environment for testing.
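As an optional smoke test that workloads really do schedule onto the untainted control plane node, you could create and then remove a throwaway deployment. This is a hypothetical sketch; the hello-web name and nginx image are assumptions, not part of the lab, and the commands are guarded so they only run against a reachable cluster:

```shell
# Hypothetical smoke test: schedule a pod onto the single (untainted) node.
if command -v kubectl >/dev/null 2>&1 && kubectl get nodes >/dev/null 2>&1; then
  kubectl create deployment hello-web --image=nginx
  kubectl rollout status deployment/hello-web --timeout=120s
  kubectl get pods -o wide        # the pod should land on ocne-node01
  kubectl delete deployment hello-web
else
  echo "no reachable cluster; run this on the lab node"
fi
```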

Summary

Getting a compact version of Oracle Cloud Native Environment installed is only the start; it is a helpful tool for local testing and development. For more examples, check out Run Kubernetes on Oracle Linux.
