Run a Compact Oracle Cloud Native Environment
The 1.5 release of Oracle Cloud Native Environment introduced the compact deployment, which allows non-system Kubernetes workloads to run on control plane nodes. When the compact option is set to true, the Platform API Server does not taint the control plane node(s), allowing non-system Kubernetes workloads to be scheduled and run on them.
Important: This option must be set to false (the default) for production environments.
During this tutorial, you create, validate, and install a compact Oracle Cloud Native Environment deployment.
- Create a compact deployment
- Verify deployment with a test project
This tutorial requires a single Oracle Linux 8 or later system provisioned with the following:
- a non-root user with sudo permissions
Oracle Support Disclaimer
Oracle does not provide technical support for the sequence of steps in the following instructions because these steps refer to a deployment topology that is NOT intended for use in Production. This tutorial provides optional instructions as a convenience only to help facilitate developers testing services locally during development.
Oracle’s supported method for the development and management of cloud-native applications is Oracle Cloud Native Environment. For more information, see https://docs.oracle.com/en/operating-systems/olcne/.
Set up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
Information: The free lab environment deploys Oracle Cloud Native Environment on the provided node, ready for creating environments. This deployment takes approximately 8-10 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
If not already connected, open a terminal and connect via ssh to the ocne-node01 system.
Note: (Optional) The free lab environment automatically deploys an SELinux policy workaround that addresses the following issue reported in the audit logs. This error may prevent proper Oracle Cloud Native Environment deployments in some situations. You can examine the system log using
sudo journalctl -xe and navigate it with the keyboard arrow keys to determine whether this is occurring in your environment. Running the workaround is not required in the provided free lab environment.
Jun 16 19:03:51 ocne-node01 setroubleshoot: SELinux is preventing iptables from ioctl access on the directory /sys/fs/>
echo '(allow iptables_t cgroup_t (dir (ioctl)))' | sudo tee /root/local_iptables.cil
sudo semodule -i /root/local_iptables.cil
Create a Platform CLI Configuration File
Using a YAML-based configuration file simplifies creating and managing environments and modules. The configuration file includes details about the environments and modules you want to create, saving you from repeatedly entering Platform CLI command options.
When using a configuration file, pass the --config-file option to any Platform CLI command; it is a global command option. The --config-file option disables all other command-line options except --force: the olcnectl command ignores the other options and uses only the values in the configuration file.
The free lab environment generates a configuration file to use in this exercise. For information on manually creating a configuration file, see Using a Configuration File in the Oracle documentation.
View the configuration file contents.
- The environment-name option sets the environment into which the modules are deployed.
- The module: option deploys the kubernetes module with a cluster named mycluster.
- The compact: option sets the deployment to happen on the control plane node.
- The worker-nodes: option does not exist in this configuration file. Removing this option is required when using compact:, as the modules expect to deploy only to a single control plane node.
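Taken together, the options above produce a configuration file shaped roughly like the following sketch. The addresses, ports, and container registry shown are illustrative assumptions; the free lab environment generates the actual file for you.

```yaml
# Illustrative sketch of a compact deployment configuration file.
# Node address, ports, and registry are placeholders, not lab values.
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      selinux: enforcing
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          control-plane-nodes: ocne-node01:8090
          compact: true
```

Note the absence of a worker-nodes entry, matching the compact single-node topology.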
Create the Environment and Kubernetes Module
Create the environment.
cd ~
olcnectl environment create --config-file myenvironment.yaml
[oracle@ocne-node01 ~]$ olcnectl environment create --config-file myenvironment.yaml
Environment myenvironment created.
Create the Kubernetes module.
olcnectl module create --config-file myenvironment.yaml
[oracle@ocne-node01 ~]$ olcnectl module create --config-file myenvironment.yaml
Modules created successfully.
Validate the Kubernetes module.
olcnectl module validate --config-file myenvironment.yaml
[oracle@ocne-node01 ~]$ olcnectl module validate --config-file myenvironment.yaml
Validation of module mycluster succeeded.
There are no validation errors in this example. If any errors exist, the command output provides the necessary syntax to fix and pass the validation check.
Install the Kubernetes module.
olcnectl module install --config-file myenvironment.yaml
The deployment of Kubernetes to the nodes may take several minutes to complete.
[oracle@ocne-node01 ~]$ olcnectl module install --config-file myenvironment.yaml
Modules installed successfully.
Note: If the module install fails, it may report the error: Module "mycluster" never reached a healthy state. Last state: externalip-webhook is not healthy. This error is likely due to an incorrect configuration in the environment or certificates, which leaves the node with a taint that prevents the pod from being scheduled.
Explaining taints in detail is beyond the scope of this tutorial. At the simplest level, a node's taints determine which pods the scheduler will place on, or repel from, that node. Because this deployment has only one node, that node must be untainted for any Pod to deploy onto it. For more information on taints, refer to taints and tolerations in the upstream Kubernetes documentation.
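For illustration only: in a standard (non-compact) cluster where control plane nodes keep their taint, a pod needs a matching toleration to schedule there. A minimal, hypothetical pod spec fragment might look like the sketch below; the taint key shown is the common upstream one and is an assumption, not something this lab configures.

```yaml
# Hypothetical example: a pod tolerating the upstream control plane taint.
# Not needed in this compact deployment, where the node is left untainted.
apiVersion: v1
kind: Pod
metadata:
  name: taint-demo
spec:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:stable
```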
Validate the deployment of the Kubernetes module.
olcnectl module instances --config-file myenvironment.yaml
[oracle@ocne-node01 ~]$ olcnectl module instances --config-file myenvironment.yaml
INSTANCE         MODULE      STATE
10.0.0.140:8090  node        installed
mycluster        kubernetes  installed
Set Up the Kubernetes Command-Line Tool and Validate the Kubernetes Environment
Set up the kubectl command.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

Get a list of nodes.

kubectl get nodes
[oracle@ocne-node01 ~]$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
ocne-node01   Ready    <none>   5m30s   v1.22.8+1.el8
Get a list of pods.
kubectl get pods -A
[oracle@ocne-node01 ~]$ kubectl get pods -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
externalip-validation-system   externalip-validation-webhook-7988bff847-bkvxd   1/1     Running   0          5m47s
kube-system                    coredns-7cbc77dbc7-4rghr                         1/1     Running   0          5m47s
kube-system                    coredns-7cbc77dbc7-lp7x6                         1/1     Running   0          5m47s
kube-system                    etcd-ocne-node01                                 1/1     Running   0          6m7s
kube-system                    kube-apiserver-ocne-node01                       1/1     Running   0          6m7s
kube-system                    kube-controller-manager-ocne-node01              1/1     Running   0          6m7s
kube-system                    kube-flannel-ds-5spkp                            1/1     Running   0          5m47s
kube-system                    kube-proxy-cq27f                                 1/1     Running   0          5m47s
kube-system                    kube-scheduler-ocne-node01                       1/1     Running   0          6m7s
kubernetes-dashboard           kubernetes-dashboard-5d5d4947b5-n2vcd            1/1     Running   0          5m47s
The output confirms that everything is up and running correctly and ready for you to deploy your locally developed applications to Oracle Cloud Native Environment for testing.
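To exercise the "test project" objective, one simple check is to apply a small deployment and confirm it schedules onto the control plane node. The manifest below is a sketch; the test-nginx name and nginx:stable image are assumptions for illustration, not part of the lab.

```yaml
# Minimal test deployment; since the single node is untainted,
# the pod should schedule onto ocne-node01.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
```

Save it as test-nginx.yaml, apply it with kubectl apply -f test-nginx.yaml, then run kubectl get pods -o wide to confirm the pod is Running on ocne-node01.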
Installing a compact Oracle Cloud Native Environment is only the start, but it's a helpful tool for local testing and development. For more examples, check out Run Kubernetes on Oracle Linux.