Use Quick Install to Deploy Oracle Cloud Native Environment
Introduction
Oracle Cloud Native Environment is a fully integrated suite for the development and management of cloud native applications. The Kubernetes module is the core module: it deploys and manages containers, and also automatically installs and configures CRI-O, runC and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.
Oracle Cloud Native Environment Release 1.5.7 introduced the ability to perform a quick installation of Oracle Cloud Native Environment using the Platform CLI. This is done with the olcnectl provision command, run from an installation host (the operator node). The olcnectl provision command can perform the following operations on the target nodes:
- Generate CA Certificates.
- Copy the CA Certificates to each node.
- Set up the operating system on each node, including the network ports.
- Install the Oracle Cloud Native Environment software packages on each node.
- Start the Oracle Cloud Native Environment platform services (Platform API Server and Platform Agent).
- Create an Oracle Cloud Native Environment.
- Create, validate and install a Kubernetes module, which creates the Kubernetes cluster.
- Set up the Platform certificates in ~/.olcne on the operator node, so the environment can be accessed using the olcnectl command.
This tutorial describes how to perform a quick installation using the simplest possible series of steps to get Oracle Cloud Native Environment and a Kubernetes cluster installed. It uses private CA Certificates; for a production environment, it is recommended that you use your own CA Certificates.
More complex installation topologies can be achieved by writing your own Oracle Cloud Native Environment configuration file and passing it to the olcnectl provision command using the --config-file option. For more information on the syntax options provided by the olcnectl provision command and on how to write a configuration file, refer to the Platform Command-Line Interface guide.
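As a rough illustration of the configuration-file approach, the sketch below writes a file describing the same single-environment, single-cluster topology used in this tutorial and then passes it to olcnectl provision. The field names follow the Platform Command-Line Interface guide, but treat this as a shape to adapt, not a definitive schema; verify it against the guide for your release before use.

```shell
# Sketch only: field names are taken from the Platform CLI guide but should
# be verified against the documentation for your release.
cat > myenvironment.yaml << 'EOF'
environments:
  - environment-name: myenvironment
    globals:
      api-server: ocne-operator:8091
      selinux: enforcing
    modules:
      - module: kubernetes
        name: mycluster
        args:
          control-plane-nodes:
            - ocne-control:8090
          worker-nodes:
            - ocne-worker:8090
EOF

# Pass the file to the provision command instead of individual flags
# (shown as a comment so the file can be reviewed first):
# olcnectl provision --config-file myenvironment.yaml
```

Keeping the topology in a file makes larger deployments easier to review and repeat than a long command line.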
Objectives
This lab demonstrates how to:
- Perform a quick installation of Oracle Cloud Native Environment
- Set up a host with the Platform CLI (olcnectl) on the operator node
- Use the olcnectl provision command to perform a quick installation
- Install Oracle Cloud Native Environment Release 1.8 on a 3-node cluster
- Verify the install completed successfully
Prerequisites
The host systems used to perform the steps in this tutorial are listed in this section. To complete the tutorial successfully, you need:
3 Oracle Linux systems to use as:
- Operator node (ocne-operator)
- Kubernetes control plane node (ocne-control)
- Kubernetes worker node (ocne-worker)
Each system should have a minimum of the following installed:
- Latest Oracle Linux (x86_64) installed and running the Unbreakable Enterprise Kernel Release 7 (UEK R7)
This environment is pre-configured with:
- An oracle user account (used during the install) with sudo access
- Key-based SSH, also known as passwordless SSH, between the hosts
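The free lab environment already has key-based SSH in place. If you are building the environment yourself, one way to set it up from the operator node is sketched below; the node names match this lab, so substitute your own hosts as needed.

```shell
# On ocne-operator: generate a key pair if one does not already exist
# (empty passphrase for passwordless SSH).
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"

# Copy the public key to each of the other nodes for the oracle user.
for node in ocne-control ocne-worker; do
  ssh-copy-id "oracle@${node}"
done
```

After this, `ssh oracle@ocne-control` should connect without prompting for a password.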
Set Up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
This lab involves multiple systems, each of which requires different steps to be performed. Most of the operations are initiated from the ocne-operator node, so it is recommended to open a terminal window to this node at a minimum.
Open a terminal and connect via ssh to each node.
ssh oracle@<ip_address_of_ol_node>
Note: When a step says "(On all nodes)" in the lab, perform those actions on ocne-operator, ocne-control and ocne-worker. This approach avoids repetition, because the required action is identical on each node.
Setting up the Install Host (Operator node) on Oracle Linux
These steps configure the Oracle Linux host (operator node) so it can be used for the quick installation of Oracle Cloud Native Environment.
(On ocne-operator) Install the oracle-olcne-release-el8 release package.
sudo dnf -y install oracle-olcne-release-el8
(On ocne-operator) Enable the current Oracle Cloud Native Environment repository.
sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7
(On ocne-operator) Disable all previous repository versions.
sudo dnf config-manager --disable ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_developer
(On ocne-operator) Install the olcnectl software package.
sudo dnf -y install olcnectl
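Before moving on, you can optionally confirm that the package installed and that the expected repositories are enabled. The commands below use standard rpm and dnf tooling on Oracle Linux; the exact output will vary with your system.

```shell
# Confirm the Platform CLI package is installed
rpm -q olcnectl

# List the enabled olcne repositories
dnf repolist --enabled | grep olcne
```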
Perform a Quick Install
The following steps describe the fastest method to set up a basic deployment of Oracle Cloud Native Environment and install a Kubernetes cluster. It requires a minimum of three nodes, which are:
- Operator node: uses the Platform CLI (olcnectl) to perform the installation, and also hosts the Platform API Server.
- Kubernetes control plane: requires at least one node to use as a Kubernetes control plane node.
- Kubernetes worker: requires at least one node to use as a Kubernetes worker node.
(On ocne-operator) Use the olcnectl provision command to begin the installation.
olcnectl provision \
  --api-server ocne-operator \
  --control-plane-nodes ocne-control \
  --worker-nodes ocne-worker \
  --environment-name myenvironment \
  --name mycluster \
  --selinux enforcing
Important Note: This operation can take 10-15 minutes to complete, and there is no visible indication that anything is occurring until it finishes.
Where:
- --api-server - the FQDN of the node on which the Platform API Server should be set up.
- --control-plane-nodes - the FQDN of the nodes to set up with the Platform Agent and assign the Kubernetes control plane role. If more than one node is used, provide a comma-separated list.
- --worker-nodes - the FQDN of the nodes to set up with the Platform Agent and assign the Kubernetes worker role. If more than one node is used, provide a comma-separated list.
- --environment-name - used to identify the environment.
- --name - used to set the name of the Kubernetes module.
- --selinux enforcing - sets SELinux to enforcing or permissive mode (the default is permissive).
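To illustrate the comma-separated lists described above, a deployment with several worker nodes would look like the sketch below. The host names ocne-worker01 through ocne-worker03 are hypothetical examples, not part of this lab; note too that deployments with multiple control plane nodes additionally require a load balancer (see the Platform Command-Line Interface guide).

```shell
# Hypothetical worker hosts shown to demonstrate a comma-separated node list.
olcnectl provision \
  --api-server ocne-operator \
  --control-plane-nodes ocne-control \
  --worker-nodes ocne-worker01,ocne-worker02,ocne-worker03 \
  --environment-name myenvironment \
  --name mycluster \
  --selinux enforcing
```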
Note: When executing this command, a prompt is displayed that lists the changes to be made to the hosts and asks for confirmation. To avoid this prompt, use the --yes option, which sets the olcnectl provision command to assume that the answer to any confirmation prompt is affirmative (yes).
Important: The previous command syntax (--master-nodes) has been deprecated. If the old syntax is used during the install, the following message appears in the output: Flag --master-nodes has been deprecated, Please migrate to --control-plane-nodes.
Example Output: This shows example output when the --yes command switch is not used:
[oracle@ocne-operator ~]$ olcnectl provision \
> --api-server ocne-operator \
> --control-plane-nodes ocne-control \
> --worker-nodes ocne-worker \
> --environment-name myenvironment \
> --name mycluster
INFO[02/02/24 14:01:20] Generating certificate authority
INFO[02/02/24 14:01:21] Generating certificate for ocne-operator
INFO[02/02/24 14:01:21] Generating certificate for ocne-control
INFO[02/02/24 14:01:21] Generating certificate for ocne-worker
INFO[02/02/24 14:01:21] Creating directory "/etc/olcne/certificates/" on ocne-operator
INFO[02/02/24 14:01:21] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on ocne-operator
INFO[02/02/24 14:01:21] Copying local file at "certificates/ocne-operator/node.cert" to "/etc/olcne/certificates/node.cert" on ocne-operator
INFO[02/02/24 14:01:21] Copying local file at "certificates/ocne-operator/node.key" to "/etc/olcne/certificates/node.key" on ocne-operator
INFO[02/02/24 14:01:21] Creating directory "/etc/olcne/certificates/" on ocne-control
The authenticity of host 'ocne-control (10.0.0.151)' can't be established.
ECDSA key fingerprint is SHA256:k4Pjg4YEochdHGTv56IcyKn287gxA4XiHrs4tJSZK7Y.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
INFO[02/02/24 14:01:24] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on ocne-control
INFO[02/02/24 14:01:25] Copying local file at "certificates/ocne-control/node.cert" to "/etc/olcne/certificates/node.cert" on ocne-control
INFO[02/02/24 14:01:25] Copying local file at "certificates/ocne-control/node.key" to "/etc/olcne/certificates/node.key" on ocne-control
INFO[02/02/24 14:01:25] Creating directory "/etc/olcne/certificates/" on ocne-worker
The authenticity of host 'ocne-worker (10.0.0.152)' can't be established.
ECDSA key fingerprint is SHA256:FhyssLMlAuQEyMyYg43jz9iO8mHfXQ7eeifmaeKLhkE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
INFO[02/02/24 14:01:42] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on ocne-worker
INFO[02/02/24 14:01:42] Copying local file at "certificates/ocne-worker/node.cert" to "/etc/olcne/certificates/node.cert" on ocne-worker
INFO[02/02/24 14:01:42] Copying local file at "certificates/ocne-worker/node.key" to "/etc/olcne/certificates/node.key" on ocne-worker
? Apply api-server configuration on ocne-operator:
* Install oracle-olcne-release
* Enable olcne16 repo
* Install API Server
Add firewall port 8091/tcp
Proceed? yes/no(default) yes
? Apply control-plane configuration on ocne-control:
* Install oracle-olcne-release
* Enable olcne16 repo
* Configure firewall rule:
Add interface cni0 to trusted zone
Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp 6443/tcp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Set SELinux to enforcing
* Install and enable olcne-agent
Proceed? yes/no(default) yes
? Apply worker configuration on ocne-worker:
* Install oracle-olcne-release
* Enable olcne16 repo
* Configure firewall rule:
Add interface cni0 to trusted zone
Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Set SELinux to enforcing
* Install and enable olcne-agent
Proceed? yes/no(default) yes
Environment myenvironment created.
Modules created successfully.
Modules installed successfully.
INFO[02/02/24 14:12:08] Kubeconfig for instance "mycluster" in environment "myenvironment" written to kubeconfig.myenvironment.mycluster
(On ocne-operator) The Oracle Cloud Native Environment platform and Kubernetes cluster software is now installed and configured on all of the nodes. This can be confirmed using the following:
olcnectl module instances \
  --api-server ocne-operator:8091 \
  --environment-name myenvironment
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --api-server ocne-operator:8091 --environment-name myenvironment
INSTANCE           MODULE      STATE
ocne-worker:8090   node        installed
mycluster          kubernetes  installed
ocne-control:8090  node        installed
(On ocne-operator) To avoid having to use the --api-server flag in future olcnectl commands, run the previous command again, adding the --update-config flag.
olcnectl module instances \
  --api-server ocne-operator:8091 \
  --environment-name myenvironment \
  --update-config
(On ocne-operator) More detailed information related to the deployment can be obtained by using the olcnectl module report command.
olcnectl module report \
  --environment-name myenvironment \
  --name mycluster \
  --children \
  --format yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children --format yaml
Environments:
  myenvironment:
    ModuleInstances:
    - Name: mycluster
      Properties:
      - Name: cloud-provider
      - Name: master:ocne-control:8090
      - Name: worker:ocne-worker:8090
      - Name: module-operator
        Value: running
      - Name: extra-node-operations-update
        Value: running
      - Name: status_check
        Value: healthy
      - Name: kubectl
      - Name: kubecfg
        Value: file exist
      - Name: podnetworking
        Value: running
      - Name: externalip-webhook
        Value: uninstalled
      - Name: extra-node-operations
    - Name: ocne-worker:8090
      Properties:
      ...
      - Name: module
        Properties:
        - Name: br_netfilter
          Value: loaded
        - Name: conntrack
          Value: loaded
        - Name: networking
          Value: active
      - Name: firewall
        Properties:
        - Name: 10255/tcp
          Value: closed
        - Name: 2381/tcp
          Value: closed
        - Name: 6443/tcp
          Value: closed
        - Name: 10250/tcp
          Value: closed
        - Name: 8472/udp
          Value: closed
        - Name: 10257/tcp
          Value: closed
        - Name: 10259/tcp
          Value: closed
        - Name: 10249/tcp
          Value: closed
        - Name: 9100/tcp
          Value: closed
      - Name: connectivity
      - Name: selinux
        Value: enforcing
Note: It is possible to return this output in a table format. However, this requires that the Terminal application's encoding be set to UTF-8 (set the following in the Terminal application's menu: Terminal -> Set Encoding -> Unicode -> UTF-8). Then run the command again without the --format yaml option.
olcnectl module report \
  --environment-name myenvironment \
  --name mycluster \
  --children
Example Output:
[oracle@ocne-operator ~]$ olcnectl module report --environment-name myenvironment --name mycluster --children
╭──────────────────────────────┬───────────────╮
│ myenvironment                │               │
├──────────────────────────────┼───────────────┤
│ mycluster                    │               │
├──────────────────────────────┼───────────────┤
│ Property                     │ Current Value │
├──────────────────────────────┼───────────────┤
│ kubecfg                      │ file exist    │
│ podnetworking                │ running       │
│ module-operator              │ running       │
│ extra-node-operations        │               │
│ extra-node-operations-update │ running       │
│ worker:ocne-worker:8090      │               │
│ externalip-webhook           │ uninstalled   │
│ status_check                 │ healthy       │
│ kubectl                      │               │
│ cloud-provider               │               │
│ master:ocne-control:8090     │               │
├──────────────────────────────┼───────────────┤
│ ocne-control:8090            │               │
├──────────────────────────────┼───────────────┤
...
├──────────────────────────────┼───────────────┤
│ swap                         │ off           │
│ package                      │               │
├──────────────────────────────┼───────────────┤
│ helm                         │ 3.12.0-4.el8  │
│ kubeadm                      │ 1.28.3-3.el8  │
│ kubectl                      │ 1.28.3-3.el8  │
│ kubelet                      │ 1.28.3-3.el8  │
╰──────────────────────────────┴───────────────╯
Set up kubectl
(On ocne-control) Set up the kubectl command.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
(On ocne-control) Verify kubectl works.
kubectl get nodes
Example Output:
[oracle@ocne-control ~]$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
ocne-control   Ready    control-plane   74m   v1.28.3+3.el8
ocne-worker    Ready    <none>          73m   v1.28.3+3.el8
or
kubectl get pods --all-namespaces
Example Output:
[oracle@ocne-control ~]$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE
kube-system            coredns-5d7b65fffd-4w5vd                     1/1     Running   0             74m
kube-system            coredns-5d7b65fffd-hmqlw                     1/1     Running   0             74m
kube-system            etcd-ocne-control                            1/1     Running   0             74m
kube-system            kube-apiserver-ocne-control                  1/1     Running   0             74m
kube-system            kube-controller-manager-ocne-control         1/1     Running   0             74m
kube-system            kube-flannel-ds-95z2k                        1/1     Running   1 (73m ago)   74m
kube-system            kube-flannel-ds-z8qsw                        1/1     Running   0             74m
kube-system            kube-proxy-5pb6k                             1/1     Running   0             74m
kube-system            kube-proxy-rw44r                             1/1     Running   0             74m
kube-system            kube-scheduler-ocne-control                  1/1     Running   0             74m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-gxw55        1/1     Running   0             74m
ocne-modules           verrazzano-module-operator-9bb46ff99-pv5st   1/1     Running   1 (73m ago)   74m
This confirms that Oracle Cloud Native Environment is set up and running on the three nodes.
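If you want one further sanity check from ocne-control, the standard kubectl commands below (not specific to Oracle Cloud Native Environment) summarize the control plane endpoint and the workloads running on the new cluster:

```shell
# Show the API server and CoreDNS endpoints
kubectl cluster-info

# List the deployments running across all namespaces
kubectl get deployments --all-namespaces
```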