Deploy Oracle Cloud Native Environment
Introduction
Oracle Cloud Native Environment is a fully integrated suite for the development and management of cloud native applications. The Kubernetes module is the core module. It is used to deploy and manage containers and also automatically installs and configures CRI-O, runC and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster. The runtime may be either runC or Kata Containers.
Objectives
This lab demonstrates how to:
- Install Oracle Cloud Native Environment on a 3-node cluster
- Configure x.509 Private CA Certificates
- Configure Oracle Cloud Native Environment on a 3-node cluster
- Verify the install completed successfully
Prerequisites
This section lists the host systems used to perform the steps in this tutorial. To complete the lab successfully, you need:
3 Oracle Linux systems to use as:
- Operator node (ocne-operator)
- Kubernetes control plane node (ocne-control)
- Kubernetes worker node (ocne-worker)
Each system should have a minimum of the following installed:
- Latest Oracle Linux (x86_64) installed and running the Unbreakable Enterprise Kernel Release 7 (UEK R7)
This environment is pre-configured with the following:
- An Oracle user account with 'sudo' access
- Key-based SSH, also known as passwordless SSH, between the instances
Set Up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
This lab involves multiple systems, each of which requires different steps. We recommend starting by opening three terminal windows and connecting to the ocne-operator, ocne-control, and ocne-worker nodes. This avoids the need to repeatedly log in and out.
Open a terminal and connect via ssh to each of the three nodes.
ssh oracle@<ip_address_of_ol_node>
Note: When a step says "(On all nodes)" in the lab, perform those actions on ocne-operator, ocne-control, and ocne-worker. This approach avoids repetition because the required action is identical on each node.
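Because you will connect to each node repeatedly, an SSH client configuration with host aliases can save typing. A minimal sketch of a `~/.ssh/config` fragment; the IP addresses below are placeholders, so substitute the addresses of your own instances:

```text
Host ocne-operator
    HostName 192.0.2.10
    User oracle

Host ocne-control
    HostName 192.0.2.11
    User oracle

Host ocne-worker
    HostName 192.0.2.12
    User oracle
```

With this in place, `ssh ocne-control` connects directly without typing the user and IP address each time.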
(Optional) Update Oracle Linux
(On all nodes) Make sure Oracle Linux is up to date.
sudo dnf -y update
This may take a few minutes to complete; running this step on each node in parallel saves time.
(On all nodes) Reboot and reconnect.
sudo reboot
(On all nodes) After the system reboots, reconnect to the node(s) via ssh.
ssh oracle@<ip_address_of_ol_node>
Install and Enable the Oracle Cloud Native Environment Yum Repository
(On all nodes) Install the yum repository.
sudo dnf -y install oracle-olcne-release-el8
(On all nodes) Enable the current Oracle Cloud Native Environment repository.
sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_UEKR7
(On all nodes) Disable all previous repository versions.
sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_olcne15 ol8_olcne16 ol8_olcne17 ol8_developer
Install and Enable Chrony
(On all nodes) If not already available on the system, install and enable the chrony service.
Check if chrony is installed.
sudo dnf list --installed chrony
If not installed, install chrony.
sudo dnf -y install chrony
sudo systemctl enable --now chronyd
Note: The free lab environment already has the chrony (time) service installed and configured.
Disable Swap
(On all nodes) Disable swap on all the nodes.
sudo swapoff -a
sudo sed -i '/swap/ s/^#*/#/' /etc/fstab
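The sed expression above comments out any /etc/fstab entry containing "swap" by prefixing it with `#`. A quick illustration of its effect on a sample fstab line (the device path here is hypothetical):

```shell
# Sample fstab swap entry (hypothetical device path).
line='/dev/mapper/ol-swap none swap defaults 0 0'

# Same substitution as above: on lines matching "swap", replace any run of
# leading "#" characters (including none) with a single "#".
echo "$line" | sed '/swap/ s/^#*/#/'
# → #/dev/mapper/ol-swap none swap defaults 0 0
```

Because `^#*` also matches an existing `#`, the substitution is idempotent: re-running it leaves an already-commented line unchanged.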
Configure the Oracle Linux Firewall
Note: The firewalld service is installed and running by default on Oracle Linux.
(On ocne-operator) Set the firewall rules for the operator node.
sudo firewall-cmd --add-port=8091/tcp --permanent
sudo firewall-cmd --reload
(On ocne-control) Set the firewall rules for the control plane node(s).
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --reload
(On ocne-control) Add the following ports, which are used for High Availability and are required to pass validation.
sudo firewall-cmd --add-port=10251/tcp --permanent
sudo firewall-cmd --add-port=10252/tcp --permanent
sudo firewall-cmd --add-port=2379/tcp --permanent
sudo firewall-cmd --add-port=2380/tcp --permanent
sudo firewall-cmd --reload
(On ocne-worker) Set the firewall rules for the worker node(s).
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --reload
Load the Bridge Filtering Module
(On ocne-control and ocne-worker) Enable and load the module.
sudo modprobe br_netfilter
sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
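Loading br_netfilter makes traffic crossing the Linux bridge visible to iptables, which Kubernetes relies on for pod networking rules. The module only takes effect for packet filtering when the matching sysctls are enabled; the Oracle Cloud Native Environment deployment may configure these automatically, so the fragment below is illustrative only (the file name is an assumption):

```text
# /etc/sysctl.d/br_netfilter.conf (illustrative; the platform installer may
# set these values for you during deployment)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

If set manually, apply the values with `sudo sysctl --system`.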
Set up the Operator Node
The operator node performs and manages the deployment of environments, including deploying the Kubernetes cluster. An operator node may be a node in the Kubernetes cluster or a separate host, such as in this tutorial. Install the Oracle Cloud Native Environment Platform CLI, Platform API Server, and utilities on the operator node.
(On ocne-operator) Install the Platform CLI, Platform API Server and utilities.
sudo dnf -y install olcnectl olcne-api-server olcne-utils
(On ocne-operator) Enable the olcne-api-server service, but do not start it.
sudo systemctl enable olcne-api-server.service
Set up the Kubernetes Nodes
The Kubernetes control plane and worker nodes contain the Oracle Cloud Native Environment Platform Agent and utility packages.
(On ocne-control and ocne-worker) Install the Platform Agent package and utilities.
sudo dnf -y install olcne-agent olcne-utils
(On ocne-control and ocne-worker) Enable the olcne-agent service, but do not start it.
sudo systemctl enable olcne-agent.service
The above steps complete the initial setup and software installation for each node.
(Optional) Proxy Server Configuration
If using a Proxy Server, configure it with CRI-O on each Kubernetes node.
Note: This is not required in the free lab environment.
(On all nodes) Create the CRI-O systemd drop-in directory.
sudo mkdir /etc/systemd/system/crio.service.d
(On all nodes) Open the proxy configuration file with vi and set it to 'insert' mode.
sudo vi /etc/systemd/system/crio.service.d/proxy.conf
(On all nodes) Substitute the appropriate proxy values for those in your environment using the example file below.
[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
Environment="NO_PROXY=.example.com,192.0.2.*"
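When a proxy is in use, NO_PROXY typically also needs to exempt localhost and the cluster's internal service and pod networks, otherwise in-cluster traffic gets sent to the proxy. A hedged sketch of a fuller drop-in; the two CIDRs are common Kubernetes defaults, not values taken from this lab, so verify them against your deployment:

```text
[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
# 10.96.0.0/12 and 10.244.0.0/16 are illustrative service/pod CIDRs.
Environment="NO_PROXY=localhost,127.0.0.1,.example.com,192.0.2.*,10.96.0.0/12,10.244.0.0/16"
```

After editing a systemd drop-in file, run `sudo systemctl daemon-reload` so systemd picks up the change.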
Set up X.509 Private CA Certificates
On the operator node, use the olcnectl certificates distribute command to generate and distribute a private CA and certificates for the nodes.
(On ocne-operator) Add the oracle user to the olcne group.
sudo usermod -a -G olcne oracle
(On ocne-operator) Log off from the operator node by typing exit, then ssh back onto the operator node so the new group membership takes effect.
(On ocne-operator) Generate and distribute the node certificates.
olcnectl certificates distribute --nodes ocne-operator,ocne-control-01,ocne-worker-01
Example Output:
[oracle@ocne-operator ~]$ olcnectl certificates distribute --nodes ocne-operator,ocne-control-01,ocne-worker-01
INFO[12/02/24 14:59:37] Generating certificate authority
INFO[12/02/24 14:59:37] Generating certificate for ocne-operator
INFO[12/02/24 14:59:37] Generating certificate for ocne-control-01
INFO[12/02/24 14:59:37] Generating certificate for ocne-worker-01
INFO[12/02/24 14:59:38] Creating directory "/etc/olcne/certificates/" on ocne-operator
INFO[12/02/24 14:59:38] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on ocne-operator
INFO[12/02/24 14:59:38] Copying local file at "certificates/ocne-operator/node.cert" to "/etc/olcne/certificates/node.cert" on ocne-operator
INFO[12/02/24 14:59:38] Copying local file at "certificates/ocne-operator/node.key" to "/etc/olcne/certificates/node.key" on ocne-operator
INFO[12/02/24 14:59:38] Creating directory "/etc/olcne/certificates/" on ocne-control-01
INFO[12/02/24 14:59:38] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on ocne-control-01
INFO[12/02/24 14:59:38] Copying local file at "certificates/ocne-control-01/node.cert" to "/etc/olcne/certificates/node.cert" on ocne-control-01
INFO[12/02/24 14:59:38] Copying local file at "certificates/ocne-control-01/node.key" to "/etc/olcne/certificates/node.key" on ocne-control-01
INFO[12/02/24 14:59:38] Creating directory "/etc/olcne/certificates/" on ocne-worker-01
INFO[12/02/24 14:59:39] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on ocne-worker-01
INFO[12/02/24 14:59:39] Copying local file at "certificates/ocne-worker-01/node.cert" to "/etc/olcne/certificates/node.cert" on ocne-worker-01
INFO[12/02/24 14:59:39] Copying local file at "certificates/ocne-worker-01/node.key" to "/etc/olcne/certificates/node.key" on ocne-worker-01
--nodes: Provide the fully qualified domain name (FQDN), hostname, or IP address of your operator, control plane, and worker nodes.
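As the example output shows, the command generates one shared CA under certificates/ca/ and a per-node directory holding that node's certificate and key. A small sketch of how the --nodes list maps to that local layout (illustration only; it does not create real certificates):

```shell
# The comma-separated --nodes argument, as used above.
nodes="ocne-operator,ocne-control-01,ocne-worker-01"

# One shared CA certificate, plus a node.cert/node.key pair per listed node.
echo "certificates/ca/ca.cert"
for n in $(echo "$nodes" | tr ',' ' '); do
  echo "certificates/$n/node.cert"
  echo "certificates/$n/node.key"
done
```

These local files are then copied to /etc/olcne/certificates/ on each node, as seen in the output above.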
Set up X.509 Certificates for the ExternalIPs Kubernetes Service
The externalip-validation-webhook-service Kubernetes service requires X.509 certificates to be set up prior to deploying Kubernetes.
(On ocne-operator) Generate the certificates.
olcnectl certificates generate \
--nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
--cert-dir $HOME/certificates/restrict_external_ip/ \
--byo-ca-cert $HOME/certificates/ca/ca.cert \
--byo-ca-key $HOME/certificates/ca/ca.key \
--one-cert
--byo-ca-*: These options use the previously created CA certificate and key.
Note: The $HOME variable represents the location where this example executes the olcnectl certificates generate command. However, this can be changed to any location of your choice using the --cert-dir option (see the documentation for more details).
Bootstrap the Platform API Server
(On ocne-operator) Configure the Platform API Server to use the certificates.
sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-component api-server
Example Output:
[oracle@ocne-operator ~]$ sudo /etc/olcne/bootstrap-olcne.sh \
> --secret-manager-type file \
> --olcne-component api-server
● olcne-api-server.service - API server for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-api-server.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-api-server.service.d
           └─10-auth.conf
   Active: active (running) since Mon 2024-02-12 13:44:11 GMT; 2s ago
 Main PID: 53738 (olcne-api-serve)
    Tasks: 6 (limit: 202967)
   Memory: 10.1M
   CGroup: /system.slice/olcne-api-server.service
           └─53738 /usr/libexec/olcne-api-server -i /etc/olcne/modules --secret-manager-type file

Feb 12 13:44:11 ocne-operator systemd[1]: Started API server for Oracle Linux Cloud Native Environments.
Feb 12 13:44:11 ocne-operator olcne-api-server[53738]: time=12/02/24 13:44:11 level=info msg=Api server listening on: 8091
(On ocne-operator) Confirm the Platform API Server is running.
sudo systemctl status olcne-api-server.service
Example output:
[oracle@ocne-operator ~]$ sudo systemctl status olcne-api-server.service
● olcne-api-server.service - API server for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-api-server.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-api-server.service.d
           └─10-auth.conf
   Active: active (running) since Mon 2024-02-12 13:44:11 GMT; 1min 40s ago
 Main PID: 53738 (olcne-api-serve)
    Tasks: 6 (limit: 202967)
   Memory: 34.0M
   CGroup: /system.slice/olcne-api-server.service
           └─53738 /usr/libexec/olcne-api-server -i /etc/olcne/modules --secret-manager-type file

Feb 12 13:44:11 ocne-operator systemd[1]: Started API server for Oracle Linux Cloud Native Environments.
Feb 12 13:44:11 ocne-operator olcne-api-server[53738]: time=12/02/24 13:44:11 level=info msg=Api server listening on: 8091
(On ocne-operator) Press 'q' to exit the process and continue to the next step.
Bootstrap the Platform Agents
(On ocne-control and ocne-worker) Run the bootstrap script to configure the Platform Agent to use the certificates.
sudo /etc/olcne/bootstrap-olcne.sh \
--secret-manager-type file \
--olcne-component agent
Example Output:
[oracle@ocne-control-01 ~]$ sudo /etc/olcne/bootstrap-olcne.sh \
> --secret-manager-type file \
> --olcne-component agent
● olcne-agent.service - Agent for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-agent.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-agent.service.d
           └─10-auth.conf
   Active: active (running) since Mon 2024-02-12 13:51:34 GMT; 2s ago
 Main PID: 53138 (olcne-agent)
    Tasks: 7 (limit: 202967)
   Memory: 6.9M
   CGroup: /system.slice/olcne-agent.service
           └─53138 /usr/libexec/olcne-agent --secret-manager-type file

Feb 12 13:51:34 ocne-control-01 systemd[1]: Started Agent for Oracle Linux Cloud Native Environments.
Feb 12 13:51:34 ocne-control-01 olcne-agent[53138]: time=12/02/24 13:51:34 level=info msg=Started server on[::]:8090
(On ocne-control and ocne-worker) Confirm the Platform Agent is running.
sudo systemctl status olcne-agent.service
Example output:
[oracle@ocne-control-01 ~]$ sudo systemctl status olcne-agent.service
● olcne-agent.service - Agent for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-agent.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-agent.service.d
           └─10-auth.conf
   Active: active (running) since Mon 2024-02-12 13:51:34 GMT; 2min 28s ago
 Main PID: 53138 (olcne-agent)
    Tasks: 7 (limit: 202967)
   Memory: 14.8M
   CGroup: /system.slice/olcne-agent.service
           └─53138 /usr/libexec/olcne-agent --secret-manager-type file

Feb 12 13:51:34 ocne-control-01 systemd[1]: Started Agent for Oracle Linux Cloud Native Environments.
Feb 12 13:51:34 ocne-control-01 olcne-agent[53138]: time=12/02/24 13:51:34 level=info msg=Started server on[::]:8090
Create a Platform CLI Configuration File
Administrators can use a configuration file to simplify creating and managing environments and modules. The configuration file, written in valid YAML syntax, includes all information about the environments and modules to create. Using a configuration file saves repeated entries of Platform CLI command options.
During lab deployment, a configuration file is automatically generated and ready to use in the exercise. More information on manually creating a configuration file is available in the documentation at Using a Configuration File.
(On ocne-operator) View the configuration file contents.
cat ~/myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ cat ~/myenvironment.yaml
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          control-plane-nodes:
            - ocne-control-01:8090
          worker-nodes:
            - ocne-worker-01:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /home/oracle/certificates/ca/ca.cert
          restrict-service-externalip-tls-cert: /home/oracle/certificates/restrict_external_ip/node.cert
          restrict-service-externalip-tls-key: /home/oracle/certificates/restrict_external_ip/node.key
Create the Environment and Kubernetes Module
(On ocne-operator) Create the environment.
cd ~
olcnectl environment create --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl environment create --config-file myenvironment.yaml Environment myenvironment created.
(On ocne-operator) Create the Kubernetes module.
olcnectl module create --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module create --config-file myenvironment.yaml Modules created successfully.
(On ocne-operator) Validate the Kubernetes module.
olcnectl module validate --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module validate --config-file myenvironment.yaml Validation of module mycluster succeeded.
In this example, there are no validation errors. If there are any errors, the commands required to fix the nodes are provided as output of this command.
(On ocne-operator) Install the Kubernetes module.
olcnectl module install --config-file myenvironment.yaml
The deployment of Kubernetes to the nodes may take several minutes to complete.
Example Output:
[oracle@ocne-operator ~]$ olcnectl module install --config-file myenvironment.yaml Modules installed successfully.
(On ocne-operator) Validate the deployment of the Kubernetes module.
olcnectl module instances --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --config-file myenvironment.yaml
INSTANCE              MODULE      STATE
mycluster             kubernetes  installed
ocne-control-01:8090  node        installed
ocne-worker-01:8090   node        installed
Set up kubectl
(On ocne-control) Set up the kubectl command.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
(On ocne-control) Verify kubectl works.
kubectl get nodes
Example Output:
[oracle@ocne-control ~]$ kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
ocne-control-01   Ready    control-plane   3m31s   v1.28.3+3.el8
ocne-worker-01    Ready    <none>          2m37s   v1.28.3+3.el8