Deploy Oracle Cloud Native Environment
Introduction
Oracle Cloud Native Environment is a fully integrated suite for developing and managing cloud-native applications. The Kubernetes module is the core module. It deploys and manages containers and automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster, which may be either runC or Kata Containers.
Objectives
This tutorial demonstrates how to:
- Install Oracle Cloud Native Environment
- Configure x.509 Private CA Certificates
- Configure Oracle Cloud Native Environment
- Verify the install completes successfully
Prerequisites
A minimum of four Oracle Linux instances (this tutorial uses an operator node, one control plane node, and two worker nodes)
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts (a minimal setup sketch appears after this list)
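If key-based SSH is not already configured, a minimal setup run from the operator host might look like the following sketch. It assumes the oracle account and the hostnames used later in this tutorial; adjust both for your environment.

# Generate a key pair on the operator node (press Enter to accept the defaults).
ssh-keygen -t rsa -b 4096

# Copy the public key to each of the other hosts.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  ssh-copy-id oracle@$host
done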
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e ocne_type=none
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Update Oracle Linux
Open a terminal and connect via SSH to the ocne-operator node.
ssh oracle@<ip_address_of_node>
Update Oracle Linux and reboot all the nodes.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo dnf -y update; \
    sudo reboot"
done
Depending on the number of packages to upgrade, this step may take a while to complete.
Reconnect to the ocne-operator via ssh after the reboot.
ssh oracle@<ip_address_of_node>
Install and Enable the Oracle Cloud Native Environment Yum Repository
Install the Yum repository, enable the current Oracle Cloud Native Environment repository, and disable the previous versions on all the nodes.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo dnf -y install oracle-olcne-release-el8; \
    sudo dnf config-manager --enable ol8_olcne18 ol8_addons ol8_baseos_latest ol8_appstream ol8_UEKR7; \
    sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_olcne15 ol8_olcne16 ol8_olcne17 ol8_developer"
done
Install and Enable Chrony
Install and enable the chrony service on all the nodes.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo dnf -y install chrony; \
    sudo systemctl enable --now chronyd"
done
If the chrony package already exists on the system, then dnf reports Nothing to do.
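To optionally confirm that chronyd is synchronizing time, query its tracking status on any node. The chronyc tracking command is a standard chrony utility; this check is not a required tutorial step.

ssh ocne-control-01 "chronyc tracking"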
Disable Swap
Disable swap on all the nodes.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo swapoff -a; \
    sudo sed -i '/swap/ s/^#*/#/' /etc/fstab"
done
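To optionally verify that swap is off, swapon --show prints nothing when no swap devices are active. This check mirrors the tutorial's loop style but is not a required step.

for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host "sudo swapon --show"
done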
Configure the Oracle Linux Firewall
The firewalld service is installed and running by default on Oracle Linux.
Set the firewall rules for the operator node.
sudo firewall-cmd --add-port=8091/tcp --permanent
sudo firewall-cmd --reload
Set the firewall rules for the control plane node(s).
ssh ocne-control-01 \
  "sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent; \
  sudo firewall-cmd --add-port=8090/tcp --permanent; \
  sudo firewall-cmd --add-port=10250/tcp --permanent; \
  sudo firewall-cmd --add-port=10255/tcp --permanent; \
  sudo firewall-cmd --add-port=8472/udp --permanent; \
  sudo firewall-cmd --add-port=6443/tcp --permanent; \
  sudo firewall-cmd --reload"
Add the following to the control plane node(s) to ensure high availability and pass validation tests.
ssh ocne-control-01 \
  "sudo firewall-cmd --add-port=10251/tcp --permanent; \
  sudo firewall-cmd --add-port=10252/tcp --permanent; \
  sudo firewall-cmd --add-port=2379/tcp --permanent; \
  sudo firewall-cmd --add-port=2380/tcp --permanent; \
  sudo firewall-cmd --reload"
Set the firewall rules for the worker node(s).
for host in ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent; \
    sudo firewall-cmd --add-port=8090/tcp --permanent; \
    sudo firewall-cmd --add-port=10250/tcp --permanent; \
    sudo firewall-cmd --add-port=10255/tcp --permanent; \
    sudo firewall-cmd --add-port=8472/udp --permanent; \
    sudo firewall-cmd --reload"
done
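If you want to confirm the rules took effect, firewall-cmd can list the ports currently open in the default zone on each node. This optional verification follows the tutorial's loop style.

for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host "sudo firewall-cmd --list-ports"
done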
Load the Bridge Filtering Module
Enable and load the module on the control plane and worker nodes.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo modprobe br_netfilter; \
    sudo sh -c 'echo "\""br_netfilter"\"" > /etc/modules-load.d/br_netfilter.conf'"
done
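To optionally verify the module is loaded, lsmod lists the kernel modules currently in memory; grep filters for the bridge filter entry.

for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host "lsmod | grep br_netfilter"
done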
Set up the Operator Node
The operator node performs and manages environment deployments, including the Kubernetes cluster. It may be a node in the Kubernetes cluster or a separate host, as in this tutorial. Install the Oracle Cloud Native Environment Platform CLI, Platform API Server, and utilities on the operator node.
Install the Platform CLI, Platform API Server, and utilities.
sudo dnf -y install olcnectl olcne-api-server olcne-utils
Enable the API server service, but do not start it.
sudo systemctl enable olcne-api-server.service
Set up the Kubernetes Nodes
The Kubernetes control plane and worker nodes contain the Oracle Cloud Native Environment Platform Agent and utility packages.
Install the Platform Agent package and utilities.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo dnf -y install olcne-agent olcne-utils"
done
Enable the agent service, but do not start it.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo systemctl enable olcne-agent.service"
done
These initial steps complete the setup and software installation for each node.
(Optional) Proxy Server Configuration
If using a Proxy Server, configure it with CRI-O on each Kubernetes node.
Note: This is not required in the free lab environment.
Create the CRI-O service drop-in directory.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo mkdir /etc/systemd/system/crio.service.d"
done
Create the proxy configuration file and substitute the appropriate proxy values for those in your environment.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
printf "======= $host =======\n\n"
ssh $host "sudo tee /etc/systemd/system/crio.service.d/proxy.conf > /dev/null" <<-'MOD'
[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
Environment="NO_PROXY=.example.com,192.0.2.*"
MOD
done
This script has no indentation because the MOD token that terminates the heredoc must be at the beginning of a line. The <<- form would allow Tab characters for indentation, but copying and pasting converts the tabs to spaces.
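Because this step creates a new systemd drop-in file, systemd must re-read its configuration before the setting takes effect. A sketch in the tutorial's loop style follows; CRI-O itself is installed later by the Kubernetes module, so reloading the systemd configuration is sufficient at this point.

for host in ocne-control-01 ocne-worker-01 ocne-worker-02 ocne-operator
do
  printf "======= $host =======\n\n"
  ssh $host "sudo systemctl daemon-reload"
done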
Set up X.509 Private CA Certificates
Use the olcnectl certificates distribute command to generate and distribute a private CA and certificates for the nodes.
Add the oracle user to the olcne group.
sudo usermod -a -G olcne oracle
Log off the ocne-operator node and then connect again using SSH.
exit
ssh oracle@<ip_address_of_node>
Generate and distribute the node certificates.
olcnectl certificates distribute --nodes ocne-operator,ocne-control-01,ocne-worker-01,ocne-worker-02
--nodes: Provide the fully qualified domain name (FQDN), hostname, or IP address of your operator, control plane, and worker nodes.
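The distribute command places the certificates under /etc/olcne/certificates on each node, the path that the configuration file later in this tutorial references. To optionally confirm the files arrived, list that directory (a verification sketch, assuming the default path):

for host in ocne-operator ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host "sudo ls -l /etc/olcne/certificates/"
done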
Set up X.509 Certificates for the ExternalIPs Kubernetes Service
The externalip-validation-webhook-service Kubernetes service requires setting up X.509 certificates before deploying Kubernetes.
Generate the certificates.
olcnectl certificates generate \
  --nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
  --cert-dir $HOME/certificates/restrict_external_ip/ \
  --byo-ca-cert $HOME/certificates/ca/ca.cert \
  --byo-ca-key $HOME/certificates/ca/ca.key \
  --one-cert
--byo-ca-*: These options use the previously created CA certificate and key.
Note: The $HOME variable represents the location where this example executes the olcnectl certificates generate command. However, you can change this to any location using the --cert-dir option (see the documentation for more details).
Bootstrap the Platform API Server
Configure the Platform API Server to use the certificates on the operator node.
sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-component api-server
Confirm the Platform API Server is running.
sudo systemctl status olcne-api-server.service
If the status command does not exit automatically, type q to exit.
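If you prefer a check that never opens a pager, systemctl is-active prints the unit state and exits immediately. This is an optional alternative, not part of the original steps.

sudo systemctl is-active olcne-api-server.service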
Bootstrap the Platform Agents
Run the bootstrap script to configure the Platform Agent to use the certificates on the control plane and worker nodes.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo /etc/olcne/bootstrap-olcne.sh \
    --secret-manager-type file \
    --olcne-component agent"
done
Confirm the Platform Agent is running.
for host in ocne-control-01 ocne-worker-01 ocne-worker-02
do
  printf "======= $host =======\n\n"
  ssh $host \
    "sudo systemctl status olcne-agent.service"
done
Create a Platform CLI Configuration File
Administrators can use a configuration file to simplify creating and managing environments and modules. The configuration file, written in valid YAML syntax, includes all information about the environments and modules to create. Using a configuration file saves repeated entries of Platform CLI command options.
More information on creating a configuration file is in the documentation at Using a Configuration File.
Create a configuration file.
cat << EOF | tee ~/myenvironment.yaml > /dev/null
environments:
  - environment-name: myenvironment
    globals:
      api-server: 127.0.0.1:8091
      secret-manager-type: file
      olcne-ca-path: /etc/olcne/certificates/ca.cert
      olcne-node-cert-path: /etc/olcne/certificates/node.cert
      olcne-node-key-path: /etc/olcne/certificates/node.key
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          control-plane-nodes:
            - ocne-control-01:8090
          worker-nodes:
            - ocne-worker-01:8090
            - ocne-worker-02:8090
          selinux: enforcing
          restrict-service-externalip: true
          restrict-service-externalip-ca-cert: /home/oracle/certificates/ca/ca.cert
          restrict-service-externalip-tls-cert: /home/oracle/certificates/restrict_external_ip/node.cert
          restrict-service-externalip-tls-key: /home/oracle/certificates/restrict_external_ip/node.key
EOF
Create the Environment and Kubernetes Module
Create the environment.
cd ~
olcnectl environment create --config-file myenvironment.yaml
Create the Kubernetes module.
olcnectl module create --config-file myenvironment.yaml
Validate the Kubernetes module.
olcnectl module validate --config-file myenvironment.yaml
In this example, there are no validation errors. If there are any errors, the output of this command provides the commands required to fix the nodes.
Install the Kubernetes module.
olcnectl module install --config-file myenvironment.yaml
The deployment of Kubernetes to the nodes may take several minutes to complete.
Validate the deployment of the Kubernetes module.
olcnectl module instances --config-file myenvironment.yaml
Example Output:
INSTANCE              MODULE      STATE
mycluster             kubernetes  installed
ocne-control-01:8090  node        installed
ocne-worker-01:8090   node        installed
ocne-worker-02:8090   node        installed
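For a more detailed breakdown of each module's properties than module instances provides, the Platform CLI also offers a report subcommand. This optional check assumes your olcnectl version includes it.

olcnectl module report --config-file myenvironment.yaml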
Set up kubectl
Set up the kubectl command on the control plane node.
ssh ocne-control-01 "mkdir -p $HOME/.kube; \
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; \
  sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
  export KUBECONFIG=$HOME/.kube/config; \
  echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc"
Verify that kubectl works.
ssh ocne-control-01 "kubectl get nodes"
The output shows that each node in the cluster is ready, along with its current role and version.
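As a further optional sanity check, list the cluster's system pods; all of them should reach the Running state once the cluster is healthy.

ssh ocne-control-01 "kubectl get pods --all-namespaces"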