Deploy Oracle Cloud Native Environment
Introduction
Oracle Cloud Native Environment is a fully integrated suite for the development and management of cloud native applications. The Kubernetes module is the core module. It is used to deploy and manage containers and also automatically installs and configures CRI-O, runC and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster. The runtime may be either runC or Kata Containers.
Objectives
This lab demonstrates how to:
- Install Oracle Cloud Native Environment Release 1.5 on a 3-node cluster
- Configure x.509 Private CA Certificates
- Configure Oracle Cloud Native Environment on a 3-node cluster
- Verify the install completed successfully
Prerequisites
This section lists the host systems used to perform the steps in this tutorial. Completing it successfully requires:
3 Oracle Linux systems to use as:
- Operator node (ocne-operator)
- Kubernetes control plane node (ocne-control)
- Kubernetes worker node (ocne-worker)
Each system should have a minimum of the following installed:
- Latest Oracle Linux 8 (x86_64) installed and running the Unbreakable Enterprise Kernel Release 6 (UEK R6)
This environment is pre-configured with the following:
- Created an Oracle user account (used during the install)
- Granted the Oracle account 'sudo' access
- Set up key-based SSH, also known as passwordless SSH, between the instances
Set Up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
This lab involves multiple systems, each of which requires different steps to be performed. It is recommended to start by opening three terminal windows and connecting to the ocne-operator, ocne-control and ocne-worker nodes. This avoids the need to repeatedly log in and out.
Open a terminal and connect via ssh to each of the three nodes.
ssh oracle@<ip_address_of_ol_node>
Note: When a step says "(On all nodes)" in the lab, perform those actions on ocne-operator, ocne-control and ocne-worker. The reason for this approach is to avoid repetition, because the required action will be identical on each node.
(Optional) Update Oracle Linux
(On all nodes) Make sure Oracle Linux is up to date.
sudo dnf -y update
This may take a few minutes to complete; running this step on each node in parallel may save time.
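If you prefer to drive all three updates from a single terminal, a loop such as this sketch works, assuming the node names resolve from your workstation, your SSH key is loaded, and the oracle account has passwordless sudo (as in the free lab environment):
for node in ocne-operator ocne-control ocne-worker; do
  ssh oracle@"$node" 'sudo dnf -y update' &   # run each update in the background
done
wait   # block until all three updates finish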
(On all nodes) Reboot and reconnect.
sudo reboot
(On all nodes) After the system reboots, reconnect to the node(s) via ssh.
ssh oracle@<ip_address_of_ol_node>
Install and Enable the Oracle Cloud Native Environment Yum Repository
(On all nodes) Install the yum repository.
sudo dnf -y install oracle-olcne-release-el8
(On all nodes) Enable the current Oracle Cloud Native Environment repository.
sudo dnf config-manager --enable ol8_olcne15 ol8_addons ol8_baseos_latest ol8_appstream ol8_UEKR6
(On all nodes) Disable all previous repository versions.
sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_developer
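To confirm the repository changes took effect, you can list the enabled repositories; ol8_olcne15 should appear and the older olcne repositories should not:
sudo dnf repolist --enabled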
Install and Enable Chrony
(On all nodes) If not already available on the system, install and enable the chrony service.
Check if chrony is installed.
sudo dnf list --installed chrony
If not installed, install chrony.
sudo dnf -y install chrony
sudo systemctl enable --now chronyd
Note: The free lab environment already has the chrony (time) service installed and configured.
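Optionally, you can verify that the chronyd service is active and synchronizing time:
systemctl is-active chronyd
chronyc tracking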
Disable Swap
(On all nodes) Disable swap on all the nodes.
sudo swapoff -a
sudo sed -i '/swap/ s/^#*/#/' /etc/fstab
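To confirm that swap is off now and stays off after a reboot, the first command should print nothing and the second should show the swap entry commented out:
sudo swapon --show
grep swap /etc/fstab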
Configure the Oracle Linux Firewall
Note: The firewalld service is installed and running by default on Oracle Linux.
(On ocne-operator) Set the firewall rules for the operator node.
sudo firewall-cmd --add-port=8091/tcp --permanent
sudo firewall-cmd --reload
(On ocne-control) Set the firewall rules for the control plane node(s).
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --reload
(On ocne-control) Add the following rules, which are used for high availability and are required to pass validation.
sudo firewall-cmd --add-port=10251/tcp --permanent
sudo firewall-cmd --add-port=10252/tcp --permanent
sudo firewall-cmd --add-port=2379/tcp --permanent
sudo firewall-cmd --add-port=2380/tcp --permanent
sudo firewall-cmd --reload
(On ocne-worker) Set the firewall rules for the worker node(s).
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --reload
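On any node, you can review the rules just added; the ports listed will differ by node role:
sudo firewall-cmd --list-ports
sudo firewall-cmd --zone=trusted --list-interfaces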
Load the Bridge Filtering Module
(On ocne-control and ocne-worker) Enable and load the module.
sudo modprobe br_netfilter
sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
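To confirm the module is loaded now and is configured to load at boot:
lsmod | grep br_netfilter
cat /etc/modules-load.d/br_netfilter.conf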
Set up the Operator Node
The operator node performs and manages the deployment of environments, including deploying the Kubernetes cluster. An operator node may be a node in the Kubernetes cluster or a separate host, such as in this tutorial. Install the Oracle Cloud Native Environment Platform CLI, Platform API Server, and utilities on the operator node.
(On ocne-operator) Install the Platform CLI, Platform API Server and utilities.
sudo dnf -y install olcnectl olcne-api-server olcne-utils
(On ocne-operator) Enable the olcne-api-server service, but do not start it.
sudo systemctl enable olcne-api-server.service
Set up the Kubernetes Nodes
The Kubernetes control plane and worker nodes contain the Oracle Cloud Native Environment Platform Agent and utility packages.
(On ocne-control and ocne-worker) Install the Platform Agent package and utilities.
sudo dnf -y install olcne-agent olcne-utils
(On ocne-control and ocne-worker) Enable the olcne-agent service, but do not start it.
sudo systemctl enable olcne-agent.service
The above steps complete the initial setup and software installation for each node.
(Optional) Proxy Server Configuration
If using a proxy server, configure CRI-O on each Kubernetes node to use it.
Note: This is not required in the free lab environment.
(On all nodes) Create the CRI-O systemd service drop-in directory.
sudo mkdir /etc/systemd/system/crio.service.d
(On all nodes) Open the proxy configuration file with vi and set it to 'insert' mode.
sudo vi /etc/systemd/system/crio.service.d/proxy.conf
(On all nodes) Substitute the appropriate proxy values for those in your environment using the example file below.
[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
Environment="NO_PROXY=.example.com,192.0.2.*"
Set up X.509 Private CA Certificates
Use the provided /etc/olcne/gen-certs-helper.sh script to generate a private CA and certificates for the nodes. Run the script from the /etc/olcne directory on the operator node, saving the certificate files in the current directory.
(On ocne-operator) Create the X.509 Certificates.
cd /etc/olcne
sudo ./gen-certs-helper.sh \
  --cert-request-organization-unit "My Company Unit" \
  --cert-request-organization "My Company" \
  --cert-request-locality "My Town" \
  --cert-request-state "My State" \
  --cert-request-country US \
  --cert-request-common-name pub.linuxvirt.oraclevcn.com \
  --nodes ocne-worker.pub.linuxvirt.oraclevcn.com,ocne-control.pub.linuxvirt.oraclevcn.com,ocne-operator.pub.linuxvirt.oraclevcn.com
Provide the private CA information using the --cert-request* options. Some of these options appear in the example. Run the gen-certs-helper.sh --help command to get a complete list of options.
- --cert-request-common-name: Provide the appropriate Domain Name System (DNS) domain name for your environment.
- --nodes: Provide the fully qualified domain name (FQDN) of your operator, control plane, and worker nodes.
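Before transferring the certificates, you can inspect what the script generated. Based on the paths used later in this tutorial, the per-node files are staged under a tmp-olcne directory:
sudo ls /etc/olcne/configs/certificates/tmp-olcne/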
Transfer the X.509 Private CA Certificates
After generating the certificates, copy them to each of the nodes.
(On ocne-operator) Update the user details in the provided transfer script.
sudo sed -i 's/USER=opc/USER=oracle/g' configs/certificates/olcne-tranfer-certs.sh
The tutorial requires this step because the script's default user is opc. Since this tutorial installs the product as the oracle user, update the USER variable within the script accordingly.
(On ocne-operator) Set the permissions for each node.key generated by the certificate creation script.
sudo chmod 644 /etc/olcne/configs/certificates/tmp-olcne/ocne-control.pub.linuxvirt.oraclevcn.com/node.key
sudo chmod 644 /etc/olcne/configs/certificates/tmp-olcne/ocne-operator.pub.linuxvirt.oraclevcn.com/node.key
sudo chmod 644 /etc/olcne/configs/certificates/tmp-olcne/ocne-worker.pub.linuxvirt.oraclevcn.com/node.key
(On ocne-operator) Transfer the certificates to each node.
This step requires passwordless SSH configured between the nodes. Configuration of this is outside the scope of this tutorial but is pre-configured in the free lab environment.
bash -ex /etc/olcne/configs/certificates/olcne-tranfer-certs.sh
(On all nodes) Verify the files copied correctly.
sudo -u olcne ls /etc/olcne/configs/certificates/production
Example Output:
[oracle@ocne-control ~]$ sudo -u olcne ls /etc/olcne/configs/certificates/production
ca.cert  node.cert  node.key
Set up X.509 Certificates for the externalIPs Kubernetes Service
The externalip-validation-webhook-service Kubernetes service requires X.509 certificates be set up prior to deploying Kubernetes.
(On ocne-operator) Generate the certificates.
cd /etc/olcne
sudo ./gen-certs-helper.sh \
  --cert-dir /etc/olcne/configs/certificates/restrict_external_ip/ \
  --cert-request-organization-unit "My Company Unit" \
  --cert-request-organization "My Company" \
  --cert-request-locality "My Town" \
  --cert-request-state "My State" \
  --cert-request-country US \
  --cert-request-common-name cloud.example.com \
  --nodes externalip-validation-webhook-service.externalip-validation-system.svc,\
externalip-validation-webhook-service.externalip-validation-system.svc.cluster.local \
  --one-cert \
  --byo-ca-cert /etc/olcne/configs/certificates/production/ca.cert \
  --byo-ca-key /etc/olcne/configs/certificates/production/ca.key
- --byo-ca-*: These options use the previously created CA certificate and key.
(On ocne-operator) Set the ownership of the directory containing the certificate files generated by the script.
sudo chown -R oracle:oracle /etc/olcne/configs/certificates/restrict_external_ip/
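To confirm the ownership change, list the directory recursively; the files should now be owned by the oracle user:
ls -lR /etc/olcne/configs/certificates/restrict_external_ip/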
Bootstrap the Platform API Server
(On ocne-operator) Run the bootstrap script to configure the Platform API Server to use the certificates.
sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
  --olcne-component api-server
Example Output:
[oracle@ocne-operator olcne]$ sudo /etc/olcne/bootstrap-olcne.sh \
> --secret-manager-type file \
> --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
> --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
> --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
> --olcne-component api-server
* olcne-api-server.service - API server for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-api-server.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-api-server.service.d
           `-10-auth.conf
   Active: active (running) since Wed 2022-05-11 13:35:19 GMT; 2s ago
 Main PID: 96132 (olcne-api-serve)
    Tasks: 7 (limit: 203120)
   Memory: 12.2M
   CGroup: /system.slice/olcne-api-server.service
           `-96132 /usr/libexec/olcne-api-server -i /etc/olcne/modules --secret-manager-type file --olcne-ca-path /etc/olcne...

May 11 13:35:19 ocne-operator systemd[1]: Started API server for Oracle Linux Cloud Native Environments.
May 11 13:35:19 ocne-operator olcne-api-server[96132]: time=11/05/22 13:35:19 level=info msg=Api server listening on: 8091
Note: Alternatively, you can use certificates managed by HashiCorp Vault. That method is not covered in this tutorial.
(On ocne-operator) Confirm the Platform API Server is running.
sudo systemctl status olcne-api-server
Example output:
[oracle@ocne-operator olcne]$ sudo systemctl status olcne-api-server
* olcne-api-server.service - API server for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-api-server.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-api-server.service.d
           `-10-auth.conf
   Active: active (running) since Wed 2022-05-11 10:44:30 GMT; 9min ago
 Main PID: 59600 (olcne-api-serve)
    Tasks: 7 (limit: 203120)
   Memory: 12.6M
   CGroup: /system.slice/olcne-api-server.service
           `-59600 /usr/libexec/olcne-api-server -i /etc/olcne/modules --secret-manager-type file --olcne-ca-path /etc/olcne/c>

May 11 10:44:30 ocne-operator systemd[1]: Started API server for Oracle Linux Cloud Native Environments.
May 11 10:44:30 ocne-operator olcne-api-server[59600]: time=11/05/22 10:44:30 level=info msg=Api server listening on: 8091
...
(On ocne-operator) Press 'q' to exit the status output and continue to the next step.
Bootstrap the Platform Agents
(On ocne-control and ocne-worker) Run the bootstrap script to configure the Platform Agent to use the certificates.
sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
  --olcne-component agent
Example Output:
[oracle@ocne-worker ~]$ sudo /etc/olcne/bootstrap-olcne.sh \
> --secret-manager-type file \
> --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
> --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
> --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
> --olcne-component agent
* olcne-agent.service - Agent for Oracle Linux Cloud Native Environments
   Loaded: loaded (/usr/lib/systemd/system/olcne-agent.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/olcne-agent.service.d
           `-10-auth.conf
   Active: active (running) since Wed 2022-05-11 11:13:58 GMT; 2s ago
 Main PID: 66500 (olcne-agent)
    Tasks: 8 (limit: 203120)
   Memory: 7.1M
   CGroup: /system.slice/olcne-agent.service
           `-66500 /usr/libexec/olcne-agent --secret-manager-type file --olcne-ca-path /etc/olcne/configs/certificates/produc...

May 11 11:13:58 ocne-control systemd[1]: Started Agent for Oracle Linux Cloud Native Environments.
May 11 11:13:58 ocne-control olcne-agent[66500]: time=11/05/22 11:13:58 level=info msg=Started server on[::]:8090
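As with the Platform API Server, you can optionally confirm the Platform Agent is running on each node:
systemctl is-active olcne-agent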
Create a Platform CLI Configuration File
Administrators can use a configuration file to simplify creating and managing environments and modules. The configuration file, written in valid YAML syntax, includes all information about the environments and modules to create. Using a configuration file saves repeated entries of Platform CLI command options.
During lab deployment, a configuration file is automatically generated and ready to use in the exercise. More information on manually creating a configuration file is in the documentation at Using a Configuration File.
(On ocne-operator) View the configuration file contents.
cat ~/myenvironment.yaml
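The generated file's exact contents depend on the lab deployment. As a rough sketch only, a configuration file for this three-node topology typically follows the shape below; the environment name, module name, registry, and node addresses shown here are illustrative assumptions, so defer to the generated file and the documentation for the authoritative format:
environments:
  - environment-name: myenvironment
    globals:
      api-server: ocne-operator:8091          # Platform API Server address (assumed)
      selinux: enforcing
    modules:
      - module: kubernetes
        name: mycluster
        args:
          container-registry: container-registry.oracle.com/olcne
          master-nodes: ocne-control:8090      # control plane Platform Agent (assumed)
          worker-nodes: ocne-worker:8090       # worker Platform Agent (assumed)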
Create the Environment and Kubernetes Module
(On ocne-operator) Create the environment.
cd ~
olcnectl environment create --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl environment create --config-file myenvironment.yaml
Environment myenvironment created.
(On ocne-operator) Create the Kubernetes module.
olcnectl module create --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module create --config-file myenvironment.yaml
Modules created successfully.
(On ocne-operator) Validate the Kubernetes module.
olcnectl module validate --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module validate --config-file myenvironment.yaml
Validation of module mycluster succeeded.
In this example, there are no validation errors. If there are any errors, the commands required to fix the nodes are provided as output of this command.
(On ocne-operator) Install the Kubernetes module.
olcnectl module install --config-file myenvironment.yaml
The deployment of Kubernetes to the nodes may take several minutes to complete.
Example Output:
[oracle@ocne-operator ~]$ olcnectl module install --config-file myenvironment.yaml
Modules installed successfully.
(On ocne-operator) Validate the deployment of the Kubernetes module.
olcnectl module instances --config-file myenvironment.yaml
Example Output:
[oracle@ocne-operator ~]$ olcnectl module instances --config-file myenvironment.yaml
INSTANCE                                        MODULE      STATE
mycluster                                       kubernetes  installed
ocne-control.pub.linuxvirt.oraclevcn.com:8090   node        installed
ocne-worker.pub.linuxvirt.oraclevcn.com:8090    node        installed
Set up kubectl
(On ocne-control) Set up the kubectl command.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
(On ocne-control) Verify kubectl works.
kubectl get nodes
Example Output:
[oracle@ocne-control ~]$ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
ocne-control   Ready    control-plane,master   10m   v1.22.8+1.el8
ocne-worker    Ready    <none>                 10m   v1.22.8+1.el8
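As a further optional check that the deployment is healthy, you can confirm that the pods in all namespaces reach the Running state:
kubectl get pods -A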