Use Operators on Oracle Cloud Native Environment
Introduction
A Kubernetes operator is a design pattern for writing code to automate tasks and extend Kubernetes. An operator is a set of concepts you can use to define a service for Kubernetes, and it helps automate administrative tasks in the cluster.
The Operator Lifecycle Manager module installs an instance of Operator Lifecycle Manager into a Kubernetes cluster to manage the installation and lifecycle of operators. The Operator Lifecycle Manager references a public upstream operator registry (https://operatorhub.io) to share Operators.
This tutorial shows how to install the Operator Lifecycle Manager module into your Oracle Cloud Native Environment cluster, followed by an example showing how to install and use an Operator.
Objectives
In this lab, you will learn how to:
- Install the operator-lifecycle-manager module
- Verify the installation of the Operator Lifecycle Manager
- Install additional Operators
Prerequisites
Minimum of a 3-node Oracle Cloud Native Environment cluster:
- Operator node
- Kubernetes control plane node
- Kubernetes worker node
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts
- Installation of Oracle Cloud Native Environment
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
Open a terminal and connect via SSH to the ocne-operator node.
ssh oracle@<ip_address_of_node>
Set up the kubectl command on the operator node.
mkdir -p $HOME/.kube; \
ssh ocne-control-01 "sudo cat /etc/kubernetes/admin.conf" > $HOME/.kube/config; \
sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
export KUBECONFIG=$HOME/.kube/config; \
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
List the nodes in the cluster.
kubectl get nodes
The output shows the control plane and worker nodes in a Ready state along with their current Kubernetes version.
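If you want a quick, scriptable check that every node reports Ready, you can combine kubectl with awk, the same tools this lab uses later. This is a minimal sketch; adapt it to your own tooling.
# Print each node name and its status; anything other than Ready needs attention.
kubectl get nodes --no-headers | awk '{print $1, $2}'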
Deploying the Operator Lifecycle Manager Module
The first task covers the installation of the Operator Lifecycle Manager module in Oracle Cloud Native Environment.
Create the module.
olcnectl module create \
--environment-name myenvironment \
--module operator-lifecycle-manager \
--name myolm \
--olm-kubernetes-module mycluster
Install the module.
olcnectl module install \
--environment-name myenvironment \
--name myolm
Note: The installation takes a few minutes to complete.
Verify the module deployment.
olcnectl module instances \
--environment-name myenvironment
Example Output:
INSTANCE               MODULE                       STATE
ocne-worker-02:8090    node                         installed
mycluster              kubernetes                   installed
myolm                  operator-lifecycle-manager   installed
ocne-control-01:8090   node                         installed
ocne-worker-01:8090    node                         installed
Confirm the Operator Lifecycle Manager Installation
Verify the installation completed successfully before using the Operator Lifecycle Manager.
List the operator registries.
kubectl get catalogsource --namespace operator-lifecycle-manager
The output shows the default operator registry of OperatorHub.io.
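To see more detail about the catalog, such as its source image and connection state, describe it. The catalog source name operatorhubio-catalog is the default referenced later in this tutorial's Subscription; confirm it against the listing above.
kubectl describe catalogsource operatorhubio-catalog --namespace operator-lifecycle-manager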
Get a list of installable operators.
kubectl get packagemanifest
Example Output:
NAME                          CATALOG               AGE
kubemq-operator               Community Operators   2m44s
hazelcast-platform-operator   Community Operators   2m44s
skupper-operator              Community Operators   2m44s
...
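The full list is long. To focus on a single operator and check which channels it publishes before subscribing, you can filter the list and query the PackageManifest status. This sketch assumes the standard OLM PackageManifest fields:
kubectl get packagemanifest | grep trivy-operator
kubectl get packagemanifest trivy-operator -o jsonpath='{.status.channels[*].name}{"\n"}'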
Install an Operator
You just saw a list of the operators available on OperatorHub, all of which can be installed by the Operator Lifecycle Manager. The following example shows how to install the trivy-operator from OperatorHub, following the steps provided in its documentation.
Create a namespace to install the operator into.
kubectl create namespace trivy-system
Create a network policy allowing traffic on port 53 to the kube-system namespace.
This network policy ensures DNS lookups from the trivy-system namespace to the core-dns pods work.
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-egress-allow-kube-system-dns
  namespace: trivy-system
spec:
  egress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
    to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
  podSelector: {}
  policyTypes:
  - Egress
EOF
Create a network policy allowing port 443 traffic to download the vulnerability database.
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-443-trivy-operator
  namespace: trivy-system
spec:
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
  podSelector:
    matchLabels:
      app.kubernetes.io/managed-by: trivy-operator
  policyTypes:
  - Egress
EOF
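Before installing the operator, you can confirm that both network policies exist in the namespace.
kubectl get networkpolicy -n trivy-system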
Create an OperatorGroup.
cat << EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: trivy-operator-group
  namespace: trivy-system
EOF
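The OperatorGroup determines which namespaces the operators installed into this group can watch. You can confirm it was created with:
kubectl get operatorgroup -n trivy-system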
Create a Subscription and install the operator.
cat << EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: trivy-operator-subscription
  namespace: trivy-system
spec:
  channel: alpha
  name: trivy-operator
  source: operatorhubio-catalog
  sourceNamespace: operator-lifecycle-manager
  installPlanApproval: Automatic
  config:
    env:
    - name: OPERATOR_EXCLUDE_NAMESPACES
      value: "kube-system"
EOF
This Subscription will install the operator in the trivy-system namespace and scan all namespaces except the kube-system and trivy-system namespaces.
Confirm the operator is functioning.
watch kubectl get clusterserviceversions -n trivy-system
Example Output:
NAME                     DISPLAY          VERSION   REPLACES                 PHASE
trivy-operator.v0.21.3   Trivy Operator   0.21.3    trivy-operator.v0.21.2   Succeeded
The installation takes a few minutes to complete. Once the PHASE column reports Succeeded, press Ctrl-C to exit.
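If the ClusterServiceVersion does not reach the Succeeded phase, the Subscription and its generated InstallPlan usually show the reason. A minimal sketch for inspecting them (field contents vary slightly between OLM versions):
kubectl get subscription trivy-operator-subscription -n trivy-system -o yaml
kubectl get installplan -n trivy-system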
Confirm the operator deployed.
kubectl get deployments -n trivy-system
Example Output:
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
trivy-operator   1/1     1            1           3m15s
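You can also review the operator's logs to confirm it started cleanly, for example:
kubectl logs deployment/trivy-operator -n trivy-system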
Scan a Deployment
Next, you will deploy an nginx version with known vulnerabilities. When nginx is deployed, the trivy-operator scans the nginx image for vulnerabilities. The operator saves the results as a VulnerabilityReport, which is named after the ReplicaSet that it scanned.
Deploy an nginx version with known vulnerabilities.
kubectl create deployment nginx --image nginx:1.16
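You can wait for the deployment to finish rolling out before looking for a report, for example:
kubectl rollout status deployment/nginx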
Wait for the vulnerability report to generate.
watch kubectl get vulnerabilityreports -o wide
Example Output:
NAME                                REPOSITORY      TAG    SCANNER   AGE     CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
replicaset-nginx-85bfcd86d5-nginx   library/nginx   1.16   Trivy     8m52s   40         133    158      128   9
The vulnerability report is generated in 4 to 6 minutes. Once the output looks similar to the example shown, press Ctrl-C to exit.
Note: Note or copy the ReplicaSet's name to use in the next step.
Output the results of the vulnerability report.
kubectl get vulnerabilityreport $(kubectl get vulnerabilityreports -o wide | awk 'NR==2{print $1}') -o json
Note: The awk command prints the report name from the second row of output of the command within $() and then passes it to kubectl get vulnerabilityreport. If you prefer the output in YAML format, replace -o json with -o yaml.
Example Output:
The report is very long. Here is a small excerpt of the output.
{
  "apiVersion": "aquasecurity.github.io/v1alpha1",
  "kind": "VulnerabilityReport",
  "metadata": {
    "annotations": {
      "trivy-operator.aquasecurity.github.io/report-ttl": "24h0m0s"
    },
    "creationTimestamp": "2024-06-19T16:11:03Z",
    "generation": 20,
    "labels": {
      "resource-spec-hash": "8584c9dcb6",
      "trivy-operator.container.name": "nginx",
      "trivy-operator.resource.kind": "ReplicaSet",
      "trivy-operator.resource.name": "nginx-85bfcd86d5",
      "trivy-operator.resource.namespace": "default"
    },
    "name": "replicaset-nginx-85bfcd86d5-nginx",
    "namespace": "default",
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "blockOwnerDeletion": false,
        "controller": true,
        "kind": "ReplicaSet",
        "name": "nginx-85bfcd86d5",
        "uid": "db87b3e6-0923-40d5-9538-bdad9cb7f4c1"
      }
    ],
    "resourceVersion": "39332",
    "uid": "731a5164-ace6-4c43-a8bd-be87f6623c0f"
  },
  "report": {
    "artifact": {
      "repository": "library/nginx",
      "tag": "1.16"
    },
    "os": {
      "family": "debian",
      "name": "10.3"
    },
    "registry": {
      "server": "index.docker.io"
    },
    "scanner": {
      "name": "Trivy",
      "vendor": "Aqua Security",
      "version": "0.52.0"
    },
    "summary": {
      "criticalCount": 40,
      "highCount": 133,
      "lowCount": 128,
      "mediumCount": 158,
      "noneCount": 0,
      "unknownCount": 9
    },
    "updateTimestamp": "2024-06-19T16:16:35Z",
    "vulnerabilities": [
      {
        "fixedVersion": "1.8.2.2",
        "installedVersion": "1.8.2",
        "lastModifiedDate": "2022-10-29T02:41:36Z",
        "links": [],
        "primaryLink": "https://avd.aquasec.com/nvd/cve-2020-27350",
        "publishedDate": "2020-12-10T04:15:11Z",
        "resource": "apt",
        "score": 5.7,
        "severity": "MEDIUM",
        "target": "",
        "title": "apt: integer overflows and underflows while parsing .deb packages",
        "vulnerabilityID": "CVE-2020-27350"
      },
      ...
      ...
      {
        "fixedVersion": "",
        "installedVersion": "1:1.2.11.dfsg-1",
        "lastModifiedDate": "2024-01-24T21:15:08Z",
        "links": [],
        "primaryLink": "https://avd.aquasec.com/nvd/cve-2023-45853",
        "publishedDate": "2023-10-14T02:15:09Z",
        "resource": "zlib1g",
        "score": 9.8,
        "severity": "CRITICAL",
        "target": "",
        "title": "zlib: integer overflow and resultant heap-based buffer overflow in zipOpenNewFileInZip4_6",
        "vulnerabilityID": "CVE-2023-45853"
      },
      {
        "fixedVersion": "1:1.2.11.dfsg-1+deb10u1",
        "installedVersion": "1:1.2.11.dfsg-1",
        "lastModifiedDate": "2023-11-07T02:56:26Z",
        "links": [],
        "primaryLink": "https://avd.aquasec.com/nvd/cve-2018-25032",
        "publishedDate": "2022-03-25T09:15:08Z",
        "resource": "zlib1g",
        "score": 7.5,
        "severity": "HIGH",
        "target": "",
        "title": "zlib: A flaw found in zlib when compressing (not decompressing) certain inputs",
        "vulnerabilityID": "CVE-2018-25032"
      }
    ]
  }
}
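If you only need the severity totals rather than the full report, a JSONPath query against the same object is a shorter option. This sketch reuses the report name from the earlier listing; your ReplicaSet hash will differ.
kubectl get vulnerabilityreport replicaset-nginx-85bfcd86d5-nginx -o jsonpath='{.report.summary}{"\n"}'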
Review more default compliance reports.
The Trivy operator can also generate other compliance reports. A couple available by default are:
- SbomReport - Lists the Software Bill of Materials found in the container.
- ExposedSecretReport - Reports any secrets found in the container.
kubectl get SbomReport -o wide
Example Output:
NAME                                REPOSITORY      TAG    SCANNER   AGE   COMPONENTS   DEPENDENCIES
replicaset-nginx-85bfcd86d5-nginx   library/nginx   1.16   Trivy     47m   120          120
kubectl get ExposedSecretReport -o wide
Example Output:
NAME                                REPOSITORY      TAG    SCANNER   AGE   CRITICAL   HIGH   MEDIUM   LOW
replicaset-nginx-85bfcd86d5-nginx   library/nginx   1.16   Trivy     47m   0          0      0        0
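To dig into either report beyond the summary columns, request the full object in YAML, for example:
kubectl get sbomreport replicaset-nginx-85bfcd86d5-nginx -o yaml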
(Optional) Uninstall the Operator
You can uninstall the Trivy operator by deleting the Subscription, the ClusterServiceVersion, the OperatorGroup, and the Namespace. The steps are version-dependent and described in the documentation.
Delete the Subscription.
kubectl delete subscription trivy-operator-subscription -n trivy-system
Delete the ClusterServiceVersion.
kubectl delete clusterserviceversion trivy-operator.v0.21.3 -n trivy-system
Delete the OperatorGroup.
kubectl delete operatorgroup trivy-operator-group -n trivy-system
Delete the Namespace.
kubectl delete ns trivy-system
Finally, you can manually delete the Custom Resource Definitions (CRDs) created by the Trivy operator. However, doing this also deletes all of the generated security reports. You can get a list of the CRDs for Trivy by running kubectl get crds | grep aquasecurity.
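If you do decide to remove the CRDs, a minimal sketch that deletes every CRD returned by that filter (remember this also removes all of the generated reports):
kubectl get crds -o name | grep aquasecurity | xargs kubectl delete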
Summary
Now that you have successfully completed this tutorial, you should have the knowledge and confidence to deploy the Operator Lifecycle Manager and use it to install other Operators.