Install Verrazzano on Oracle Cloud Native Environment
Introduction
Oracle Cloud Native Environment is a fully integrated suite for the development and management of cloud native applications.
Verrazzano Enterprise Container Platform is an end-to-end enterprise container platform for deploying cloud native and traditional applications in multicloud and hybrid environments. It is made up of a curated set of open source components, many of which are already in common use, and some of which were written specifically to pull together all of the pieces that make Verrazzano a cohesive and easy-to-use platform.
Objectives
This tutorial/lab demonstrates how to:
- Install the Verrazzano platform operator.
- Install Verrazzano using the development (dev) profile. For more information on profiles, see the Verrazzano documentation.
- Verify the install.
Note: This tutorial/lab does not demonstrate the execution of any example applications.
Prerequisites
An Oracle Linux 8 or later system with the following configuration:
- A non-root user with sudo privileges.
- Oracle Cloud Native Environment installed and configured.
Note: In a production environment, it is recommended that you have a cluster with at least three control plane nodes and at least five worker nodes.
Oracle Support Disclaimer
Oracle does not provide technical support for the sequence of steps in the following instructions because these steps refer to a deployment topology that is NOT intended for use in a production environment. This tutorial provides optional instructions as a convenience only to help facilitate developers testing services locally during development.
Oracle’s supported method for the development and management of cloud native applications is a production deployment of Oracle Cloud Native Environment. For more information, see the Oracle Cloud Native Environment documentation.
Set Up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
Information: The free lab environment deploys Oracle Cloud Native Environment on the provided node, ready for creating environments. This deployment takes approximately 10-15 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
This lab uses a single node, so it is recommended to start by opening a terminal window and connecting to the node; this avoids having to log in and out repeatedly. In a multi-node installation of Oracle Cloud Native Environment, the kubectl commands are run on a control plane node, or on another system configured for kubectl.
Open a terminal and connect via ssh to the ocne-node01 system.
ssh oracle@<ip_address_of_ol_node>
Confirm the Oracle Cloud Infrastructure Cloud Controller Manager Module is Ready
Before proceeding, it is important to wait for the Oracle Cloud Infrastructure Cloud Controller Manager module to establish communication with the OCI API. The Oracle Cloud Infrastructure Cloud Controller Manager module runs a pod on each node that handles functionality such as attaching block storage. After installation, this controller prevents any pods from being scheduled until its dedicated pod confirms it is initialized, running, and communicating with the OCI API. Until this communication is successfully established, any attempt to proceed is likely to prevent a successful installation of Verrazzano.
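In Kubernetes, this scheduling block is typically implemented as a node taint that the cloud controller manager removes once it has initialized the node. As a supplementary check (not part of the original lab steps), you can watch for the taint to clear:
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
# While initialization is in progress, the TAINTS column typically lists
# node.cloudprovider.kubernetes.io/uninitialized; it clears once the module
# is communicating with the OCI API.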
Retrieve the status of the oci-cloud-controller-manager component pods.
kubectl -n kube-system get pods -l component=oci-cloud-controller-manager
Example Output:
[oracle@ocne-node01 ~]$ kubectl -n kube-system get pods -l component=oci-cloud-controller-manager
NAME                                 READY   STATUS    RESTARTS   AGE
oci-cloud-controller-manager-6zppd   1/1     Running   0          18m
Retrieve the status of the csi-oci role pods.
kubectl -n kube-system get pods -l role=csi-oci
Example Output:
[oracle@ocne-node01 ~]$ kubectl -n kube-system get pods -l role=csi-oci
NAME                                  READY   STATUS    RESTARTS      AGE
csi-oci-controller-7fcbddd746-fhnt5   4/4     Running   3 (17m ago)   19m
csi-oci-node-6kjbw                    3/3     Running   0             19m
Note: Wait for both of these commands to show the STATUS as Running before proceeding further. If the values under the READY column do not show all of the containers as started, and those under the STATUS column do not show as Running after 15 minutes, please restart the lab.
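As an alternative to polling these commands manually, kubectl can block until the pods report Ready. A minimal sketch using the same label selectors as above:
kubectl -n kube-system wait --for=condition=Ready pod \
  -l component=oci-cloud-controller-manager --timeout=15m
kubectl -n kube-system wait --for=condition=Ready pod \
  -l role=csi-oci --timeout=15m
# Each command returns as soon as the matching pods are Ready, or fails
# after the 15-minute timeout suggested by the note above.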
Verify Access to Specific externalIPs
The externalip-validation-webhook-service defaults to blocking all external IP addresses in the cluster, which causes the Verrazzano installation to fail because an IP address cannot be assigned to an ingress controller.
Confirm that the Oracle Cloud Native Environment configuration file enables this service.
cat ~/myenvironment.yaml | grep restrict-service-externalip
The results show:
- restrict-service-externalip set to true
- restrict-service-externalip-cidrs set to the CIDR IP address range of the OCI VCN subnet within the free lab environment
During the free lab environment deployment of Oracle Cloud Native Environment, these settings enable the externalip webhook service on the specific CIDR block.
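For illustration only, the relevant entries in myenvironment.yaml look similar to the following; the CIDR value here is a placeholder, and the free lab environment substitutes the actual VCN subnet range:
restrict-service-externalip: true
restrict-service-externalip-cidrs: 10.0.0.0/24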
Set up Oracle Cloud Infrastructure Cloud Controller Manager Module Storage
The Oracle Cloud Infrastructure Cloud Controller Manager module does not designate a default StorageClass or configure policies for the CSIDrivers that it installs. A reasonable choice is the oci-bv StorageClass with its CSIDriver configured with the File group policy.
Patch the StorageClass to make it the default.
kubectl patch sc oci-bv -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Example Output:
[oracle@ocne-node01 ~]$ kubectl patch sc oci-bv -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/oci-bv patched
Create the CSIDriver.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: blockvolume.csi.oraclecloud.com
spec:
  fsGroupPolicy: File
EOF
Example Output:
[oracle@ocne-node01 ~]$ kubectl apply -f - <<EOF
> apiVersion: storage.k8s.io/v1
> kind: CSIDriver
> metadata:
>   name: blockvolume.csi.oraclecloud.com
> spec:
>   fsGroupPolicy: File
> EOF
csidriver.storage.k8s.io/blockvolume.csi.oraclecloud.com created
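As an optional sanity check (not part of the original lab steps), confirm that both storage changes took effect:
kubectl get storageclass oci-bv
# The NAME column should now read "oci-bv (default)"
kubectl get csidriver blockvolume.csi.oraclecloud.com -o jsonpath='{.spec.fsGroupPolicy}'; echo
# Expected output: File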
Install the Verrazzano Platform Operator
The Verrazzano platform operator manages the life cycle of the Verrazzano installation. The Verrazzano custom resource provides the means to install, uninstall, and upgrade a Verrazzano installation.
The following steps install the Verrazzano platform operator.
Deploy the Verrazzano platform operator.
kubectl apply -f https://github.com/verrazzano/verrazzano/releases/download/v1.5.1/verrazzano-platform-operator.yaml
Example Output:
[oracle@ocne-node01 ~]$ kubectl apply -f https://github.com/verrazzano/verrazzano/releases/download/v1.5.1/verrazzano-platform-operator.yaml
customresourcedefinition.apiextensions.k8s.io/verrazzanos.install.verrazzano.io created
namespace/verrazzano-install created
serviceaccount/verrazzano-platform-operator created
clusterrole.rbac.authorization.k8s.io/verrazzano-managed-cluster created
clusterrolebinding.rbac.authorization.k8s.io/verrazzano-platform-operator created
service/verrazzano-platform-operator created
service/verrazzano-platform-operator-webhook created
deployment.apps/verrazzano-platform-operator created
deployment.apps/verrazzano-platform-operator-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/verrazzano-mysql-backup created
validatingwebhookconfiguration.admissionregistration.k8s.io/verrazzano-platform-operator-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/verrazzano-platform-mysqlinstalloverrides created
validatingwebhookconfiguration.admissionregistration.k8s.io/verrazzano-platform-requirements-validator created
Wait for the deployment to complete.
kubectl -n verrazzano-install rollout status deployment/verrazzano-platform-operator
Example Output:
[oracle@ocne-node01 ~]$ kubectl -n verrazzano-install rollout status deployment/verrazzano-platform-operator
deployment "verrazzano-platform-operator" successfully rolled out
Confirm that the operator pod is healthy and running.
kubectl -n verrazzano-install get pods
Example Output:
[oracle@ocne-node01 ~]$ kubectl -n verrazzano-install get pods
NAME                                            READY   STATUS    RESTARTS   AGE
verrazzano-platform-operator-6cc84dbfc5-f4zm4   1/1     Running   0          3m26s
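To confirm exactly which operator build is running (an optional check), inspect the deployment's container image:
kubectl -n verrazzano-install get deployment verrazzano-platform-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
# Prints the operator image reference; the tag should correspond to the
# v1.5.1 release applied above.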
Perform the Verrazzano Install
Verrazzano supports the following installation types: development (dev), production (prod), and managed cluster (managed-cluster). This lab will set up the following:
- Development (dev) installation type.
- Wildcard DNS, where DNS is provided by nip.io (the default). Other DNS options are possible; see Customizing DNS in the documentation for more details.
Important Note: This lab uses self-signed certificates, which are NOT recommended for production environments.
Install Verrazzano
Create the deployment.
kubectl apply -f - <<EOF
apiVersion: install.verrazzano.io/v1beta1
kind: Verrazzano
metadata:
  name: example-verrazzano
spec:
  profile: ${VZ_PROFILE:-dev}
EOF
Example Output:
[oracle@ocne-node01 ~]$ kubectl apply -f - <<EOF
> apiVersion: install.verrazzano.io/v1beta1
> kind: Verrazzano
> metadata:
>   name: example-verrazzano
> spec:
>   profile: ${VZ_PROFILE:-dev}
> EOF
verrazzano.install.verrazzano.io/example-verrazzano created
Wait for the install to complete.
Note: This may take 20-30 minutes to complete.
kubectl wait \
  --timeout=20m \
  --for=condition=InstallComplete verrazzano/example-verrazzano
A successful install returns:
Example Output:
[oracle@ocne-node01 ~]$ kubectl wait \
> --timeout=20m \
> --for=condition=InstallComplete verrazzano/example-verrazzano
verrazzano.install.verrazzano.io/example-verrazzano condition met
(Optional) Monitor the pod install progress.
Open another terminal (or tab) and connect to the same ocne-node01 node.
Note: The output will continue updating every couple of seconds during the install.
watch kubectl get pods -n verrazzano-system
Example Output:
Every 2.0s: kubectl get pods -n verrazzano-system                 ocne-node01: Tue Aug 16 14:22:44 2022

NAME                                               READY   STATUS     RESTARTS        AGE
coherence-operator-7b64c5c68d-6jrqq                1/1     Running    1 (6m30s ago)   7m10s
fluentd-44jjz                                      0/2     Init:0/2   0               3m44s
oam-kubernetes-runtime-789f94747c-bqz6s            1/1     Running    0               7m23s
verrazzano-application-operator-6454c79bb7-s8fvb   1/1     Running    0               5m56s
verrazzano-authproxy-6ddcfbf786-r6frv              2/2     Running    0               4m30s
verrazzano-console-d67cb586c-65dkp                 2/2     Running    0               3m44s
verrazzano-monitoring-operator-59b98dc7d-nzxp4     2/2     Running    0               4m6s
vmi-system-es-master-0                             1/2     Running    0               2m37s
vmi-system-grafana-5bdbb8959c-6gx5s                0/2     Pending    0               2m46s
vmi-system-kiali-7c6b895df5-z2xvd                  2/2     Running    0               4m23s
vmi-system-kibana-59696944f9-tgxth                 1/2     Running    0               2m22s
vmi-system-prometheus-0-8464c88847-tcvp9           3/3     Running    0               2m48s
weblogic-operator-6b7cff9f7d-mhdvb                 2/2     Running    0               5m53s
(Optional) Monitor the Verrazzano installation logs.
Open another terminal (or tab) and connect to the same ocne-node01 node.
kubectl logs -n verrazzano-install \
  -f $(kubectl get pod \
  -n verrazzano-install \
  -l app=verrazzano-platform-operator \
  -o jsonpath="{.items[0].metadata.name}") | grep '^{.*}$' \
  | jq -r '."@timestamp" as $timestamp | "\($timestamp) \(.level) \(.message)"'
Example Output (excerpt):
...
2022-08-16T14:15:29.793Z info Component coherence-operator install started
2022-08-16T14:15:29.794Z info Running Helm command /usr/bin/helm upgrade coherence-operator /verrazzano/platform-operator/thirdparty/charts/coherence-operator --namespace verrazzano-system --install -f /verrazzano/platform-operator/helm_config/overrides/coherence-values.yaml -f /tmp/helm-overrides-3541626097.yaml for release coherence-operator
2022-08-16T14:15:31.184Z info Component mysql is waiting for deployment istio-system/istiod replicas to be 1. Current available replicas is 0
2022-08-16T14:15:31.184Z info Component mysql waiting for dependencies [istio] to be ready
2022-08-16T14:15:31.185Z info Component keycloak is waiting for deployment istio-system/istiod replicas to be 1. Current available replicas is 0
2022-08-16T14:15:31.640Z info Component keycloak is waiting for deployment cert-manager/cert-manager replicas to be 1. Current available replicas is 0
2022-08-16T14:15:31.640Z info Component keycloak waiting for dependencies [istio ingress-controller cert-manager] to be ready
2022-08-16T14:15:31.640Z info Component kiali-server is waiting for deployment istio-system/istiod replicas to be 1. Current available replicas is 0
2022-08-16T14:15:32.017Z info Component kiali-server is waiting for deployment cert-manager/cert-manager replicas to be 1. Current available replicas is 0
2022-08-16T14:15:32.018Z info Component kiali-server waiting for dependencies [istio ingress-controller cert-manager] to be ready
2022-08-16T14:15:32.249Z info Component oam-kubernetes-runtime has enough replicas for deployment verrazzano-system/oam-kubernetes-runtime
2022-08-16T14:15:32.249Z info Component oam-kubernetes-runtime post-install is running
...
Verify the Install
At this point, Verrazzano has installed multiple objects into multiple namespaces. If the verrazzano-system namespace reports all pods in a Running state, this indicates, but does not guarantee, that Verrazzano is up and running. The following steps confirm that the Verrazzano system is functioning as expected.
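A quick way to spot any pods that have not yet reached the Running phase (a supplementary check using a standard field selector):
kubectl get pods -n verrazzano-system --field-selector=status.phase!=Running
# "No resources found" means every pod in the namespace is Running; note
# that this does not check the READY container counts.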
Confirm that the verrazzano-system namespace reports all pods in a Running state.
kubectl get pods -n verrazzano-system
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods -n verrazzano-system
NAME                                               READY   STATUS    RESTARTS      AGE
coherence-operator-7b64c5c68d-c2tz7                1/1     Running   1 (18m ago)   18m
fluentd-62r6k                                      2/2     Running   0             15m
oam-kubernetes-runtime-789f94747c-6vv5v            1/1     Running   0             18m
verrazzano-application-operator-6454c79bb7-krs6t   1/1     Running   0             17m
verrazzano-authproxy-7ffcb488b6-ll9gm              2/2     Running   0             16m
verrazzano-console-5cbc7d7fb7-5xjxz                2/2     Running   0             15m
verrazzano-monitoring-operator-59b98dc7d-h7cpx     2/2     Running   0             16m
vmi-system-es-master-0                             2/2     Running   0             15m
vmi-system-grafana-57877dc795-kt4g4                2/2     Running   0             15m
vmi-system-kiali-c5d87bbcb-nltpc                   2/2     Running   0             16m
vmi-system-kibana-59696944f9-zgdmh                 2/2     Running   0             15m
vmi-system-prometheus-0-85ccc9bd57-7wjfk           3/3     Running   0             15m
weblogic-operator-6b7cff9f7d-6stgk                 2/2     Running   0             17m
Get the Console URLs
Retrieve the endpoints for the consoles installed by Verrazzano.
kubectl get vz -o yaml
Example Output (excerpt):
...
  conditions:
  - lastTransitionTime: "2022-08-17T10:01:58Z"
    message: Verrazzano install in progress
    status: "True"
    type: InstallStarted
  - lastTransitionTime: "2022-08-17T10:10:52Z"
    message: Verrazzano install completed successfully
    status: "True"
    type: InstallComplete
  instance:
    consoleUrl: https://verrazzano.default.144.24.175.97.nip.io
    elasticUrl: https://elasticsearch.vmi.system.default.144.24.175.97.nip.io
    grafanaUrl: https://grafana.vmi.system.default.144.24.175.97.nip.io
    keyCloakUrl: https://keycloak.default.144.24.175.97.nip.io
    kialiUrl: https://kiali.vmi.system.default.144.24.175.97.nip.io
    kibanaUrl: https://kibana.vmi.system.default.144.24.175.97.nip.io
    prometheusUrl: https://prometheus.vmi.system.default.144.24.175.97.nip.io
    rancherUrl: https://rancher.default.144.24.175.97.nip.io
...
It is also possible to use the jq tool to return the instance URLs more succinctly.
kubectl get vz -o jsonpath="{.items[].status.instance}" | jq .
Example Output:
[oracle@ocne-node01 ~]$ kubectl get vz -o jsonpath="{.items[].status.instance}" | jq .
{
  "consoleUrl": "https://verrazzano.default.144.24.175.97.nip.io",
  "elasticUrl": "https://elasticsearch.vmi.system.default.144.24.175.97.nip.io",
  "grafanaUrl": "https://grafana.vmi.system.default.144.24.175.97.nip.io",
  "keyCloakUrl": "https://keycloak.default.144.24.175.97.nip.io",
  "kialiUrl": "https://kiali.vmi.system.default.144.24.175.97.nip.io",
  "kibanaUrl": "https://kibana.vmi.system.default.144.24.175.97.nip.io",
  "prometheusUrl": "https://prometheus.vmi.system.default.144.24.175.97.nip.io",
  "rancherUrl": "https://rancher.default.144.24.175.97.nip.io"
}
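If only a single endpoint is needed, jq is not required; jsonpath can select one key directly. For example, to print just the console URL:
kubectl get vz -o jsonpath="{.items[].status.instance.consoleUrl}"; echo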
(Optional) Check the status with the Verrazzano CLI.
The commands below download, unpack, install, and run the vz CLI.
cd ~
curl -LO https://github.com/verrazzano/verrazzano/releases/download/v1.5.1/verrazzano-1.5.1-linux-amd64.tar.gz
tar xvf verrazzano-1.5.1-linux-amd64.tar.gz
sudo cp verrazzano-1.5.1/bin/vz /usr/local/bin
vz status
Example Output:
[oracle@ocne-node01 ~]$ vz status
Verrazzano Status
  Name: example-verrazzano
  Namespace: default
  Profile: dev
  Version: 1.5.1
  State: Ready
  Available Components: 23/23
  Access Endpoints:
    consoleUrl: https://verrazzano.default.152.70.166.236.nip.io
    grafanaUrl: https://grafana.vmi.system.default.152.70.166.236.nip.io
    keyCloakUrl: https://keycloak.default.152.70.166.236.nip.io
    kialiUrl: https://kiali.vmi.system.default.152.70.166.236.nip.io
    openSearchDashboardsUrl: https://osd.vmi.system.default.152.70.166.236.nip.io
    openSearchUrl: https://opensearch.vmi.system.default.152.70.166.236.nip.io
    prometheusUrl: https://prometheus.vmi.system.default.152.70.166.236.nip.io
    rancherUrl: https://rancher.default.152.70.166.236.nip.io
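The CLI can also report its own version, which doubles as a quick check that the copy to /usr/local/bin succeeded:
vz version
# Expect the reported version to match the 1.5.1 release downloaded above.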
Retrieve the Console Credentials
Verrazzano configures the various consoles during the installation with a default username and unique passwords. As these vary by product, use the details below to retrieve the username and password as required.
The following consoles are accessed with the same username and password:
- Grafana
- Prometheus
- OpenSearch Dashboards
- OpenSearch
- Kiali
The username for each console above is:
verrazzano
Retrieve the password:
kubectl get secret \
  --namespace verrazzano-system verrazzano \
  -o jsonpath={.data.password} | base64 \
  --decode; echo
Example Output:
[oracle@ocne-node01 ~]$ kubectl get secret \
> --namespace verrazzano-system verrazzano \
> -o jsonpath={.data.password} | base64 \
> --decode; echo
ENRzXfxOFaElTh9Q
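Because the same retrieval pattern repeats for each console below, you may find it convenient to wrap it in a small shell helper. This getpw function is a hypothetical convenience, not part of the lab:
getpw() {
  # $1 = namespace, $2 = secret name; prints the decoded password
  kubectl get secret --namespace "$1" "$2" \
    -o jsonpath={.data.password} | base64 --decode; echo
}
getpw verrazzano-system verrazzano   # equivalent to the command above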
Access for the Keycloak admin console.
The username is:
keycloakadmin
Retrieve the password:
kubectl get secret \
  --namespace keycloak keycloak-http \
  -o jsonpath={.data.password} | base64 \
  --decode; echo
Example Output:
[oracle@ocne-node01 ~]$ kubectl get secret \
> --namespace keycloak keycloak-http \
> -o jsonpath={.data.password} | base64 \
> --decode; echo
yMntUqM9Zq5iJxV
Access for the Rancher console.
The username is:
admin
The password can be retrieved using:
kubectl get secret \
  --namespace cattle-system rancher-admin-secret \
  -o jsonpath={.data.password} | base64 \
  --decode; echo
Example Output:
[oracle@ocne-node01 ~]$ kubectl get secret \
> --namespace cattle-system rancher-admin-secret \
> -o jsonpath={.data.password} | base64 \
> --decode; echo
zfS0l93_NBJqjs5XpCmF
Important: Keep the terminal open because these details will be used in the next section.
Access the Verrazzano, Keycloak and Rancher Consoles
Verrazzano provides several consoles; the next steps confirm that the login credentials retrieved in the last step work, starting with the Verrazzano console.
Ensure that the console URLs and passwords retrieved previously are available in the terminal (see below). If they are not, repeat the previous steps before proceeding.
While the Verrazzano consoles can be accessed in any order, the Verrazzano console makes an ideal place to start because several of the other consoles (Grafana, Prometheus, OpenSearch, OpenSearch Dashboards, and Kiali) use the same username and password.
Open the Verrazzano Console
Open the Verrazzano console by right-clicking the URL to the right of the consoleUrl value displayed in the terminal, and then clicking the Open Link option.
A new tab opens in the Luna Desktop's browser with a privacy notice. Click the button marked Advanced.
Click on the URL to continue to the console.
Repeat these last two steps if prompted similarly for the Keycloak URL. Keycloak handles the Single Sign-On (SSO) authentication for several of the consoles provided.
The Verrazzano console login screen is displayed.
Enter the following credentials:
The default Username of verrazzano (NOTE: the case IS important).
The first of the three password details retrieved in the previous section (Hint: the one from the --namespace verrazzano-system command).
Click on the Sign In button.
The Verrazzano Console is displayed.
The URLs under the section called System Telemetry require no further login credentials. The Rancher and Keycloak consoles, however, do require different credentials to be supplied; these will be accessed in the next sections.
Open the Keycloak Console
Click on the URL called Keycloak (under the section called General Information).
The Keycloak login screen is displayed.
Click on the heading labeled Administration Console.
Enter the following credentials:
The default Username of keycloakadmin (NOTE: the case IS important).
The second of the three password details retrieved in the previous section (Hint: the one from the --namespace keycloak command).
Click on the Sign In button.
The Keycloak console is displayed.
Important: Changing information within this console is not part of this lab. Information related to using Keycloak can be found in the Verrazzano documentation.
Open the Rancher Console
Click on the Verrazzano browser tab, and then click on the URL labeled Rancher (under the section labeled General Information).
A new tab opens in the Luna Desktop's browser with a privacy notice, click on the button marked Advanced.
Click on the URL to continue to the console.
The Rancher login screen is displayed.
Enter the following credentials:
The third of the three password details retrieved in the previous section (Hint: the one from the --namespace cattle-system rancher-admin-secret command).
Click on the Log in with Local User button.
The Welcome to Rancher! screen is displayed.
Check the checkbox next to I agree to the terms and conditions for using Rancher, and then click the Continue button.
The Rancher console is displayed.
Summary
This confirms that Verrazzano has been installed and configured correctly and is accepting requests successfully.