Use OCI Cloud Controller Manager on Oracle Cloud Native Environment
Introduction
The Kubernetes LoadBalancer Service exposes the Deployment externally using a cloud provider's load balancer. The dependent NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
This tutorial shows how to deploy the Oracle Cloud Infrastructure Cloud Controller Manager module (OCI-CCM module) within Oracle Cloud Native Environment to handle requests for an external LoadBalancer Service type. The Oracle Cloud Infrastructure Cloud Controller Manager module uses the open source oci-cloud-controller-manager project, which is a Kubernetes Cloud Controller Manager implementation (or out-of-tree cloud-provider) for Oracle Cloud Infrastructure (OCI).
Objectives
- Deploy the Oracle Cloud Infrastructure Cloud Controller Manager module
- Create a Deployment and LoadBalancer Service
- Verify access through the LoadBalancer Service
Prerequisites
An Oracle Linux 8 or later system with the following configuration:
- a non-root user with sudo privileges
- Oracle Cloud Native Environment installed and configured
Set up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
Information: The free lab environment deploys a compact Oracle Cloud Native Environment on the provided node, ready for creating environments. This deployment takes approximately 8-10 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
If not already connected, open a terminal and connect via ssh to the ocne-node01 system.
ssh oracle@<ip_address_of_ol_node>
Confirm the environment is ready.
kubectl get pods -A
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
externalip-validation-system   externalip-validation-webhook-7988bff847-8ws2v   1/1     Running   0          3m18s
kube-system                    coredns-7cbc77dbc7-qxqth                         1/1     Running   0          3m18s
kube-system                    coredns-7cbc77dbc7-r9bgj                         1/1     Running   0          3m18s
kube-system                    etcd-ocne-node01                                 1/1     Running   0          3m37s
kube-system                    kube-apiserver-ocne-node01                       1/1     Running   0          3m37s
kube-system                    kube-controller-manager-ocne-node01              1/1     Running   0          3m37s
kube-system                    kube-flannel-ds-vcwzn                            1/1     Running   0          3m18s
kube-system                    kube-proxy-7lx59                                 1/1     Running   0          3m18s
kube-system                    kube-scheduler-ocne-node01                       1/1     Running   0          3m37s
kubernetes-dashboard           kubernetes-dashboard-5d5d4947b5-7pffh            1/1     Running   0          3m18s
Open HealthCheck Port on Oracle Linux Firewall
When using a LoadBalancer Service associated with OCI-CCM, Kubernetes expects a health check endpoint to be available on port 10256. Therefore, kube-proxy creates a listener on this port so that the cloud provider load balancer can verify that kube-proxy is healthy. This health check is how the load balancer determines which nodes can have traffic routed to them.
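Optionally, you can confirm that kube-proxy is listening on this port before adjusting the firewall. This is only a sanity check, not a required step; /healthz is the standard kube-proxy health endpoint.

# Check for a listener on the kube-proxy health check port.
ss -ltn | grep 10256
# Query the health endpoint locally; an HTTP 200 response indicates kube-proxy is healthy.
curl -i http://localhost:10256/healthz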
Set the firewall rules for the node.
sudo firewall-cmd --add-port=10256/tcp --permanent
sudo firewall-cmd --reload
When working in a clustered environment, open this firewall port on all of the control plane and worker nodes.
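If you manage a multi-node cluster from a host with SSH access to every node, a short loop can apply the same rule everywhere. This is only a sketch; the node names below are hypothetical placeholders for your environment.

# Hypothetical node names; replace with your control plane and worker node hostnames.
for node in ocne-control01 ocne-worker01 ocne-worker02; do
  ssh oracle@"$node" "sudo firewall-cmd --add-port=10256/tcp --permanent && sudo firewall-cmd --reload"
done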
Deploy the Oracle Cloud Infrastructure Cloud Controller Manager Module
Append the helm and oci-ccm modules to the existing configuration file.

tee -a ~/myenvironment.yaml > /dev/null << 'EOF'
      - module: helm
        name: myhelm
        args:
          helm-kubernetes-module: mycluster
      - module: oci-ccm
        name: myoci
        oci-ccm-helm-module: myhelm
        oci-use-instance-principals: true
        oci-compartment:
        oci-vcn:
        oci-lb-subnet1:
EOF
The free lab environment uses Policies to allow the use of Instance Principals, which enable instances to act as authorized actors (or principals) that perform actions on service resources.
The Oracle Cloud Infrastructure Cloud Controller Manager module uses the option oci-use-instance-principals: true for authentication as a default setting.

For details on how to pass credential information for your Oracle Cloud Infrastructure tenancy rather than using Instance Principals, see the Using the Oracle Cloud Infrastructure Load Balancer documentation.
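Because Instance Principals rely on the instance being able to reach the Oracle Cloud Infrastructure instance metadata service, you can optionally confirm that the metadata endpoint responds from the node. This is only a sanity check and is not required by the module.

# Query the OCI instance metadata service (v2 requires the Bearer Oracle header).
curl -sH "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | head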
Add the required OCIDs within the configuration file.
Switch from the Terminal to the free lab desktop.
Open the Luna Lab details page using the Luna Lab icon from the free lab desktop.
Click the Oracle Cloud tab.
Scroll down and find the Compartment OCID and copy it.
Switch to the previous open Terminal.
Open the configuration file using your text editor of choice. Here we'll use vi.

vi ~/myenvironment.yaml
Enter vi insert mode by typing i.

Add the Compartment OCID to the end of the oci-compartment: line.
Note: Because this is YAML, remember to add a space before pasting the value.
Example:
      - module: oci-ccm
        name: myoci
        oci-ccm-helm-module: myhelm
        oci-use-instance-principals: true
        oci-compartment: ocid1.compartment.oc1..aaaaaaaamqicnhi7e6dj7fwtbiibfxxlzjpd3uf33f7a33gftzgpchrnuzna
        oci-vcn:
        oci-lb-subnet1:
Switch to the Luna Lab details page and click the Resources tab.
Find the vcn_ocid and copy it.
Switch back to the Terminal.
Add the vcn_ocid to the end of the oci-vcn: line.
Note: Because this is YAML, remember to add a space before pasting the value.
Example:
      - module: oci-ccm
        name: myoci
        oci-ccm-helm-module: myhelm
        oci-use-instance-principals: true
        oci-compartment: ocid1.compartment.oc1..aaaaaaaamqicnhi7e6dj7fwtbiibfxxlzjpd3uf33f7a33gftzgpchrnuzna
        oci-vcn: ocid1.vcn.oc1.eu-frankfurt-1.amaaaaaar5cqh7qam56nztotyx4xzhovuo7stl5dddlmdmubdcdam64sadka
        oci-lb-subnet1:
Switch back to the Luna Lab details page.
Find the vcn_subnet_ocid and copy it.
Switch back to the Terminal.
Add the vcn_subnet_ocid to the end of the oci-lb-subnet1: line.
Note: Because this is YAML, remember to add a space before pasting the value.
Example:
      - module: oci-ccm
        name: myoci
        oci-ccm-helm-module: myhelm
        oci-use-instance-principals: true
        oci-compartment: ocid1.compartment.oc1..aaaaaaaamqicnhi7e6dj7fwtbiibfxxlzjpd3uf33f7a33gftzgpchrnuzna
        oci-vcn: ocid1.vcn.oc1.eu-frankfurt-1.amaaaaaar5cqh7qam56nztotyx4xzhovuo7stl5dddlmdmubdcdam64sadka
        oci-lb-subnet1: ocid1.subnet.oc1.eu-frankfurt-1.aaaaaaaazq3yaeofyv3azmnzm2cxrilnfhpmvhark7xw5u6eo3574mtbzswa
Save and close the file. If using vi, you can do that by typing ESC, then :wq! and ENTER.
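As an alternative to editing the file by hand, the same three values can be inserted with sed. This is only a sketch; the placeholder values below are hypothetical and must be replaced with the OCIDs copied from the Luna Lab details page.

# Hypothetical placeholders; set these to the OCIDs from the Luna Lab details page.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"
VCN_OCID="ocid1.vcn.oc1..example"
SUBNET_OCID="ocid1.subnet.oc1..example"
# Append each OCID to the matching key in the configuration file.
sed -i "s|oci-compartment:.*|oci-compartment: ${COMPARTMENT_OCID}|" ~/myenvironment.yaml
sed -i "s|oci-vcn:.*|oci-vcn: ${VCN_OCID}|" ~/myenvironment.yaml
sed -i "s|oci-lb-subnet1:.*|oci-lb-subnet1: ${SUBNET_OCID}|" ~/myenvironment.yaml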
IMPORTANT: Ensure the OCIDs are correct for your environment. If the values are incorrect, the oci-ccm module installs, but fails to create a LoadBalancer when requested by the Service.

Create and install the modules.
olcnectl module create --config-file myenvironment.yaml
olcnectl module validate --config-file myenvironment.yaml
olcnectl module install --config-file myenvironment.yaml
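The install step can take a few minutes while the module deploys. One simple way to confirm that the Oracle Cloud Infrastructure Cloud Controller Manager pods started is to look for them across all namespaces; the exact pod names and namespace depend on the module release, so this is only a quick check.

# List any pods whose namespace or name mentions oci, such as the cloud controller manager pods.
kubectl get pods -A | grep -i oci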
Create Deployment and Service
Generate the configuration file for the deployment and service.
tee echo-oci-lb.yml > /dev/null << 'EOF'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  labels:
    app: echo1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: echo-lb-service
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "None"
    service.beta.kubernetes.io/oci-load-balancer-internal: "false"
    service.beta.kubernetes.io/oci-load-balancer-shape: "10Mbps"
spec:
  selector:
    app: echo1
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
EOF
Create the deployment and service.
kubectl create -f echo-oci-lb.yml
Verify Creation of Deployment and Service
Get a list of Deployments.
kubectl get deployment
Example Output:
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
echo-deployment   2/2     2            2           15s
Get a list of Services.
kubectl get service
Example Output:
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
echo-lb-service   LoadBalancer   10.108.35.18   <pending>     80:32162/TCP   23s
kubernetes        ClusterIP      10.96.0.1      <none>        443/TCP        13m
The <pending> value under EXTERNAL-IP for the echo-lb-service remains until the Oracle Cloud Infrastructure Cloud Controller Manager module creates and starts the OCI LoadBalancer.
Repeat running the kubectl get service command until the output shows the EXTERNAL-IP address. It should appear within 1-2 minutes at most.

The PORT(S) column shows the OCI LoadBalancer listening port (80) and the auto-generated port of the secondary Kubernetes NodePort Service.
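Instead of re-running the command manually, the Service can be watched until the address appears. Press Ctrl+C to stop watching.

# Watch the Service; the EXTERNAL-IP column updates once the OCI LoadBalancer is provisioned.
kubectl get service echo-lb-service --watch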
Get a list of Endpoints.
kubectl get endpoints
Example Output:
NAME              ENDPOINTS                         AGE
echo-lb-service   10.244.0.7:8080,10.244.0.8:8080   5m37s
kubernetes        10.0.0.140:6443                   18m
An Endpoints object is a resource referenced by a Kubernetes Service. It tracks the IP addresses and ports of the Pods backing the Service and is updated dynamically as those Pods change.
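You can confirm that these endpoint addresses are the Pod IPs from the Deployment by listing the Pods with their addresses. The label selector below matches the app: echo1 label used in the manifest.

# Show the echo Pods with their IP addresses for comparison against the Endpoints output.
kubectl get pods -l app=echo1 -o wide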
Verify Application
Test on a Control Plane Node
With a LoadBalancer Service type, the IP address and port to test are the EXTERNAL-IP address and port of the Service, which correspond to the OCI LoadBalancer Listener. The Listener sends the request to an OCI LoadBalancer Backend, which routes the traffic to the secondary NodePort Service on a specific node and from there to the Pod.
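To see the auto-generated NodePort that the OCI LoadBalancer Backend targets, you can read it from the Service spec; it matches the number after the colon in the PORT(S) column of the kubectl get service output.

# Print the auto-generated NodePort for the echo-lb-service.
kubectl get svc echo-lb-service -o jsonpath="{.spec.ports[0].nodePort}"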
Using this specific test requires a node where kubectl is installed.
Use JSONPath to assign the LoadBalancer listener IP address to a variable.
LB=$(kubectl get svc -o jsonpath="{.status.loadBalancer.ingress[0].ip}" echo-lb-service)
Use JSONPath to assign the LoadBalancer port to a variable.
LBPORT=$(kubectl get svc -o jsonpath="{.spec.ports[0].port}" echo-lb-service)
Test the application.
curl -i -w "\n" $LB:$LBPORT
Example Output:
[oracle@ocne-node01 ~]$ curl -i -w "\n" $LB:$LBPORT
HTTP/1.1 200 OK
Server: nginx/1.10.0
Date: Wed, 06 Jul 2022 16:41:23 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

CLIENT VALUES:
client_address=10.244.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://130.162.210.115:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.0.0.138
user-agent=curl/7.61.1
BODY:
-no body in request-
The IP address of 130.162.210.115 shown in the output above is the EXTERNAL-IP address for the OCI LoadBalancer. This address is different on each deployment of the lab. Optionally, verify the load balancer address by logging into the Cloud Console and navigating to Networking > Load Balancers.
Test from Luna Desktop
Using this specific test requires a node outside the Kubernetes cluster.
Find the EXTERNAL-IP value for the echo-lb-service.

kubectl get service
Example Output
[oracle@ocne-node01 ~]$ kubectl get service
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
echo-lb-service   LoadBalancer   10.102.1.165   130.162.210.115   80:31468/TCP   32s
kubernetes        ClusterIP      10.96.0.1      <none>            443/TCP        11m
Using the browser on the Luna Desktop, open a new tab.
Enter the value returned in the EXTERNAL-IP column and press Enter.

NOTE: The client_address shown in the output is the gateway IP address associated with the cni0 interface. The cni0 interface manages the Kubernetes Cluster Networking as covered in the Network Plugins upstream documentation.
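To see that gateway address on the node, you can optionally inspect the cni0 interface from the Terminal.

# Display the addresses assigned to the cni0 bridge interface.
ip addr show cni0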
Summary
We now understand how to create a LoadBalancer Service within Kubernetes. Check out and explore additional features of Oracle Cloud Native Environment using the Oracle Cloud Infrastructure Cloud Controller Manager module and the available Load Balancer Annotations.