Use OCI Cloud Controller Manager on Oracle Cloud Native Environment
Introduction
The Kubernetes LoadBalancer Service exposes the Deployment externally using a cloud provider's load balancer. The dependent NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
This tutorial shows how to deploy the Oracle Cloud Infrastructure Cloud Controller Manager module (OCI-CCM module) within Oracle Cloud Native Environment to handle requests for an external LoadBalancer Service type. The Oracle Cloud Infrastructure Cloud Controller Manager module uses the open source oci-cloud-controller-manager project, which is a Kubernetes Cloud Controller Manager implementation (or out-of-tree cloud-provider) for Oracle Cloud Infrastructure (OCI).
Objectives
At the end of this tutorial, you should be able to do the following:
- Deploy the Oracle Cloud Infrastructure Cloud Controller Manager module
- Create a Deployment and LoadBalancer Service
- Verify access through the LoadBalancer Service
Prerequisites
An Oracle Linux instance with the following configuration:
- a non-root user with sudo privileges
- Oracle Cloud Native Environment installed and configured
Set up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
Information: The free lab environment deploys a compact Oracle Cloud Native Environment on the provided node, ready for creating environments. This deployment takes approximately 10-15 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
If not already connected, open a terminal and connect via ssh to the ocne-node01 system.
ssh oracle@<ip_address_of_ol_node>
Confirm the environment is ready.
kubectl get pods -A
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
externalip-validation-system   externalip-validation-webhook-7988bff847-8ws2v   1/1     Running   0          3m18s
kube-system                    coredns-7cbc77dbc7-qxqth                         1/1     Running   0          3m18s
kube-system                    coredns-7cbc77dbc7-r9bgj                         1/1     Running   0          3m18s
kube-system                    etcd-ocne-node01                                 1/1     Running   0          3m37s
kube-system                    kube-apiserver-ocne-node01                       1/1     Running   0          3m37s
kube-system                    kube-controller-manager-ocne-node01              1/1     Running   0          3m37s
kube-system                    kube-flannel-ds-vcwzn                            1/1     Running   0          3m18s
kube-system                    kube-proxy-7lx59                                 1/1     Running   0          3m18s
kube-system                    kube-scheduler-ocne-node01                       1/1     Running   0          3m37s
kubernetes-dashboard           kubernetes-dashboard-5d5d4947b5-7pffh            1/1     Running   0          3m18s
Open HealthCheck Port on Oracle Linux Firewall
When using a LoadBalancer Service associated with OCI-CCM, Kubernetes expects a health check endpoint to be available on port 10256. Therefore, kube-proxy creates a listener on this port so that the cloud provider load balancer can verify that kube-proxy is healthy. This health check is how the load balancer determines which nodes can have traffic routed to them.
Set the firewall rules for the node.
sudo firewall-cmd --add-port=10256/tcp --permanent
sudo firewall-cmd --reload
When working in a clustered environment, open this firewall port on all of the control plane and worker nodes.
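In a larger cluster, the same two commands can be scripted across every node over ssh. The sketch below is a dry run: it only prints the commands it would issue, and the hostnames are placeholders you would replace with your own control plane and worker nodes.

```shell
# Dry-run sketch: print the firewall commands to run on each cluster node.
# The hostnames below are placeholders -- substitute your own nodes, then
# remove the leading "echo" to execute the commands for real.
NODES="ocne-control01 ocne-worker01 ocne-worker02"

for node in $NODES; do
  echo ssh "${node}" "sudo firewall-cmd --add-port=10256/tcp --permanent && sudo firewall-cmd --reload"
done
```

Reviewing the printed commands before running them is a cheap safeguard when a typo in a firewall rule can silently break load balancer health checks.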
Deploy the Oracle Cloud Infrastructure Cloud Controller Manager Module
Determine the Oracle Cloud Infrastructure authentication method.
The free lab environment uses Policies to allow using Instance Principals, which enable instances to be authorized actors (or principals) to perform actions on service resources.
The Oracle Cloud Infrastructure Cloud Controller Manager module uses the option oci-use-instance-principals: true for authentication as its default setting. For details on passing credential information for your Oracle Cloud Infrastructure tenancy rather than using Instance Principals, see the Using the Oracle Cloud Infrastructure Load Balancer documentation.
Add the required Oracle Cloud IDs (OCIDs) to environment variables.
These OCID values are required to instruct the oci-ccm module how to communicate with the specified compartment within a tenancy. For details on OCIDs, see the OCI Resource Identifiers documentation.
The free lab environment adds the following environment variables to the oracle user's .bashrc file:
- COMPARTMENT_OCID: the Compartment within which the cluster resides.
- VCN_OCID: the Virtual Cloud Network (VCN) within which the cluster resides.
- LB_SUBNET_OCID: the VCN subnet to which load balancers are added.
IMPORTANT: Ensure the OCIDs are correct for your environment. If the values are incorrect, the oci-ccm module installs, but fails to create a load balancer when one is requested by the Service.
Confirm the OCID environment variables exist.
env | grep OCID
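Beyond confirming the variables exist, a quick syntax check can catch copy/paste mistakes before the module is created. This is a minimal sketch assuming the general OCID layout ocid1.&lt;type&gt;.&lt;realm&gt;.[region].&lt;unique-id&gt;; the sample values below are illustrative, not real OCIDs.

```shell
# Minimal sanity check for OCID syntax: ocid1.<type>.<realm>.[region].<unique-id>
# (the region segment is empty for some resource types, such as compartments).
check_ocid() {
  echo "$1" | grep -Eq '^ocid1\.[a-z0-9]+\.[a-z0-9]+\.[a-z0-9-]*\.[a-z0-9]+$' \
    && echo "$2: looks like a valid OCID" \
    || echo "$2: does NOT look like a valid OCID"
}

# Illustrative values only -- in the lab, check "$COMPARTMENT_OCID",
# "$VCN_OCID", and "$LB_SUBNET_OCID" instead.
check_ocid "ocid1.compartment.oc1..aaaabbbbcccc" COMPARTMENT_OCID
check_ocid "not-an-ocid" BROKEN_EXAMPLE
```

A format check like this only validates the shape of the string; it cannot confirm the OCID refers to the right compartment, VCN, or subnet in your tenancy.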
To avoid having to use the --api-server flag in future olcnectl commands, get a list of the module instances and add the --update-config flag.
olcnectl module instances \
  --config-file myenvironment.yaml \
  --update-config
Create the module.
olcnectl module create \
  --environment-name myenvironment \
  --module oci-ccm \
  --name myoci \
  --oci-ccm-kubernetes-module mycluster \
  --oci-use-instance-principals true \
  --oci-compartment $COMPARTMENT_OCID \
  --oci-vcn $VCN_OCID \
  --oci-lb-subnet1 $LB_SUBNET_OCID
Install the module.
olcnectl module install \
  --environment-name myenvironment \
  --name myoci
Create Deployment and Service
Generate a configuration file for the Deployment and Service.
tee echo-oci-lb.yml > /dev/null << 'EOF'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  labels:
    app: echo1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: echo-lb-service
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "None"
    service.beta.kubernetes.io/oci-load-balancer-internal: "false"
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10Mbps"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "10Mbps"
spec:
  selector:
    app: echo1
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF
Create the Deployment and Service.
kubectl create -f echo-oci-lb.yml
Verify Creation of Deployment and Service
Get a list of Deployments.
kubectl get deployment
Example Output:
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
echo-deployment   2/2     2            2           15s
Get a list of Services.
kubectl get service
Example Output:
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
echo-lb-service   LoadBalancer   10.108.35.18   <pending>     80:32162/TCP   23s
kubernetes        ClusterIP      10.96.0.1      <none>        443/TCP        13m
The <pending> value under EXTERNAL-IP for the echo-lb-service remains until the Oracle Cloud Infrastructure Cloud Controller Manager module creates and starts the OCI load balancer.
Repeat the kubectl get service command until the output shows the EXTERNAL-IP address. It should appear within 1-2 minutes.
The PORT(S) column shows the OCI load balancer listening port (80) and the auto-generated port of the secondary Kubernetes NodePort Service.
Get a list of Endpoints.
kubectl get endpoints
Example Output:
NAME              ENDPOINTS                         AGE
echo-lb-service   10.244.0.7:8080,10.244.0.8:8080   5m37s
kubernetes        10.0.0.140:6443                   18m
An Endpoint is a resource referenced by a Kubernetes Service. It tracks the IP addresses and ports of the Pods that match the Service's selector and updates dynamically as Pods are created and removed.
Verify Application
Test on a Control Plane Node
With a LoadBalancer Service type, the IP address and port to test is the EXTERNAL-IP address and port of the service, which is the OCI LoadBalancer Listener. The request is then sent to the OCI LoadBalancer Backend. The OCI LoadBalancer Backend then routes traffic to the secondary NodePort Service running on the specific node and then to the Pod.
Using this specific test requires a node where kubectl exists.
Use JSONPath to assign the LoadBalancer listener IP address to a variable.
LB=$(kubectl get svc -o jsonpath="{.status.loadBalancer.ingress[0].ip}" echo-lb-service)
Use JSONPath to assign the LoadBalancer port to a variable.
LBPORT=$(kubectl get svc -o jsonpath="{.spec.ports[0].port}" echo-lb-service)
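To see what those JSONPath expressions select, the sketch below extracts the same .status.loadBalancer.ingress[0].ip field from a trimmed sample of the Service JSON using sed. The sample is hand-written from this lab's example output; the IP will differ on your deployment.

```shell
# A trimmed, hand-written sample of what "kubectl get svc echo-lb-service -o json"
# returns; the IP below is this lab's example value and varies per deployment.
cat > /tmp/echo-lb-service.json <<'EOF'
{
  "spec": { "ports": [ { "name": "http", "port": 80, "targetPort": 8080 } ] },
  "status": { "loadBalancer": { "ingress": [ { "ip": "130.162.210.115" } ] } }
}
EOF

# Same field the JSONPath query {.status.loadBalancer.ingress[0].ip} reads.
LB_SAMPLE=$(sed -n 's/.*"ip": "\([0-9.]*\)".*/\1/p' /tmp/echo-lb-service.json)
echo "$LB_SAMPLE"
```

In practice the kubectl JSONPath form is preferable because it parses the JSON structurally rather than by text pattern; this local version only illustrates which field is being read.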
Test the application.
curl -i -w "\n" $LB:$LBPORT
Example Output:
[oracle@ocne-node01 ~]$ curl -i -w "\n" $LB:$LBPORT
HTTP/1.1 200 OK
Server: nginx/1.10.0
Date: Wed, 06 Jul 2022 16:41:23 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

CLIENT VALUES:
client_address=10.244.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://130.162.210.115:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.0.0.138
user-agent=curl/7.61.1
BODY:
-no body in request-
The IP address 130.162.210.115 shown in the output above is the EXTERNAL-IP address for the OCI load balancer. This address is different on each deployment of the lab. Optionally, verify the load balancer address by logging into the Cloud Console and navigating to Networking > Load Balancers.
Test from Luna Desktop
Using this specific test requires a node outside the Kubernetes cluster.
Find the EXTERNAL-IP value for the echo-lb-service.
kubectl get service
Example Output
[oracle@ocne-node01 ~]$ kubectl get service
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
echo-lb-service   LoadBalancer   10.102.1.165   130.162.210.115   80:31468/TCP   32s
kubernetes        ClusterIP      10.96.0.1      <none>            443/TCP        11m
Using the browser on the Luna Desktop, open a new tab.
Enter the value returned in the EXTERNAL-IP column and press Enter.
NOTE: The client_address shown in the output is the gateway IP address associated with the cni0 interface. The cni0 interface manages the Kubernetes cluster networking, as covered in the Network Plugins upstream documentation.
Summary
You now understand how to create a LoadBalancer Service within Kubernetes. Check out and explore additional features of Oracle Cloud Native Environment using the Oracle Cloud Infrastructure Cloud Controller Manager module and the available Load Balancer Annotations.