Run the Ingress-Nginx Controller using MetalLB on Oracle Cloud Native Environment
Introduction
Network load balancers such as MetalLB provide a method of externally exposing Kubernetes applications. A Kubernetes LoadBalancer service creates a network load balancer that provides and exposes an external IP address for connecting to an application from outside the cluster.
With the external IP address exposed, administrators of a Kubernetes cluster can then manage access to applications via an Ingress. An Ingress is a Kubernetes API object that manages external access to a cluster's services. The Ingress-Nginx Controller uses NGINX as a reverse proxy and load balancer that can load-balance WebSocket, gRPC, TCP, and UDP applications.
Objectives
In this tutorial, you will learn:
- How to install and configure MetalLB
- How to install and access the Ingress-Nginx Controller
Prerequisites
- Installation of Oracle Cloud Native Environment
- A single control plane node and one worker node
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.

git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne2
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e install_ocne_rpm=true -e create_ocne_cluster=true -e "ocne_cluster_node_options='-n 1 -w 1'"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.

The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
Open a terminal and connect via SSH to the ocne instance.
ssh oracle@<ip_address_of_node>
List the nodes in the cluster.
kubectl get nodes
The output confirms that the control plane node and the worker node are in a Ready state.
Install MetalLB and the Ingress-Nginx Controller
Using the Ingress-Nginx Controller requires a properly working LoadBalancer service. When using the Oracle Cloud Native Environment libvirt provider to create our cluster, we can leverage MetalLB to provide this functionality. If using the oci provider, you can use oci-ccm instead.
Search the Oracle catalog for the requested applications.
ocne catalog search | grep 'metallb\|nginx'
Install the metallb application.
ocne application install --release metallb --namespace metallb --name metallb
Install the ingress-nginx application.
ocne application install --release ingress-nginx --namespace ingress-nginx --name ingress-nginx
Verify the deployments exist.
kubectl get deployment -A
The output shows that the metallb-controller is ready; however, the ingress-nginx-controller is not. This behavior occurs because we still need to configure MetalLB.
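If you want to see why the controller is stuck, one way (a sketch; the exact output depends on your cluster state) is to check its LoadBalancer service. Until MetalLB can hand out an address, the EXTERNAL-IP column stays `<pending>`:

```shell
# Inspect the ingress-nginx LoadBalancer service; without an address pool,
# the EXTERNAL-IP column shows <pending>.
kubectl get svc ingress-nginx-controller -n ingress-nginx

# The events at the end of the describe output often explain the delay.
kubectl describe svc ingress-nginx-controller -n ingress-nginx | tail -n 5
```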
Create an Address Pool for MetalLB
MetalLB assigns addresses from this pool to the services that request them. This example uses a single address to keep the demonstration simple.
Set the IP address.
The IP address must exist in the range KVM uses to assign IP addresses to the libvirt virtual machines of your Oracle Cloud Native Environment cluster. The default KVM NAT-based network is typically 192.168.122.0/24.
export IP=192.168.122.250
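To confirm the subnet your libvirt default network actually uses (a quick check, assuming virsh is installed on the host), you can dump the network definition and look at its IP element:

```shell
# Print the IP configuration of the libvirt "default" network; the
# <ip address=...> element shows the gateway address and netmask that
# KVM assigns guest addresses from.
virsh net-dumpxml default | grep -A 2 "<ip "
```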
Apply the IPAddressPool.
kubectl apply -f - << EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: registry-pool
  namespace: metallb
spec:
  addresses:
  - $IP/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: registry
  namespace: metallb
spec:
  ipAddressPools:
  - registry-pool
EOF
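You can confirm MetalLB accepted the configuration by listing the two custom resources just created:

```shell
# Both resources should appear in the metallb namespace.
kubectl get ipaddresspool,l2advertisement -n metallb
```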
Verify the ingress-nginx-controller service is running.
Since MetalLB has an address pool, it can provide an address to the Ingress-Nginx controller.
kubectl get deployment -n ingress-nginx
Rerun the command until the output shows the ingress-nginx-controller status as ready.
Confirm the LoadBalancer service has an external IP address.
kubectl get svc -n ingress-nginx
The EXTERNAL-IP column shows the IP address from our MetalLB address pool.
Use the Ingress
Test the Ingress by creating two services and using the Ingress-Nginx Controller to demonstrate how it routes each request to the correct deployment. We'll use the http-echo container as the web application, which lets each service return a slightly different response.
Create the first Pod.
cat << EOF | tee coffee.yaml > /dev/null
kind: Pod
apiVersion: v1
metadata:
  name: coffee-app
  labels:
    app: coffee
spec:
  containers:
  - name: coffee-app
    image: docker.io/hashicorp/http-echo
    args:
    - "-text=coffee"
---
kind: Service
apiVersion: v1
metadata:
  name: coffee-service
spec:
  selector:
    app: coffee
  ports:
  - port: 5678 # Default port for image
EOF
Create the second Pod.
cat << EOF | tee tea.yaml > /dev/null
kind: Pod
apiVersion: v1
metadata:
  name: tea-app
  labels:
    app: tea
spec:
  containers:
  - name: tea-app
    image: docker.io/hashicorp/http-echo
    args:
    - "-text=tea"
---
kind: Service
apiVersion: v1
metadata:
  name: tea-service
spec:
  selector:
    app: tea
  ports:
  - port: 5678 # Default port for image
EOF
Create the resources.
kubectl apply -f coffee.yaml
kubectl apply -f tea.yaml
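Before creating the Ingress, it helps to confirm the Pods are running and that each Service resolves to its Pod:

```shell
# Both Pods should report a Running status.
kubectl get pods coffee-app tea-app

# Each Service should list one endpoint (its Pod's IP and port).
kubectl get endpoints coffee-service tea-service
```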
Create the Ingress definition file.
Next, you will create an Ingress definition to route incoming requests to either the /coffee or the /tea service.

cat << EOF | tee ingress.yaml > /dev/null
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-service
            port:
              number: 5678
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-service
            port:
              number: 5678
EOF
Create the Ingress.
kubectl create -f ingress.yaml
Verify the creation of the Ingress.
watch kubectl get ingress demo-ingress
Wait for the IP address of the Ingress to appear. Then, exit the watch command using Ctrl-C.

Assign the Ingress load balancer IP address to a variable.
INGRESS=$(kubectl get ingress demo-ingress -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
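A quick optional sanity check confirms the variable captured an address before you use it in the curl commands below:

```shell
# Print the captured address; "<not set>" means the Ingress has no IP yet.
echo "Ingress IP: ${INGRESS:-<not set>}"
```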
Verify the Ingress.
Test that everything works as expected. First, test the coffee service:
curl -kL http://$INGRESS/coffee
Next, test the tea service:
curl -kL http://$INGRESS/tea
Last, test what happens when you request a non-existent service:
curl -kL http://$INGRESS/biscuit
You will get a 404 error message because there is no mapping or application for biscuit.
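If you prefer to check the status code directly rather than read the error page, a small curl variation (using the INGRESS variable set earlier) prints only the HTTP code:

```shell
# -s silences progress output, -o /dev/null discards the body, and -w
# prints the status code; a path with no Ingress rule returns 404.
curl -s -o /dev/null -w "%{http_code}\n" http://$INGRESS/biscuit
```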
Next Steps
A Kubernetes Ingress provides a robust way to expose the services deployed on your Oracle Cloud Native Environment cluster to your users. Rules you define in the Ingress resource determine how HTTP and HTTPS traffic is routed. An Ingress does not support non-HTTP(S) protocols; if you wish to expose a non-HTTP(S) service, use either a LoadBalancer or NodePort service type instead.
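As a sketch of that alternative, a plain TCP service can be exposed directly through MetalLB with a LoadBalancer-type Service (the service name and port below are hypothetical, not part of this lab):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo            # hypothetical service name
spec:
  type: LoadBalancer        # MetalLB assigns an external IP from its pool
  selector:
    app: tcp-echo           # hypothetical Pod label
  ports:
  - port: 9000              # hypothetical TCP port
    targetPort: 9000
```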