Enable Istio Proxy Sidecar Injection in Oracle Cloud Native Environment
Introduction
Istio is a service mesh that provides a separate infrastructure layer for inter-service communication. Network communication is abstracted from the services themselves and handled by proxies. Istio uses a sidecar design, meaning that communication proxies run in their own containers beside every service container. An application's namespace must carry the istio-injection=enabled label for Istio's automatic sidecar injection to take effect.
Objectives
At the end of this tutorial, you should be able to do the following:
- Install the Istio module.
- Deploy an application without automatic proxy sidecar injection enabled.
- Remove the deployment.
- Enable automatic proxy sidecar injection.
- Deploy the same application again using an associated Istio sidecar proxy.
Prerequisites
- Minimum of a 9-node Oracle Cloud Native Environment cluster:
  - Operator node
  - 3 Kubernetes control plane nodes
  - 5 Kubernetes worker nodes
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts
- Installation of Oracle Cloud Native Environment
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e use_oci_ccm=true
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python under the python3.6 modules.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Install Istio
Open a terminal and connect via SSH to the ocne-operator node.
ssh oracle@<ip_address_of_node>
List the module instances.
olcnectl module instances --environment-name myenvironment
The output displays a list showing the Kubernetes and oci-ccm modules, the control plane, and worker nodes. The Istio module requires the oci-ccm module to handle the ingress gateway Load Balancer service type when running on Oracle Cloud Infrastructure.
Create a custom Istio configuration file.
This file creates the Istio ingress gateway by applying the appropriate Oracle Cloud Infrastructure Cloud Controller Manager (oci-ccm) module annotations to the istio-ingressgateway service. These specific annotations create a private load balancer with a flexible shape.
cat << EOF | tee istio-lb.yaml > /dev/null
components:
  ingressGateways:
    - name: istio-ingressgateway
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "None"
          service.beta.kubernetes.io/oci-load-balancer-internal: "true"
          service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
          service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
          service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "10"
EOF
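After the module installs (in the steps that follow), you can confirm these annotations landed on the istio-ingressgateway service. The sketch below checks a sample of the expected annotation data; the commented kubectl jsonpath query shows how you would fetch the real values on the cluster.

```shell
# Sample annotation data matching the configuration file above; on a live
# cluster, fetch the real values with (run from the operator node):
#   ssh ocne-control-01 "kubectl get service istio-ingressgateway \
#     -n istio-system -o jsonpath='{.metadata.annotations}'"
annotations='{"service.beta.kubernetes.io/oci-load-balancer-internal":"true","service.beta.kubernetes.io/oci-load-balancer-shape":"flexible"}'

# Confirm the service is annotated for an internal, flexible-shape load balancer
echo "$annotations" | grep -q 'oci-load-balancer-internal":"true"' \
  && echo "internal load balancer annotation present"
```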
Create the Istio Module
olcnectl module create --environment-name myenvironment --module istio --name myistio --istio-kubernetes-module mycluster --istio-profile=istio-lb.yaml
Install the Istio Module
olcnectl module install --environment-name myenvironment --name myistio
Note: This takes 3-5 minutes to complete.
Verify Istio is Running
The Istio module for Oracle Cloud Native Environment installs the grafana, prometheus-server, ingressgateway, and egressgateway components into the istio-system namespace for exclusive use by Istio.
List the resources created in the istio-system namespace and the related pod information.
ssh ocne-control-01 "kubectl get deployments,pods -n istio-system"
Example Output:
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana                1/1     1            1           17m
deployment.apps/istio-egressgateway    2/2     2            2           17m
deployment.apps/istio-ingressgateway   2/2     2            2           17m
deployment.apps/istiod                 2/2     2            2           17m
deployment.apps/prometheus-server      1/1     1            1           17m

NAME                                        READY   STATUS    RESTARTS   AGE
pod/grafana-67f4b94665-wtk6w                1/1     Running   0          17m
pod/istio-egressgateway-79c58b7b6d-n9bkf    1/1     Running   0          16m
pod/istio-egressgateway-79c58b7b6d-xh7vk    1/1     Running   0          17m
pod/istio-ingressgateway-67cfb76cdb-87dkh   1/1     Running   0          16m
pod/istio-ingressgateway-67cfb76cdb-jhv56   1/1     Running   0          17m
pod/istiod-64c96d75b6-8tzm8                 1/1     Running   0          17m
pod/istiod-64c96d75b6-gb65q                 1/1     Running   0          16m
pod/prometheus-server-64469994dc-c9lg8      2/2     Running   0          17m
List the services in the istio-system namespace.
ssh ocne-control-01 "kubectl get services -n istio-system"
Example Output:
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
grafana                ClusterIP      10.105.133.5     <none>        3000/TCP                                                                     19m
istio-egressgateway    ClusterIP      10.106.32.232    <none>        80/TCP,443/TCP,15443/TCP                                                     19m
istio-ingressgateway   LoadBalancer   10.106.165.33    10.0.0.59     15021:30466/TCP,80:30302/TCP,443:30633/TCP,15012:32766/TCP,15443:31690/TCP   19m
istiod                 ClusterIP      10.104.186.105   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        19m
prometheus-server      ClusterIP      10.99.210.8      <none>        9090/TCP                                                                     19m
Notice that the istio-ingressgateway is a LoadBalancer Service type. This Service type causes the oci-ccm module to generate a private load balancer based on the custom configuration we added to the Istio deployment.
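As a quick illustration of reading the assigned address out of that output, the sketch below uses awk on the sample service line from the example above; on a live cluster, the commented jsonpath query returns the same value directly.

```shell
# Sample istio-ingressgateway line from the example output above
svc_line='istio-ingressgateway   LoadBalancer   10.106.165.33   10.0.0.59   15021:30466/TCP,80:30302/TCP,443:30633/TCP   19m'

# EXTERNAL-IP is the fourth whitespace-separated column
external_ip=$(echo "$svc_line" | awk '{print $4}')
echo "$external_ip"

# On a live cluster, query the load balancer address directly:
#   ssh ocne-control-01 "kubectl get service istio-ingressgateway \
#     -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'"
```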
Create an NGINX Deployment
Create a new hello-world deployment that runs the nginx image.
ssh ocne-control-01 "kubectl create deployment --image ghcr.io/oracle/oraclelinux8-nginx:1.20 hello-world"
View the Kubernetes Pods
List the Pods in the default namespace.
ssh ocne-control-01 "kubectl get pods"
Example Output:
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-85678f8458-4qdzs   1/1     Running   0          94s
Note that the READY column contains 1/1, confirming that Istio automatic proxy sidecar injection is not enabled.
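The READY column reads as ready/total containers, so the container count itself reveals whether a sidecar is present. A small sketch of that check, using the value from the example output (the pod name in the commented live-cluster query is taken from the same example):

```shell
# READY value from the example output above (format: ready/total containers)
ready='1/1'

# The total container count is the part after the slash
total=${ready#*/}
if [ "$total" -eq 1 ]; then
  echo "no sidecar injected"
else
  echo "sidecar present"
fi

# On a live cluster, list the container names directly:
#   ssh ocne-control-01 "kubectl get pod hello-world-85678f8458-4qdzs \
#     -o jsonpath='{.spec.containers[*].name}'"
```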
Delete the NGINX Deployment
Delete the deployment.
ssh ocne-control-01 "kubectl delete deployments hello-world"
Enable Istio Automatic Proxy Sidecar Injection
Create the Istio sidecar injection label for the default namespace.
ssh ocne-control-01 "kubectl label namespace default istio-injection=enabled"
The output shows namespace/default labeled.
Confirm the ISTIO-INJECTION column shows a value of enabled.
ssh ocne-control-01 "kubectl get namespace -L istio-injection"
Example Output:
NAME                           STATUS   AGE     ISTIO-INJECTION
default                        Active   25m     enabled
externalip-validation-system   Active   23m
istio-system                   Active   7m33s
kube-node-lease                Active   25m
kube-public                    Active   25m
kube-system                    Active   25m
kubernetes-dashboard           Active   22m
ocne-modules                   Active   22m
Re-create the NGINX Deployment
Create the hello-world deployment that runs the nginx image.
ssh ocne-control-01 "kubectl create deployment --image ghcr.io/oracle/oraclelinux8-nginx:1.20 hello-world"
Confirm the Deployment Includes an Istio Sidecar
List the pods.
ssh ocne-control-01 "kubectl get pods"
Example Output:
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-85678f8458-s24zz   2/2     Running   0          105s
Note that the READY column contains 2/2, indicating that Istio automatic proxy sidecar injection is enabled and that the pods in the service mesh are running an Istio sidecar proxy.
Get the Pod details showing that an istio-proxy container deploys alongside the application.
ssh ocne-control-01 "kubectl describe pods <insert-pod-name-from-get-pods-command-here>"
The resultant output confirms that the deployment automatically deploys an Istio sidecar alongside it.
Example Output (Excerpt):
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  82s   default-scheduler  Successfully assigned default/hello-world-79654ff945-sn9cc to ocne-worker-04
  Normal  Pulled     82s   kubelet            Container image "container-registry.oracle.com/olcne/proxyv2:1.19.5" already present on machine
  Normal  Created    82s   kubelet            Created container istio-init
  Normal  Started    82s   kubelet            Started container istio-init
  Normal  Pulling    80s   kubelet            Pulling image "container-registry.oracle.com/olcne/nginx:1.17.7"
  Normal  Pulled     76s   kubelet            Successfully pulled image "container-registry.oracle.com/olcne/nginx:1.17.7" in 4.921s (4.921s including waiting)
  Normal  Created    76s   kubelet            Created container nginx
  Normal  Started    76s   kubelet            Started container nginx
  Normal  Pulled     76s   kubelet            Container image "container-registry.oracle.com/olcne/proxyv2:1.19.5" already present on machine
  Normal  Created    75s   kubelet            Created container istio-proxy
  Normal  Started    75s   kubelet            Started container istio-proxy
...
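The events show three containers starting: the istio-init init container, the application container (nginx), and the istio-proxy sidecar. A minimal sketch of counting them from the event messages in the excerpt; the commented query shows how to list the same containers on a live cluster (the pod name comes from the example output).

```shell
# "Started container" messages taken from the event excerpt above
events='Started container istio-init
Started container nginx
Started container istio-proxy'

# An injected pod starts three containers instead of one
started=$(echo "$events" | grep -c '^Started container')
echo "Containers started: $started"

# On a live cluster, list init and regular containers in one query:
#   ssh ocne-control-01 "kubectl get pod hello-world-79654ff945-sn9cc \
#     -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'"
```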
Disable Istio Automatic Proxy Sidecar Injection
Remove the Istio sidecar injection label from the default namespace.
ssh ocne-control-01 "kubectl label namespace default istio-injection-"
The output shows namespace/default unlabeled.
Confirm the removal of the label from the default namespace.
ssh ocne-control-01 "kubectl get namespace -L istio-injection"
Example Output:
NAME                           STATUS   AGE   ISTIO-INJECTION
default                        Active   32m
externalip-validation-system   Active   30m
istio-system                   Active   14m
kube-node-lease                Active   32m
kube-public                    Active   32m
kube-system                    Active   32m
kubernetes-dashboard           Active   29m
ocne-modules                   Active   30m
Delete the NGINX deployment.
ssh ocne-control-01 "kubectl delete deployments hello-world"
Summary
With the clean-up complete, you have seen how to enable and disable Istio automatic proxy sidecar injection for Kubernetes namespaces in an Oracle Cloud Native Environment cluster.