Run Kubernetes on Oracle Linux
Introduction
Kubernetes is Greek for pilot or helmsman - in other words, the person who follows commands and steers a ship toward its goal (rather than the captain giving the orders). Fittingly, Kubernetes is an open-source, extensible platform for deploying, managing, and scaling containerized applications. This tutorial uses one of its command-line tools, kubectl, together with YAML files that define the attributes the application requires, and shows how to set up and maintain the application once deployed.
All deployments onto a Kubernetes cluster get represented as objects. These deployed objects use text-based YAML files to provide details of the required state of any application deployed onto the cluster. These YAML files may describe the following:
- Which containerized applications to run on which nodes
- Details of the resources required by the application
- Any policies detailing how these applications maintain their state, such as restart policies, upgrade policies, etc.
Although essential, this third point is difficult to cover without first understanding the basics. Therefore, we'll hold off on discussing it and handle it in future tutorials.
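As a sketch of what such a YAML file might contain (the names and values below are illustrative only and are not used in this lab), a single Pod description could cover the first two points:

```yaml
# Illustrative sketch only: not part of this lab's deployment.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                       # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: ocne-node01   # which node runs the application
  containers:
  - name: web
    image: k8s.gcr.io/echoserver:1.4      # which containerized application to run
    resources:
      requests:
        memory: "64Mi"                    # resources the application requires
        cpu: "250m"
```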
This tutorial works with Kubernetes within an Oracle Cloud Native Environment on Oracle Linux. Its intent is not to be a 'one-stop shop' for everything needed to administer a production deployment but to introduce the necessary skills to deploy a working sample application successfully.
Objectives
- Examine the different Kubernetes components such as a Pod, Deployment, and Service
- Examine the different Kubernetes objects
- Deploy and Test a sample project
Prerequisites
- Installation of Oracle Cloud Native Environment
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne2
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e install_ocne_rpm=true -e create_ocne_cluster=true -e "ocne_cluster_node_options='-n 1 -w 1'"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
Open a terminal and connect via SSH to the ocne instance.
ssh oracle@<ip_address_of_node>
List the nodes in the cluster.
kubectl get nodes
The output shows the control plane and worker nodes in a Ready state along with their current Kubernetes version.
Create a Deployment on a Pod and Request Details
In Kubernetes, a Deployment is an object that governs a Pod's behavior and characteristics. Administrators use Deployments to declare the desired state of an application, and Kubernetes performs the work needed to reach that state.
The examples use an image containing a small nginx web server that echoes back details of the requests it receives, including the client's source IP and HTTP headers.
Create a deployment of echoserver.
kubectl create deployment test --image=k8s.gcr.io/echoserver:1.4
List all Pods in the cluster.
kubectl get pods
Example Output:
NAME                    READY   STATUS    RESTARTS   AGE
test-6c486b6d76-467p7   1/1     Running   0          53s
Note: The Pod name contains a suffix value that varies when deploying the Pod.
Use JSONPath to assign the Pod name to a variable.
TESTPOD=$(kubectl get pods -o jsonpath='{ $.items[*].metadata.name }')
Test the variable assignment.
The kubectl get pods command also accepts a Pod name as a parameter to display only that Pod's information.
kubectl get pods $TESTPOD
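The JSONPath expression above walks the JSON that kubectl returns. The same walk can be sketched offline with jq against a trimmed sample of that JSON (the sample below is illustrative, not real cluster output, and this assumes jq is installed):

```shell
# Offline sketch: '.items[].metadata.name' in jq mirrors the kubectl
# JSONPath query '{ $.items[*].metadata.name }'.
# The sample JSON below is illustrative only.
SAMPLE='{"items":[{"metadata":{"name":"test-6c486b6d76-467p7"}}]}'

# Extract every Pod name from the items array.
TESTPOD=$(printf '%s' "$SAMPLE" | jq -r '.items[].metadata.name')
echo "$TESTPOD"
```

Against a live cluster, the kubectl JSONPath form remains the more direct choice because it needs no extra tooling.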
Request selected information about the Pod.
kubectl get pod $TESTPOD --output custom-columns=NAME:metadata.name,NODE_IP:status.hostIP,POD_IP:status.podIP
Example Output:
NAME                    NODE_IP      POD_IP
test-6c486b6d76-467p7   10.0.0.140   10.244.0.7
Get the Pod details.
kubectl describe pod $TESTPOD
Example Output:
Name:             test-6c486b6d76-467p7
Namespace:        default
Priority:         0
Node:             ocne-node01/10.0.0.140
Start Time:       Tue, 28 Jun 2022 19:21:27 +0000
Labels:           app=test
                  pod-template-hash=6c486b6d76
Annotations:      <none>
Status:           Running
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Controlled By:  ReplicaSet/test-6c486b6d76
Containers:
  echoserver:
    Container ID:   cri-o://5b7866a27722ec0998cd9fe74945fb82b4dd9ed4c5c80671d9e8aa239c7008a4
    Image:          k8s.gcr.io/echoserver:1.4
    Image ID:       k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 28 Jun 2022 19:21:30 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d67ph (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-d67ph:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  21m   default-scheduler  Successfully assigned default/test-6c486b6d76-467p7 to ocne-node01
  Normal  Pulling    21m   kubelet            Pulling image "k8s.gcr.io/echoserver:1.4"
  Normal  Pulled     21m   kubelet            Successfully pulled image "k8s.gcr.io/echoserver:1.4" in 3.102843235s
  Normal  Created    21m   kubelet            Created container echoserver
  Normal  Started    21m   kubelet            Started container echoserver
Create a Deployment with a YAML File
Kubernetes Deployment manifests define how to deploy an application to the Kubernetes cluster and provide access to other Kubernetes functionality, such as self-healing, scalability, versioning, and rolling updates. This lab does not address that more complex functionality. Instead, it illustrates how to use a very basic manifest file to deploy an application.
Developers write a Deployment manifest file in either JSON or YAML. Although JSON can be used, YAML is far more popular due to its flexibility, readability, and ability to include descriptive comments to clarify aspects of the final deployment.
When running a Deployment, a Pod is updated through a series of declarative updates to reach the desired state for the running application.
While all details in the deployment.yaml are essential for Kubernetes to be able to enact the deployment request, the following highlights some of the more vital parts:
- The apiVersion field specifies the Kubernetes API version to use. Set this to apps/v1 if using an up-to-date version of Kubernetes.
- In this instance, the kind field informs Kubernetes to refer to a type of object called Deployment.
- The metadata section is used to outline details of the Deployment name and associated labels
- The .spec section is the most critical section of any Deployment manifest file. Anything from here on downwards relates to deploying the Pod. Anything below the .spec.template section describes the Pod template Kubernetes uses to manage the Deployment (in this example, it is a single container).
- Other fields not used in this example are the .spec.replicas field (which tells Kubernetes how many Pod replicas to deploy), and the .spec.strategy field (which tells Kubernetes how to update the Deployment).
See the upstream Deployments documentation for more information on these other fields.
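As a sketch of what those two optional fields might look like (the values below are illustrative and are not part of this lab's manifest):

```yaml
# Illustrative fragment: fields this lab's manifest omits.
spec:
  replicas: 3                 # run three identical Pods
  strategy:
    type: RollingUpdate       # replace Pods gradually during updates
    rollingUpdate:
      maxUnavailable: 25%     # at most a quarter of Pods down at once
      maxSurge: 25%           # at most a quarter extra Pods during rollout
```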
Create the Deployment file.
cat << 'EOF' | tee mydeployment.yaml > /dev/null
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
EOF
Deploy the application on a Pod using the Deployment manifest file.
kubectl apply -f mydeployment.yaml
List the Pod managed by the Deployment.
kubectl get pods -l app=echo1
Example Output:
NAME                     READY   STATUS    RESTARTS   AGE
echo1-7cbf6dfb96-4cgq7   1/1     Running   0          24s
- The -l or --selector= option provides a selector (label query) to filter on. This option supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2).

Note: Reminder that the Pod name contains a suffix value that varies when deploying the Pod.
Verify the Deployment succeeded.
kubectl get deploy echo1
Example Output:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
echo1   1/1     1            1           16m
- The deploy option is shorthand for deployments. The kubectl command allows an abbreviated syntax for many of its options. More details are available by running kubectl --help.
Return more detailed information for a deployment.
kubectl describe deploy echo1
Example Output:
Name:                   echo1
Namespace:              default
CreationTimestamp:      Tue, 28 Jun 2022 20:20:40 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=echo1
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=echo1
  Containers:
   echoserver:
    Image:        k8s.gcr.io/echoserver:1.4
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   echo1-7cbf6dfb96 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  23m   deployment-controller  Scaled up replica set echo1-7cbf6dfb96 to 1
Use ClusterIP Service
Although the echo1 Deployment is now running on a Pod, it is only useful if end users can access it within the cluster or over the network. A Service exposes a Deployment to the network and provides that access.
The default Kubernetes Service type is ClusterIP. A ClusterIP Service is not reachable from the internet, but you can reach it through the Kubernetes proxy. See the upstream documentation for more on Proxies.
This section exposes echo1 and creates inter-service communication within the cluster using an Oracle Linux Pod, demonstrating communication between your app's front-end and back-end components.
Get a list of nodes.
kubectl get nodes -o wide
Nodes are the physical systems or virtual machines used to deploy Pods.
Query kube-proxy mode.
Running kube-proxy in iptables mode causes packets sent to a ClusterIP Service never to be source NAT'd.
CP_IP=$(kubectl get nodes -o wide | grep control | awk '{ print $6 }')
curl -w "\n" http://$CP_IP:10249/proxyMode
- kube-proxy listens on port 10249 on the node where it's running.
Create the ClusterIP Service.
kubectl expose deployment echo1 --name=clusterip-service --port=80 --target-port=8080
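For reference, the kubectl expose command above is roughly equivalent to applying a manifest like this sketch (a simplified approximation; ClusterIP is the default Service type, so the type field could be omitted):

```yaml
# Approximate manifest form of the 'kubectl expose' command above.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
spec:
  type: ClusterIP             # default Service type
  selector:
    app: echo1                # matches the echo1 Deployment's Pods
  ports:
  - port: 80                  # port exposed inside the cluster
    targetPort: 8080          # port the echoserver container listens on
```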
Show the IP address assigned to the Service.
kubectl get svc clusterip-service
Assign the CLUSTER-IP to a variable.
CLUSTER_IP=$(kubectl get svc clusterip-service | awk 'FNR == 2 {print $3}')
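The awk filter in that command can be checked offline: FNR == 2 selects the second line (the data row under the header) and $3 prints the third column, CLUSTER-IP. The sample text below mimics kubectl get svc output with an illustrative IP value.

```shell
# Offline check of the awk filter used to capture the ClusterIP.
# Sample text mimicking 'kubectl get svc clusterip-service' output.
SAMPLE='NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
clusterip-service   ClusterIP   10.108.107.54   <none>        80/TCP    10s'

# FNR == 2 picks the data row; $3 is the CLUSTER-IP column.
CLUSTER_IP=$(printf '%s\n' "$SAMPLE" | awk 'FNR == 2 {print $3}')
echo "$CLUSTER_IP"
```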
Create a pod in the same cluster to access the ClusterIP Service.
kubectl run ol -it --image=oraclelinux:8 --restart=Never --rm -- curl -w "\n" $CLUSTER_IP
This command creates a Pod running an Oracle Linux container, runs cURL, and then removes the Pod.
Example Output:
[root@ol /]# curl -w "\n" 10.108.107.54
CLIENT VALUES:
client_address=10.244.0.9
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.108.107.54:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.108.107.54
user-agent=curl/7.61.1
BODY:
-no body in request-
The output shows a successful request from the Oracle Linux Pod for the echo1 Deployment using the ClusterIP Service.
Use NodePort Service with a YAML File
Previously, the echo1 Deployment was exposed using the kubectl expose command and accessed internally through its ClusterIP. Now, we'll use a NodePort Service, which makes echo1 accessible externally over the network.
The NodePort Service opens a specific port on all the nodes and forwards any traffic to that port to the Service.
Note: Standard practice does not recommend using NodePort for production systems for several reasons, principally the following:
- Each deployed Service requires a different port
- Nodes must be publicly accessible, which is not recommended
- No load balancing occurs across the nodes (in a multi-node Kubernetes cluster)
Define a Service file.
cat << 'EOF' | tee myservice.yaml > /dev/null
apiVersion: v1
kind: Service
metadata:
  name: echo1-nodeport
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32387
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
EOF
- type: - Makes the Service available to network requests from external clients. Valid values include NodePort and LoadBalancer.
- nodePort: - The external port used to access the Service.
- port: - The port number exposed within the cluster.
- targetPort: - The port on which the container listens.
Create the Service.
kubectl apply -f myservice.yaml
Note: Having the Deployment and Service definitions within the same YAML file is common practice to simplify application management. Using separate files in these steps is for training purposes only. When combining them into a single file, use the --- YAML syntax to separate them.
Display how Kubernetes stores the newly created Service.
kubectl get service echo1-nodeport -o yaml
Example Output:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"echo1-nodeport","namespace":"default"},"spec":{"ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"nodePort":32387,"port":80,"protocol":"TCP","targetPort":8080}],"selector":{"app":"echo1"},"sessionAffinity":"None","type":"NodePort"},"status":{"loadBalancer":{}}}
  creationTimestamp: "2022-06-29T00:14:30Z"
  name: echo1-nodeport
  namespace: default
  resourceVersion: "6242"
  uid: 3171dda6-05b8-45b8-a0ba-457eab6e4f71
spec:
  clusterIP: 10.100.17.53
  clusterIPs:
  - 10.100.17.53
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32387
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Describe the Service.
kubectl describe svc echo1-nodeport
Example Output:
Name:                     echo1-nodeport
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=echo1
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.17.53
IPs:                      10.100.17.53
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32387/TCP
Endpoints:                10.244.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Get the object Endpoints.
The Endpoints track the IP addresses of the Pods to which the Service sends traffic.
kubectl get endpoints echo1-nodeport
Example Output:
NAME             ENDPOINTS         AGE
echo1-nodeport   10.244.0.7:8080   8m39s
List the Pods running the application.
kubectl get pods --output=wide
Example Output:
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
echo1-7cbf6dfb96-mlds4   1/1     Running   0          80m   10.244.0.7   ocne-node01   <none>           <none>
test-6c486b6d76-v4htj    1/1     Running   0          83m   10.244.0.6   ocne-node01   <none>           <none>
The IP address in this listing for echo1 should match the value shown in the previous step for Endpoints, which is the Pod's IP address running on the specified node.
List the Services.
kubectl get svc -o wide
Example Output:
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
clusterip-service   ClusterIP   10.107.31.75   <none>        80/TCP         78m   app=echo1
echo1-nodeport      NodePort    10.100.17.53   <none>        80:32387/TCP   10m   app=echo1
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP        88m   <none>
This command uses the alternate option -o wide rather than --output=wide. The echo1-nodeport Service sets the NodePort to a value of 32387, as shown in the output.
Use JSONPath to assign the NodePort to a variable.
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services echo1-nodeport)
Use JSON and jq to assign the Node IP to a variable.
NODES=$(kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("echo1-") ).status.hostIP')
Note: JSONPath does not support regular expressions. To match using regular expressions, use a tool such as jq.
Use the node address and node port to verify the application.
curl -s $NODES:$NODEPORT
Example Output:
CLIENT VALUES:
client_address=10.244.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.0.0.140:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.0.0.140:32387
user-agent=curl/7.61.1
BODY:
The output shows the request from the local node routing through the NodePort Service, through kube-proxy, and to the Pod running the echo1 Deployment.
Remove Deployments and Services
Once done with a Service or Deployment, remove them from Kubernetes.
Remove Services.
kubectl delete svc clusterip-service echo1-nodeport
Remove Deployments.
kubectl delete deployments echo1
kubectl delete deploy test
Objects can be removed individually or in groups. For more information, see the Kubernetes Reference Manual.
Next Steps
This tutorial provided a brief introduction to the functionality that a cloud-native orchestrator such as Kubernetes delivers to help an organization manage its container deployments. These exercises are the first step on what will likely be a long journey into the flexibility that Kubernetes can deliver.