Run Kubernetes on Oracle Linux
Introduction
Kubernetes is Greek for pilot or helmsman - in other words, the person who follows commands and steers a ship towards its ultimate goal (rather than being the Captain giving orders). To that end, Kubernetes is an open-source, extensible platform for deploying, managing, and scaling containerized applications. It achieves this through several command-line tools. This lab uses one of them, kubectl, together with YAML files that define the required attributes of the application being deployed, and shows how to set up and maintain the application after deploying it.
All deployments onto a Kubernetes cluster are represented as objects. These deployed objects use text-based YAML files to describe the desired state of any application deployed onto the cluster. These YAML files may describe the following:
- Which containerized applications to run on which nodes
- Details of the resources required by the application
- Any policies detailing how these applications maintain their state, such as restart policies, upgrade policies, etc.
This third point, although important, is complicated without understanding the basics. Therefore, we'll hold off for now and handle that topic in future tutorials.
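As a minimal illustration of what such a manifest can express (this example is not used later in the lab; the name and resource values are hypothetical), a Pod manifest touching on all three points might look like the following sketch:

```yaml
# Hypothetical Pod manifest illustrating the points above
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # hypothetical name
spec:
  nodeName: ocne-node01        # which node runs this Pod
  restartPolicy: Always        # policy for maintaining the Pod's state
  containers:
  - name: web
    image: k8s.gcr.io/echoserver:1.4
    resources:                 # resources required by the application
      requests:
        memory: "64Mi"
        cpu: "250m"
```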
This tutorial works with Kubernetes running within a compact Oracle Cloud Native Environment on Oracle Linux. The intent is not to be a 'one stop shop' for everything needed to administer a production deployment. Instead, it introduces the skills required to deploy a working sample application.
Objectives
- Examine the different Kubernetes components such as a Pod, Deployment, and Service
- Examine the different Kubernetes objects
- Deploy and Test a sample project
Prerequisites
An Oracle Linux 8 or later system with the following configuration:
- A non-root user with sudo privileges
- Oracle Cloud Native Environment installed and configured
Set up Lab Environment
Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.
Information: The free lab environment deploys Oracle Cloud Native Environment on the provided node, ready for creating environments. This deployment takes approximately 8-10 minutes to finish after launch. Therefore, you might want to step away while this runs and then return to complete the lab.
If not already connected, open a terminal and connect via ssh to the ocne-node01 system.
ssh oracle@<ip_address_of_ol_node>
Confirm the environment is ready.
kubectl get pods -A
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS   AGE
externalip-validation-system   externalip-validation-webhook-7988bff847-8ws2v   1/1     Running   0          3m18s
kube-system                    coredns-7cbc77dbc7-qxqth                         1/1     Running   0          3m18s
kube-system                    coredns-7cbc77dbc7-r9bgj                         1/1     Running   0          3m18s
kube-system                    etcd-ocne-node01                                 1/1     Running   0          3m37s
kube-system                    kube-apiserver-ocne-node01                       1/1     Running   0          3m37s
kube-system                    kube-controller-manager-ocne-node01              1/1     Running   0          3m37s
kube-system                    kube-flannel-ds-vcwzn                            1/1     Running   0          3m18s
kube-system                    kube-proxy-7lx59                                 1/1     Running   0          3m18s
kube-system                    kube-scheduler-ocne-node01                       1/1     Running   0          3m37s
kubernetes-dashboard           kubernetes-dashboard-5d5d4947b5-7pffh            1/1     Running   0          3m18s
Create a Deployment on a Pod and Request Details
In Kubernetes, a Deployment is an object that governs the behavior and characteristics of a set of Pods. Administrators use Deployments to declare the desired state of an application, and Kubernetes performs the tasks needed to reach and maintain that state.
The examples use an image containing a small nginx webserver that echoes back the source IP of requests it receives through an HTTP header.
Create a deployment of echoserver.
kubectl create deployment test --image=k8s.gcr.io/echoserver:1.4
List all Pods in the cluster.
kubectl get pods
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-6c486b6d76-467p7   1/1     Running   0          53s
Note: The Pod name contains a suffix value that varies each time the Pod is deployed.
Use JSONPath to assign the Pod name to a variable.
TESTPOD=$(kubectl get pods -o jsonpath='{ $.items[*].metadata.name }')
Test the variable assignment.
The kubectl get pods command also accepts a Pod name as a parameter to display only the information for that Pod.
kubectl get pods $TESTPOD
Request selected information about the Pod.
kubectl get pod $TESTPOD --output custom-columns=NAME:metadata.name,NODE_IP:status.hostIP,POD_IP:status.podIP
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pod $TESTPOD --output custom-columns=NAME:metadata.name,NODE_IP:status.hostIP,POD_IP:status.podIP
NAME                    NODE_IP      POD_IP
test-6c486b6d76-467p7   10.0.0.140   10.244.0.7
Get the Pod details.
kubectl describe pod $TESTPOD
Example Output:
[oracle@ocne-node01 ~]$ kubectl describe pod test-6c486b6d76-467p7
Name:         test-6c486b6d76-467p7
Namespace:    default
Priority:     0
Node:         ocne-node01/10.0.0.140
Start Time:   Tue, 28 Jun 2022 19:21:27 +0000
Labels:       app=test
              pod-template-hash=6c486b6d76
Annotations:  <none>
Status:       Running
IP:           10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/test-6c486b6d76
Containers:
  echoserver:
    Container ID:   cri-o://5b7866a27722ec0998cd9fe74945fb82b4dd9ed4c5c80671d9e8aa239c7008a4
    Image:          k8s.gcr.io/echoserver:1.4
    Image ID:       k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 28 Jun 2022 19:21:30 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d67ph (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-d67ph:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  21m   default-scheduler  Successfully assigned default/test-6c486b6d76-467p7 to ocne-node01
  Normal  Pulling    21m   kubelet            Pulling image "k8s.gcr.io/echoserver:1.4"
  Normal  Pulled     21m   kubelet            Successfully pulled image "k8s.gcr.io/echoserver:1.4" in 3.102843235s
  Normal  Created    21m   kubelet            Created container echoserver
  Normal  Started    21m   kubelet            Started container echoserver
Create a Deployment with a YAML File
Kubernetes Deployment manifests define how to deploy an application to the Kubernetes cluster and provide access to other Kubernetes functionality, such as self-healing, scalability, versioning, and rolling updates. This lab does not address that more complex functionality. Instead, it illustrates how to use a very basic manifest file to deploy an application.
A Deployment manifest file is written in either JSON or YAML. Although JSON is possible, YAML is far more popular due to its flexibility, readability, and ability to include descriptive comments that clarify aspects of the final deployment.
When running a Deployment, a Pod is updated through a series of declarative updates to reach the desired state for the running application.
While all details in the deployment.yaml are essential for Kubernetes to be able to enact the deployment request, the following highlights some of the more vital parts:
- The apiVersion field specifies the Kubernetes API version to use. Set this to apps/v1 if using an up-to-date version of Kubernetes.
- In this instance, the kind field informs Kubernetes to refer to a type of object called Deployment.
- The metadata section outlines details of the Deployment name and its associated labels.
- The .spec section is probably the most critical section of any deployment manifest file. Anything from here on downwards relates to deploying the Pod. Anything below the .spec.template section describes the Pod template Kubernetes uses to manage the Deployment (in this example, it is a single container).
- Other fields not used in this example are the .spec.replicas field (which tells Kubernetes how many Pod replicas to deploy), and the .spec.strategy field (which tells Kubernetes how to perform updates to the Deployment).
See the upstream Deployments documentation for more information on these other fields.
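For reference, those two optional fields sit at the same level as .spec.selector in a manifest; the values below are illustrative, not part of this lab's deployment:

```yaml
spec:
  replicas: 3                  # number of Pod replicas Kubernetes maintains
  strategy:
    type: RollingUpdate        # replace Pods gradually during updates
    rollingUpdate:
      maxUnavailable: 25%      # Pods that may be down during an update
      maxSurge: 25%            # extra Pods allowed above the replica count
```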
Create the Deployment file.
cat << 'EOF' | tee mydeployment.yaml > /dev/null
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
EOF
Deploy the application on a Pod using the Deployment manifest file.
kubectl apply -f mydeployment.yaml
Example Output:
[oracle@ocne-node01 ~]$ kubectl apply -f mydeployment.yaml
deployment.apps/echo1 created
List the Pod managed by the deployment.
kubectl get pods -l app=echo1
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods -l app=echo1
NAME                     READY   STATUS    RESTARTS   AGE
echo1-7cbf6dfb96-4cgq7   1/1     Running   0          24s
- The -l or --selector= option provides a selector (label query) to filter on. This option supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2).

Note: Reminder that the Pod name contains a suffix value that varies each time the Pod is deployed.
Verify the Deployment succeeded.
kubectl get deploy echo1
Example Output:
[oracle@ocne-node01 ~]$ kubectl get deploy echo1
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
echo1   1/1     1            1           16m
- The deploy option is short-hand for deployments. The kubectl command allows using an abbreviated syntax for many of its options. More details are available by running kubectl --help.
Return more detailed information for a deployment.
kubectl describe deploy echo1
Example Output:
[oracle@ocne-node01 ~]$ kubectl describe deploy echo1
Name:                   echo1
Namespace:              default
CreationTimestamp:      Tue, 28 Jun 2022 20:20:40 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=echo1
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=echo1
  Containers:
   echoserver:
    Image:        k8s.gcr.io/echoserver:1.4
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   echo1-7cbf6dfb96 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  23m   deployment-controller  Scaled up replica set echo1-7cbf6dfb96 to 1
Use ClusterIP Service
Despite successfully deploying the echo1 Deployment to a Pod, it's not much use if the end-users cannot access it internally or on the network. That access is where a Service comes in handy as it exposes a Deployment to the network.
The default Kubernetes Service type is ClusterIP. A ClusterIP Service is not accessible from the internet, but it can be reached through the Kubernetes proxy. See the upstream documentation for more on Proxies.
This section exposes echo1 and creates inter-service communication within the cluster using an Oracle Linux Pod, demonstrating communication between your app's front-end and back-end components.
Get a list of nodes.
kubectl get nodes
Example Output:
[oracle@ocne-node01 ~]$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
ocne-node01   Ready    <none>   4h27m   v1.22.8+1.el8
Nodes are the physical systems or virtual machines for deploying Pods.
Query kube-proxy mode.
Running kube-proxy in iptables mode causes packets sent to a ClusterIP Service never to be source NAT'd.
curl -w "\n" http://localhost:10249/proxyMode
- kube-proxy listens on port 10249 on the node where it's running.
Create the ClusterIP Service.
kubectl expose deployment echo1 --name=clusterip-service --port=80 --target-port=8080
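The kubectl expose command above generates a Service object behind the scenes. An approximately equivalent manifest, sketched here with the field values inferred from that command, would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
spec:
  type: ClusterIP            # the default Service type
  selector:
    app: echo1               # label inherited from the echo1 Deployment
  ports:
  - port: 80                 # port exposed within the cluster
    targetPort: 8080         # port the echoserver container listens on
```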
Get the IP address assigned to the cluster.
kubectl get svc clusterip-service
Example Output:
[oracle@ocne-node01 ~]$ kubectl get svc clusterip-service
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
clusterip-service   ClusterIP   10.108.107.54   <none>        80/TCP    13s
Take note of the CLUSTER-IP address in the output.

Create a Pod in the same cluster for accessing the ClusterIP Service.
kubectl run ol -it --image=oraclelinux:8 --restart=Never --rm
This command will create a Pod running an Oracle Linux 8 container in interactive mode and present a command prompt.
Example Output:
[oracle@ocne-node01 ~]$ kubectl run ol -it --image=oraclelinux:8 --restart=Never --rm
If you don't see a command prompt, try pressing enter.
[root@ol /]#
Get the IP address of the Oracle Linux container.
ip -br a
Example Output:
[root@ol /]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0@if12        UP             10.244.0.9/24 fe80::146f:2cff:fe73:b528/64
Test the nginx webserver within echo1.
curl -w "\n" <CLUSTER-IP_ADDRESS>
Use the CLUSTER-IP address from the previous output.

Example Output:
[root@ol /]# curl -w "\n" 10.108.107.54
CLIENT VALUES:
client_address=10.244.0.9
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.108.107.54:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.108.107.54
user-agent=curl/7.61.1
BODY:
-no body in request-
The output shows the request originating from the Oracle Linux Pod and being handled by the ClusterIP Service backed by the echo1 Deployment.
Exit the container.
exit
Example Output:
[root@ol /]# exit
exit
pod "ol" deleted
Use NodePort Service with a YAML File
Previously, the echo1 Deployment was exposed using the kubectl expose command and accessed internally through the ClusterIP. Now, we'll use a NodePort Service, which is a developer's approach to making echo1 accessible externally over the network.
The NodePort Service opens a specific port on all the nodes and forwards any traffic to that port to the Service.
Note: Standard practice does not recommend using NodePort for production systems for several reasons, principally the following:
- Each Service deployed requires a different port
- Nodes need to be publicly available, which is not recommended
- No load-balancing occurs across the nodes (in a multi-node Kubernetes cluster)
Define a Service file.
cat << 'EOF' | tee myservice.yaml > /dev/null
apiVersion: v1
kind: Service
metadata:
  name: echo1-nodeport
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32387
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
EOF
- type: - The Service type. NodePort and LoadBalancer make the Service available to network requests from external clients. Valid values include ClusterIP, NodePort, LoadBalancer, and ExternalName.
- nodePort: - The external port used to access the Service.
- port: - The port number exposed within the cluster.
- targetPort: - The port that the container is listening on.
Create the Service.
kubectl apply -f myservice.yaml
Example Output:
[oracle@ocne-node01 ~]$ kubectl apply -f myservice.yaml
service/echo1-nodeport created
Note: It is common to have the Deployment and Service definitions within the same YAML file to simplify the management of an application. Using separate files in these steps is for training purposes only. When combining them into a single file, use the --- YAML syntax to separate them.

Display how Kubernetes stores the newly created Service.
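As a sketch of that combined layout, abbreviated from the two manifests used in this lab (the filename is hypothetical):

```yaml
# deployment-and-service.yaml (hypothetical combined file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
---
apiVersion: v1
kind: Service
metadata:
  name: echo1-nodeport
spec:
  type: NodePort
  selector:
    app: echo1
  ports:
  - nodePort: 32387
    port: 80
    targetPort: 8080
```

A single kubectl apply -f on such a file creates (or updates) both objects in one step.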
kubectl get service echo1-nodeport -o yaml
Example Output:
[oracle@ocne-node01 ~]$ kubectl get service echo1-nodeport -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"echo1-nodeport","namespace":"default"},"spec":{"ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"nodePort":32387,"port":80,"protocol":"TCP","targetPort":8080}],"selector":{"app":"echo1"},"sessionAffinity":"None","type":"NodePort"},"status":{"loadBalancer":{}}}
  creationTimestamp: "2022-06-29T00:14:30Z"
  name: echo1-nodeport
  namespace: default
  resourceVersion: "6242"
  uid: 3171dda6-05b8-45b8-a0ba-457eab6e4f71
spec:
  clusterIP: 10.100.17.53
  clusterIPs:
  - 10.100.17.53
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32387
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Describe the Service.
kubectl describe svc echo1-nodeport
Example Output:
[oracle@ocne-node01 ~]$ kubectl describe svc echo1-nodeport
Name:                     echo1-nodeport
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=echo1
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.17.53
IPs:                      10.100.17.53
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32387/TCP
Endpoints:                10.244.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Get the object Endpoints.
The Endpoints track the IP addresses of the Pods to which the service sends traffic.
kubectl get endpoints echo1-nodeport
Example Output:
[oracle@ocne-node01 ~]$ kubectl get endpoints echo1-nodeport
NAME             ENDPOINTS         AGE
echo1-nodeport   10.244.0.7:8080   8m39s
List the Pods running the application.
kubectl get pods --output=wide
Example Output:
[oracle@ocne-node01 ~]$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
echo1-7cbf6dfb96-mlds4   1/1     Running   0          80m   10.244.0.7   ocne-node01   <none>           <none>
test-6c486b6d76-v4htj    1/1     Running   0          83m   10.244.0.6   ocne-node01   <none>           <none>
The IP address in this listing for echo1 should match the value shown in the previous step for Endpoints, which is the IP address of the Pod running on the specified node.
List the Services.
kubectl get svc -o wide
Example Output:
[oracle@ocne-node01 ~]$ kubectl get svc -o wide
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
clusterip-service   ClusterIP   10.107.31.75   <none>        80/TCP         78m   app=echo1
echo1-nodeport      NodePort    10.100.17.53   <none>        80:32387/TCP   10m   app=echo1
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP        88m   <none>
This command used the alternate option of -o wide, rather than --output=wide.

Take note of the NodePort, which is set to 32387 for the echo1-nodeport Service.
Get the IP address of the node.
The free lab environment runs on a single node ocne-node01.
ip -br a
In the free lab environment, the IP address should return the instance's private IP address of 10.0.0.140 assigned to interface ens3.
Use JSONPath to assign the NodePort a variable.
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services echo1-nodeport)
Use JSONPath to assign the Node IP to a variable.
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
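To see how these two variables combine, here is a self-contained shell sketch; the values are hypothetical stand-ins for the live kubectl output in the free lab environment:

```shell
# Hypothetical values standing in for the kubectl JSONPath output above
NODEPORT=32387
NODES=10.0.0.140

# The request target used in the verification step below
echo "http://$NODES:$NODEPORT"
# → http://10.0.0.140:32387
```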
Create a firewall rule.
This rule allows traffic on the node:nodeport, where node is the host IP address of the system or virtual machine where the Pod is running.

sudo firewall-cmd --permanent --add-port=$NODEPORT/tcp
sudo firewall-cmd --reload
After the --reload, the firewalld daemon reloads its configuration, which includes iptables. As kube-proxy depends on iptables, there will be a delay in the response from Services.

Use the node address and node port to verify the application.
curl -s $NODES:$NODEPORT
Example Output:
[oracle@ocne-node01 ~]$ curl -s $NODES:$NODEPORT
CLIENT VALUES:
client_address=10.244.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.0.0.140:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.0.0.140:32387
user-agent=curl/7.61.1
BODY:
The output shows the request from the local node routing through the NodePort Service, through kube-proxy, and to the Pod running the echo1 Deployment.
Note: If the output appears to hang, this is due to the previous reload of the firewall. Type Ctrl-C and try again.
Remove Deployments and Services
Once done with a Service or Deployment, remove them from Kubernetes.
Remove Services.
kubectl delete svc clusterip-service echo1-nodeport
Remove Deployments.
kubectl delete deployments echo1
kubectl delete deploy test
Removing objects can be done individually or in groups. Check the Kubernetes Reference Manual for more information.
Summary
This lab provides only the briefest introduction to the functionality that a cloud-native orchestrator such as Kubernetes delivers to organizations managing container deployments. These exercises are the first step on what will likely be a long journey into the flexibility Kubernetes can deliver.