Use and Configure CoreDNS with Oracle Cloud Native Environment
Introduction
Domain Name System (DNS) provides a way to translate hostnames to IP addresses for systems located anywhere on your network or the Internet. CoreDNS provides the same DNS service within your Kubernetes cluster so that all deployments on your Kubernetes cluster have a reliable way for their pods and services to communicate. CoreDNS resolves requests for hostnames to IP addresses within the Oracle Cloud Native Environment cluster.
Objectives
In this tutorial, you will learn:
- How to configure and use CoreDNS
- Where to locate the CoreDNS configuration files and how to alter them
Prerequisites
- Installation of Oracle Cloud Native Environment
- A single control plane node and a single worker node
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/ocne2
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e install_ocne_rpm=true -e create_ocne_cluster=true -e "ocne_cluster_node_options='-n 1 -w 1'"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
Open a terminal and connect via SSH to the ocne instance.
ssh oracle@<ip_address_of_node>
List the nodes in the cluster.
kubectl get nodes
The output shows the control plane and worker nodes in a Ready state and their current Kubernetes version.
How To Configure CoreDNS
Knowing how to configure CoreDNS and how to change that configuration helps you understand how DNS works within your Kubernetes cluster.
IMPORTANT: Part of the changes we'll make in this tutorial is to modify the kube-dns Service provided by CoreDNS. The kube-dns Service spec.clusterIP field is immutable. Kubernetes protects this field to prevent changes that may disrupt a working cluster. If you need to change an immutable field, you must delete and re-create the resource, which risks an outage in the cluster.
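For example (a quick illustration, not one of this tutorial's steps), attempting to patch the field in place is rejected by the API server:
kubectl -n kube-system patch svc kube-dns -p '{"spec":{"clusterIP":"100.96.0.10"}}'
# Expected rejection, with a message similar to:
# The Service "kube-dns" is invalid: spec.clusterIPs[0]: Invalid value: ... may not change once set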
Change the CIDR Block in the ClusterConfiguration.
The ClusterConfiguration includes various options that affect the configuration of individual components, such as kube-apiserver, kube-scheduler, kube-controller-manager, CoreDNS, etcd, and kube-proxy. You need to apply changes to the configuration of node components manually. Updating a file in /etc/kubernetes/manifests informs the kubelet to restart the static Pod for the corresponding component. The Kubernetes documentation recommends making these changes one node at a time so the cluster experiences no downtime.
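For reference, the service CIDR that the next steps change corresponds to the networking.serviceSubnet field of the kubeadm ClusterConfiguration. A minimal, representative sketch (your cluster's kubeadm-config ConfigMap contains additional fields, and the API version may differ):
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12   # default service CIDR; this tutorial changes it to 100.96.0.0/12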
Review the currently assigned CIDR block.
kubectl get pods -n kube-system kube-apiserver-ocne-control-plane-1 -o yaml | grep service-cluster-ip-range
Example Output:
- --service-cluster-ip-range=10.96.0.0/12
Connect to the console for the control plane node.
ocne cluster console --node ocne-control-plane-1
Update the CIDR range.
sed -i "s/10.96.0.0/100.96.0.0/g" /hostroot/etc/kubernetes/manifests/kube-apiserver.yaml
Updating the file causes the API server to restart automatically and disconnects you from the console. The API server can take 2-3 minutes to become available again for new kubectl commands. You can monitor when the API server becomes available with watch kubectl get nodes. If you wish to verify the change, connect to the console again and cat the file, confirming it now shows 100.96.0.0/12, as in the sketch below.
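A quick way to verify from the console (a suggested check, using the same chrooted path as the earlier sed command):
grep service-cluster-ip-range /hostroot/etc/kubernetes/manifests/kube-apiserver.yaml
# Expected: - --service-cluster-ip-range=100.96.0.0/12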
Update the Cluster DNS Service's IP address
Confirm the current IP address used by the cluster DNS Service.
kubectl -n kube-system get service
Example Output:
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   1h12m
Note: If you see a message similar to The connection to the server 10.0.0.52:6443 was refused - did you specify the right host or port?, the API server has not finished restarting yet. Retry the command until it succeeds.
Get the complete resource spec for the Service.
kubectl -n kube-system get svc kube-dns -o yaml > kube-dns-svc.yaml
Create a patch file containing the changes.
cat << EOF | tee patch-kube-dns-svc.yaml > /dev/null
spec:
  clusterIP: 100.96.0.10
  clusterIPs:
  - 100.96.0.10
EOF
Apply the patch to the local spec file.
kubectl patch -f kube-dns-svc.yaml --local=true --patch-file patch-kube-dns-svc.yaml -o yaml | shuf --output=kube-dns-svc.yaml --random-source=/dev/zero
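The shuf invocation here captures the patched output and writes it to kube-dns-svc.yaml (the same file the pipeline reads), avoiding the truncation that a plain shell redirection to the input file would cause. An alternative sketch, using a temporary file name of our choosing:
kubectl patch -f kube-dns-svc.yaml --local=true --patch-file patch-kube-dns-svc.yaml -o yaml > kube-dns-svc-patched.yaml
mv kube-dns-svc-patched.yaml kube-dns-svc.yaml
# --local applies the patch to the saved spec without contacting the API server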
Force replace the Service.
This action causes kubectl to remove and then re-create the Service.
kubectl replace --force -f kube-dns-svc.yaml
Confirm the new IP address is in use.
kubectl -n kube-system get service
Example Output:
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   100.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m11s
Apply Configuration Changes
Update Kubelet Configuration
The kubelet is essential to the Kubernetes framework, managing and coordinating pods and nodes. Its features include pod deployment, resource management, and health monitoring, all of which contribute considerably to a Kubernetes cluster's operational stability. The kubelet supports communication between the control plane and nodes, constantly monitors containers, and performs automated recovery to improve cluster resilience.
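The clusterDNS value that the next steps update is the same one each node's kubelet reads from /var/lib/kubelet/config.yaml. A representative, abridged snippet (standard KubeletConfiguration fields; surrounding fields omitted):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10        # replaced with 100.96.0.10 later in this tutorial
clusterDomain: cluster.local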
Get the current Kubelet YAML definition for the clusterDNS.
kubectl -n kube-system get configmap kubelet-config -o yaml | grep -B 1 10.96
Change the value for clusterDNS and replace the Kubelet ConfigMap.
kubectl -n kube-system get cm kubelet-config -o yaml | sed "s/10.96.0.10/100.96.0.10/g" | kubectl replace -f -
Verify the Kubelet ConfigMap changes for the Kubelet configuration.
kubectl -n kube-system get configmap kubelet-config -o yaml | grep -B 1 100.96
Update Cluster Configuration
Update and replace the Kubeadm ConfigMap.
kubectl -n kube-system get cm kubeadm-config -o yaml | sed "s/10.96.0.0/100.96.0.0/g" | kubectl replace -f -
Verify the Kubeadm ConfigMap changes for the Kubeadm configuration.
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -B 3 100.96
Reload the Kubelet Daemon and Restart the Kubelet Service
The Kubelet process executes as a daemon on this node. Reload the configuration so it takes effect.
Connect to the console for the control plane node.
ocne cluster console -d --node ocne-control-plane-1
The -d flag connects you directly to the node, not the chrooted filesystem. This connection type is required so that kubeadm can find the configuration files in their expected location.
Update the Kubelet Service.
kubeadm upgrade node phase kubelet-config
Example Output:
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3792300054/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Restart the Daemon and Kubelet Services.
systemctl daemon-reload; systemctl restart kubelet
Restarting the kubelet service disconnects you from the console.
Connect to the console for each additional node.
ocne cluster console -d --node ocne-worker-1
Update the Kubelet configuration on each node.
sed -i "s/10.96.0.10/100.96.0.10/g" /var/lib/kubelet/config.yaml
Restart the Kubelet Service on each node.
systemctl daemon-reload; systemctl restart kubelet
Confirm that the Configuration Change is Working
You have updated the worker nodes to use the new CoreDNS network CIDR definition. Next, you will confirm that a new deployment returns the correct DNS IP address and can resolve an external website.
Deploy a new Pod.
kubectl run netshoot --image=docker.io/nicolaka/netshoot --command -- sleep 3600
Check the status of the pod deployment.
kubectl get pods
Keep checking until the netshoot pod reports a STATUS of Running.
Confirm local DNS works.
kubectl exec -it netshoot -- nslookup kubernetes.default
Example Output:
Server:    100.96.0.10
Address:   100.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1
NOTE: If the output reports error: unable to upgrade connection: container not found ("netshoot"), it means the netshoot container is still deploying. Retry the command until it works.
The kubernetes.default.svc.cluster.local Service still reports its address in the 10.96.0.0/12 range. This is because a Service keeps its existing ClusterIP until you delete and re-create it.
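If you want to confirm this, a quick optional check is to read the ClusterIP directly from the kubernetes Service:
kubectl -n default get svc kubernetes -o jsonpath='{.spec.clusterIP}{"\n"}'
# Still prints 10.96.0.1 until that Service is deleted and re-created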
Confirm the Kubernetes Cluster DNS configuration is updated.
kubectl exec -it netshoot -- cat /etc/resolv.conf
Example Output:
search default.svc.cluster.local svc.cluster.local cluster.local vcn.oraclevcn.com lv.vcn.oraclevcn.com
nameserver 100.96.0.10
options ndots:5
Confirm external DNS lookup works.
kubectl exec -it netshoot -- nslookup example.com
Example Output:
Server:    100.96.0.10
Address:   100.96.0.10#53

Non-authoritative answer:
Name:      example.com
Address:   93.184.215.14
Name:      example.com
Address:   2606:2800:21f:cb07:6820:80da:af6b:8b2c
Troubleshooting Strategies
If the CoreDNS service is not working as expected, the following steps will help to identify the underlying problem. If you need to update anything, wait for the CoreDNS process to restart, and then repeat the steps in the last section to confirm the Kubernetes DNS service is running.
Check the CoreDNS logs for any errors.
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
Example Output (showing healthy logs):
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.21.7, ae2bbc2
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.21.7, ae2bbc2
Confirm the Kubernetes DNS service is running.
kubectl get service --namespace=kube-system
Example Output:
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   100.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   1h
Verify the DNS endpoints are exposed.
A Kubernetes Service references an Endpoints resource so that the Service has a record of the internal IPs of the Pods with which it communicates. Endpoints consist of an IP address and port pair (one per Pod) that Kubernetes manages automatically, although you can manage them manually if necessary.
kubectl get endpoints kube-dns --namespace=kube-system
Example Output:
NAME       ENDPOINTS                                                AGE
kube-dns   10.244.1.2:53,10.244.1.3:53,10.244.1.2:53 + 3 more...   1h1m
Information: An endpoint is the dynamically assigned IP address and port defined with a Service deployment (one endpoint per pod the service routes traffic to). If no endpoints are output, check out the debugging Services documentation.
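On current Kubernetes releases, the same backing addresses are also exposed through EndpointSlices, which is what CoreDNS's ClusterRole is granted access to watch. An optional way to list them, using the standard label that the EndpointSlice controller applies:
kubectl -n kube-system get endpointslices -l kubernetes.io/service-name=kube-dns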
Get the current ClusterRole.
CoreDNS must be able to properly list the service and endpoint resources to resolve names correctly.
kubectl describe clusterrole system:coredns -n kube-system
Example Output:
Name:         system:coredns
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                        Non-Resource URLs  Resource Names  Verbs
  ---------                        -----------------  --------------  -----
  endpoints                        []                 []              [list watch]
  namespaces                       []                 []              [list watch]
  pods                             []                 []              [list watch]
  services                         []                 []              [list watch]
  endpointslices.discovery.k8s.io  []                 []              [list watch]
If any expected permissions are missing, edit the ClusterRole to add them. The
kubectl edit clusterrole system:coredns -n kube-system
command opens the ClusterRole in an editor so you can add any missing permissions.
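For example, if the endpointslices rule were missing, the stanza you would add under rules looks similar to this (a sketch of the relevant entry only):
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch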
Use CoreDNS
Oracle Cloud Native Environment automatically provisions the internal services Kubernetes uses so they are up and running when the cluster starts. The following steps illustrate how CoreDNS resolves requests so that deployments can work together as needed.
Scale CoreDNS
Confirm the number of CoreDNS Pods running.
kubectl get pods --namespace kube-system -l k8s-app=kube-dns
Example Output:
NAME                       READY   STATUS    RESTARTS   AGE
coredns-676b47d668-b8pw2   1/1     Running   0          1h19m
coredns-676b47d668-xrmzf   1/1     Running   0          1h19m
This output shows two CoreDNS replica Pods defined in the default deployment. If you see no CoreDNS Pods running or the STATUS column does not report as Running, this indicates that CoreDNS is not running in your cluster.
View the CoreDNS Deployment.
kubectl -n kube-system get deploy
Example Output:
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           1h21m
Scale CoreDNS to three Pod replicas.
Because CoreDNS is a deployment, it can either scale up or down as required, allowing you, as the administrator, to ensure DNS resolution within the Kubernetes cluster remains performant.
kubectl -n kube-system scale deploy coredns --replicas 3
The command's output states the deployment scaled.
Requery the number of CoreDNS Pod Replicas running.
kubectl get pods --namespace kube-system -l k8s-app=kube-dns
Example Output:
NAME                       READY   STATUS    RESTARTS   AGE
coredns-676b47d668-57tzz   1/1     Running   0          5s
coredns-676b47d668-b8pw2   1/1     Running   0          1h19m
coredns-676b47d668-xrmzf   1/1     Running   0          1h19m
If the new Pod shows a STATUS of ContainerCreating, repeat the command a few more times until the STATUS shows as Running. There are now three CoreDNS replica Pods running, demonstrating a simple way to boost the performance of DNS queries within your Kubernetes cluster.
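Scaling works in both directions. When the extra replica is no longer needed, you could return to the original count the same way:
kubectl -n kube-system scale deploy coredns --replicas 2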
How Pods Communicate using CoreDNS
Application Pods deployed into a Kubernetes cluster need to be able to communicate with each other. This section demonstrates how CoreDNS does this by showing how to access Pods in different namespaces. The following steps will deploy an Apache web server into the default namespace, create a new namespace, and then deploy an Nginx web server to the newly created namespace. You will then open a shell in the Nginx Pod and execute a command against the Apache Pod to show how CoreDNS uses hostnames to resolve requests.
Deploy an Apache web server to the default namespace.
kubectl create deploy apache --image docker.io/httpd
Expose the deployment.
kubectl expose deploy apache --name apache-svc --port 80
Confirm the Apache Pod is running.
kubectl -n default get pods
The output shows the newly deployed Apache web server and the previously deployed netshoot container.
Create a new namespace.
kubectl create namespace mytest
Confirm the namespace exists.
kubectl get namespace
Deploy a new Nginx web server into the namespace.
kubectl --namespace mytest create deploy nginx --image ghcr.io/oracle/oraclelinux9-nginx:1.20
Expose the deployment.
kubectl --namespace mytest expose deploy nginx --name nginx-svc --port 80
Confirm the Nginx Pod is running.
kubectl -n mytest get pods
The pod reports as Running.
Exec into the Nginx Container.
kubectl exec -it -n mytest $(kubectl -n mytest get pods | awk 'FNR == 2 {print $1}') -- /bin/bash
We use awk to get the name of the nginx Pod.
Use cURL to access the previously deployed Apache web server deployment.
curl apache-svc
The output shows curl: (6) Could not resolve host: apache-svc. Does this mean that CoreDNS is not working correctly? No, the actual issue is that the DNS search path inside the Pod only covers the Pod's own namespace, which is mytest in this specific case. If you want to access a Service in a different namespace, include that namespace in the cURL request.
Retry the request with a properly formed service name.
curl apache-svc.default
The output displays the It works HTML page. This request worked because you deployed an Apache web server (apache-svc) into the default namespace. Therefore, you needed to add the .default to the cURL request for it to be successful.
CoreDNS searches based on the resolv.conf file contained within the deployed Pod. You can confirm this by staying within the container shell and running cat /etc/resolv.conf.
Example Output:
root@nginx-7854ff8877-vwwc7:/# cat /etc/resolv.conf
search mytest.svc.cluster.local svc.cluster.local cluster.local vcn.oraclevcn.com lv.vcn.oraclevcn.com
nameserver 100.96.0.10
options ndots:5
The values after the line starting with search tell CoreDNS which domains to search, starting with mytest.svc.cluster.local. This setting is why you can look up deployments within a Namespace without including the Namespace's name. The next search value is anything in the svc.cluster.local domain, which explains why you only had to include the deployment name and its Namespace when using cURL. You could always use the Fully Qualified Domain Name (FQDN), such as apache-svc.default.svc.cluster.local, but that requires much more typing. For more information about how CoreDNS handles DNS name resolution for Kubernetes Services and Pods, see the upstream documentation.
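Still inside the nginx container shell, you can confirm that the FQDN form resolves too and returns the same It works page:
curl apache-svc.default.svc.cluster.local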
Next Steps
Kubernetes uses CoreDNS to provide DNS services and Service Discovery within the Kubernetes cluster. Hopefully, you better understand how DNS within a Kubernetes cluster works and how it can help you manage your application deployments.