Use ock-forge with Oracle Cloud Native Environment
Introduction
The Oracle Container Host for Kubernetes Image Builder (OCK Image Builder) tool builds the Oracle Container Host for Kubernetes (OCK) images used in Oracle Cloud Native Environment (Oracle CNE) deployments. OCK Image Builder helps when the default OCK image used by Oracle CNE does not meet your requirements, for example when you need:
- A partition layout different from the standard OCK image.
- Extra packages or missing device drivers.
OCK Image Builder is a collection of shell scripts. The primary script, ock-forge, generates either a bootable qcow2 image used to create the cluster nodes or an OSTree container image that Oracle CNE can use to update the nodes of a running cluster.
Objectives
In this tutorial, you will learn to:
- Install, set up, and use ock-forge to build a customized OCK image
- Create an Oracle CNE cluster using the customized OCK image
- Include extraIgnitionInline: changes in the OCK image, either as a default for all clusters created or for an individual cluster
Prerequisites
Minimum of one Oracle Linux 9 instance
Each system should have Oracle Linux installed and configured with:
- An Oracle user account (used during the installation) with sudo access
- Key-based SSH, also known as password-less SSH, between the hosts
- A working KVM libvirt environment.
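If you are unsure whether the KVM libvirt environment is working, one quick spot-check (a suggestion, assuming the libvirt client tools are installed) is to validate the host and confirm the daemon responds:

```
# Validate host virtualization support and the libvirt/QEMU setup.
virt-host-validate qemu

# Confirm the libvirt daemon is reachable by listing defined VMs.
virsh list --all
```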
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.

```
git clone https://github.com/oracle-devrel/linux-virt-labs.git
```

Change into the working directory.

```
cd linux-virt-labs/ocne2
```

Install the required collections.

```
ansible-galaxy collection install -r requirements.yml
```

Increase the boot volume size, install libvirt, and use Oracle Linux 9.
```
cat << EOF | tee instances.yml > /dev/null
compute_instances:
  1:
    instance_name: "ocne"
    type: "server"
    instance_ocpus: 8
    instance_memory: 128
    boot_volume_size_in_gbs: 256
    ocne_type: "libvirt"
    install_ocne_rpm: true
    update_all: true
    os_version: "9"
EOF
```

Note: OCK Image Builder works best on Oracle Linux 9.
Deploy the lab environment.
Install using custom config.
```
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@instances.yml"
```

The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which places its modules under python3.6.

The default deployment shape uses an AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Install Oracle Container Host for Kubernetes Builder
Oracle Container Host for Kubernetes Builder (ock-forge) is a command-line tool for building bootable, OSTree-based Oracle Container Host for Kubernetes (OCK) images. ock-forge can generate virtual machine images in qcow2 format, raw disk images, or write directly to an existing block device. This tutorial demonstrates how to build and use a customized .qcow2 image.
Open a terminal and connect via SSH to the ocne instance.
```
ssh oracle@<ip_address_of_node>
```

Install Git and Podman.

```
sudo dnf install -y git container-tools
```

The ock-forge program requires Podman and uses Git to clone the ock-forge and Oracle Container Host for Kubernetes Configuration (OCK Configuration) repositories from GitHub.

Clone the ock-forge repository.

```
git clone https://github.com/oracle-cne/ock-forge
```

Clone the OCK Configuration repository. ock-forge uses the OCK Configuration treefile specification to build an OCK image.

```
cd ock-forge/
git clone https://github.com/oracle-cne/ock
```
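Optionally, you can list the configuration directories in the cloned OCK Configuration repository to see which Kubernetes versions have build configurations (the exact set depends on the repository contents):

```
ls ock/configs/
```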
Build an OCK Image
Next, you will create a network block device, add some extra Linux packages to the OCK image you will build, and then use ock-forge to perform all of the work required to generate a bootable OCK image.
Enable a Network Block Device
The Network Block Device (NBD) is used by ock-forge to mount the generated .qcow2 image so Oracle CNE can build a Kubernetes cluster.
Load the NBD kernel module and allow up to eight partitions per NBD device.
```
sudo modprobe nbd max_part=8
```

Confirm NBD devices are present.

```
ls -l /dev/nbd*
```

Example Output:

```
[oracle@ocne ~]$ ls -l /dev/nbd*
brw-rw----. 1 root disk 43, 0 Jan 15 13:55 /dev/nbd0
brw-rw----. 1 root disk 43, 16 Jan 15 13:55 /dev/nbd1
brw-rw----. 1 root disk 43, 160 Jan 15 13:55 /dev/nbd10
brw-rw----. 1 root disk 43, 176 Jan 15 13:55 /dev/nbd11
brw-rw----. 1 root disk 43, 192 Jan 15 13:55 /dev/nbd12
brw-rw----. 1 root disk 43, 208 Jan 15 13:55 /dev/nbd13
brw-rw----. 1 root disk 43, 224 Jan 15 13:55 /dev/nbd14
brw-rw----. 1 root disk 43, 240 Jan 15 13:55 /dev/nbd15
brw-rw----. 1 root disk 43, 32 Jan 15 13:55 /dev/nbd2
brw-rw----. 1 root disk 43, 48 Jan 15 13:55 /dev/nbd3
brw-rw----. 1 root disk 43, 64 Jan 15 13:55 /dev/nbd4
brw-rw----. 1 root disk 43, 80 Jan 15 13:55 /dev/nbd5
brw-rw----. 1 root disk 43, 96 Jan 15 13:55 /dev/nbd6
brw-rw----. 1 root disk 43, 112 Jan 15 13:55 /dev/nbd7
brw-rw----. 1 root disk 43, 128 Jan 15 13:55 /dev/nbd8
brw-rw----. 1 root disk 43, 144 Jan 15 13:55 /dev/nbd9
```
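The modprobe setting above does not persist across a reboot. If you want the NBD module and its max_part option loaded automatically at boot, one possible approach (a sketch, not part of the official lab steps) is:

```
# Load the nbd module automatically at boot.
echo "nbd" | sudo tee /etc/modules-load.d/nbd.conf > /dev/null

# Keep the max_part option so partitions on NBD devices are scanned.
echo "options nbd max_part=8" | sudo tee /etc/modprobe.d/nbd.conf > /dev/null
```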
Customize the OCK Image Build
You can define extra Linux packages to add to the OCK image built by ock-forge. The treefile specification that ock-forge uses is in the GitHub OCK Configuration repository.
Create a directory for the customization file.
```
mkdir /home/oracle/ock-forge/ock/configs/config-1.31/custom
```

Create a customization file to install the Vim and Emacs packages.

```
cat << EOF | tee /home/oracle/ock-forge/ock/configs/config-1.31/custom/myconfig.yaml > /dev/null
packages:
- vim
- emacs
EOF
```

Include the customization file in the manifest.yaml file.

Note: You can add any additional customizations to the Kubernetes configuration file for the targeted Kubernetes build.

```
sed -i '23i - custom/myconfig.yaml' /home/oracle/ock-forge/ock/configs/config-1.31/manifest.yaml
```

(If you prefer not to rely on a hard-coded line number, a pattern-based alternative is sketched after the example output below.)

Confirm the new customization file appears in the OCK image build sequence defined in the manifest.yaml file.

```
cat /home/oracle/ock-forge/ock/configs/config-1.31/manifest.yaml
```

Example Output:
```
[oracle@ocne ock-forge]$ cat /home/oracle/ock-forge/ock/configs/config-1.31/manifest.yaml
ref: ock
automatic-version-prefix: "1.31"
documentation: false
boot-location: modules
machineid-compat: false
...
...
include:
- base.yaml
- ux.yaml
- ocne.yaml
- removals.yaml
- config.yaml
- custom/myconfig.yaml
...
...
modules:
  enable:
  - container-tools:ol8
  - virt:kvm_utils3
```
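Because the sed command above inserts at a fixed line number, it can silently target the wrong line if the manifest layout changes between releases. A more robust sketch (not part of the official steps, and assuming the include entries start at the beginning of the line as shown above) appends the entry after the existing config.yaml include by matching on content instead:

```
# Insert the custom include after the config.yaml entry, matching on
# content rather than a fixed line number.
sed -i '/^- config.yaml$/a - custom/myconfig.yaml' /home/oracle/ock-forge/ock/configs/config-1.31/manifest.yaml
```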
Build the OCK Image
Use ock-forge to create the qcow2 image.

```
sudo ./ock-forge -d /dev/nbd0 -D out/1.31/boot.qcow2 -i container-registry.oracle.com/olcne/ock-ostree:1.31 -O ./out/1.31/archive.tar -C ./ock -c configs/config-1.31 -P
```

Where:

- -d: The path to an existing block device.
- -D: The path to the disk image file.
- -i: A fully qualified container image name, including a tag.
- -C: A directory containing a set of rpm-ostree configurations.
- -c: A directory containing the rpm-ostree configuration to build.
- -P: If used, this flag wipes the partition table of the block device specified by -d and repopulates it with the default geometry.
Press Enter to accept the default container registry (container-registry.oracle.com/ock-builder:latest).

Note: The build process takes approximately 30 minutes to complete.
Example Output:
```
[oracle@ocne ock-forge]$ sudo ./ock-forge -d /dev/nbd0 -D out/1.31/boot.qcow2 -i container-registry.oracle.com/olcne/ock-ostree:1.31 -O ./out/1.31/archive.tar -C ./ock -c configs/config-1.31 -P
+ [[ -z '' ]]
+ [[ -z '' ]]
+ IGNITION_PROVIDER=qemu
+ [[ -n out/1.31/boot.qcow2 ]]
++ realpath -m out/1.31/boot.qcow2
+ DISK=/home/oracle/ock-forge/out/1.31/boot.qcow2
...
...
+ podman image exists ock-builder:latest
+ podman pull ock-builder:latest
? Please select an image:
  ▸ container-registry.oracle.com/ock-builder:latest
    docker.io/library/ock-builder:latest
```
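Once the build completes, you can optionally sanity-check the generated artifacts before using them. For example (not part of the official steps, assuming the qemu-img utility is installed on the host):

```
# List the build artifacts (the qcow2 disk image and the OSTree archive).
ls -lh out/1.31/

# Inspect the qcow2 header to confirm the image format and virtual size.
qemu-img info out/1.31/boot.qcow2
```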
Modify the OCK Image
After the OCK image build process ends, you can create users, create files, configure the network, define systemd units, and much more in the qcow2 image by using Butane config YAML files that conform to Butane's schema. For more details, refer to the upstream Butane documentation.
Next, you will create a Butane-compliant YAML file to create a text file and set up a new user.
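Butane can describe users as well as files. For reference, a minimal sketch of a passwd section in the same fcos 1.5.0 variant might look like the following (the user name and SSH key here are placeholders, not values used by this lab):

```
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: devuser                 # placeholder user name
      groups:
        - wheel                     # grant administrative access via wheel
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...       # placeholder public key
```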
Using a Default Configuration
You can apply customizations in one of two ways:
- As the default for all Oracle CNE clusters you create.
- As a bespoke configuration applied to a single Oracle CNE cluster.
The following steps demonstrate how to make your customization the default for all Oracle CNE clusters you create.
Make the .ocne directory.

```
mkdir /home/oracle/.ocne
```

Add the defaults.yaml file.

```
cat << EOF | tee /home/oracle/.ocne/defaults.yaml > /dev/null
extraIgnitionInline: |
  variant: fcos
  version: 1.5.0
  storage:
    files:
      - path: /etc/myfile.txt
        contents:
          inline: Hello, world!
        mode: 0644
        user:
          id: 1000
        group:
          id: 1001
EOF
```

Where (see the upstream documentation for more detail):

- variant: Must be set to fcos for use with Oracle CNE
- version: Must be set to 1.5.0 for use with Oracle CNE
- path: /etc/myfile.txt (the path and filename for the created file)
- mode: Set to 0644 (the owner can read and write the file; everyone else can only read it)
- user: & group: Assign file ownership to the UID and GID specified. This example sets them to the ocne UID and GID
Confirm the file was created.

```
cat /home/oracle/.ocne/defaults.yaml
```
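Optionally, you can validate the Butane fragment before starting a cluster. The sketch below (not part of the official steps) strips the extraIgnitionInline: key and its two-space indent, then feeds the result to the upstream Butane container image documented by the Fedora CoreOS project; it assumes the host can pull from quay.io:

```
# Validate the inline Butane document; a zero exit status means it parsed cleanly.
sed -e '1d' -e 's/^  //' /home/oracle/.ocne/defaults.yaml | \
  podman run --rm -i quay.io/coreos/butane:release > /dev/null && echo "Butane config OK"
```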
Test the New OCK Image Created by ock-forge
Copy the .qcow2 image to the libvirt images directory that the Oracle CNE install uses.
```
sudo cp /home/oracle/ock-forge/out/1.31/boot.qcow2 /var/lib/libvirt/images/boot.qcow2-1.31
```

Confirm the .qcow2 image copied.

```
sudo ls /var/lib/libvirt/images
```

Start Oracle CNE using the .qcow2 image.

```
ocne cluster start --version 1.31 -n 1 -w 1 -u false
```

Example Output:

```
[oracle@ocne ock-forge]$ ocne cluster start --version 1.31 -n 1 -w 1 -u false
INFO[2025-04-14T13:47:36Z] Creating new Kubernetes cluster with version 1.31 named ocne
INFO[2025-04-14T13:48:34Z] Waiting for the Kubernetes cluster to be ready: ok
INFO[2025-04-14T13:48:35Z] Installing core-dns into kube-system: ok
INFO[2025-04-14T13:48:36Z] Installing kube-proxy into kube-system: ok
INFO[2025-04-14T13:48:39Z] Installing kubernetes-gateway-api-crds into kube-system: ok
INFO[2025-04-14T13:48:39Z] Installing flannel into kube-flannel: ok
INFO[2025-04-14T13:48:40Z] Installing ui into ocne-system: ok
INFO[2025-04-14T13:48:41Z] Installing ocne-catalog into ocne-system: ok
INFO[2025-04-14T13:48:41Z] Kubernetes cluster was created successfully
INFO[2025-04-14T13:48:41Z] Post install information:

To access the cluster from the VM host:
    copy /home/oracle/.kube/kubeconfig.ocne.vm to that host and run kubectl there
To access the cluster from this system:
    use /home/oracle/.kube/kubeconfig.ocne.local
To access the UI, first do kubectl port-forward to allow the browser to access the UI.
Run the following command, then access the UI from the browser using via https://localhost:8443
    kubectl port-forward -n ocne-system service/ui 8443:443
Run the following command to create an authentication token to access the UI:
    kubectl create token ui -n ocne-system
```

Confirm the cluster exists.
```
ocne cluster list
```

Set the kubeconfig environment variable for your new cluster.

```
export KUBECONFIG=$(ocne cluster show -C ocne)
```

Get a list of your cluster nodes.

```
kubectl get nodes
```

Connect to the Worker node.

```
ocne cluster console --direct --node ocne-worker-1
```

Confirm that the myfile.txt file exists.

```
ls -lsa /etc/myfile.txt
```

Example Output:

```
sh-4.4# ls -lsa /etc/myfile.txt
4 -rw-r--r--. 1 ocne ocne 13 Apr 16 10:51 /etc/myfile.txt
```

Confirm the emacs package is installed.

```
ls /bin/emacs
```

Example Output:

```
sh-4.4# ls /bin/emacs
/bin/emacs
```

Type exit to leave the ocne-worker-1 node.

Connect to the Control Plane node.

```
ocne cluster console --direct --node ocne-control-plane-1
```

Confirm that the myfile.txt file exists.

```
ls -lsa /etc/myfile.txt
```

Example Output:

```
sh-4.4# ls -lsa /etc/myfile.txt
4 -rw-r--r--. 1 ocne ocne 13 Apr 16 10:50 /etc/myfile.txt
```

Confirm the emacs package is installed.

```
ls /bin/emacs
```

Example Output:

```
sh-4.4# ls /bin/emacs
/bin/emacs
```

This output confirms that you have customized the default Oracle CNE cluster nodes by adding a text file (/etc/myfile.txt) and a package (emacs) that are not present by default.

Type exit to leave the ocne-control-plane-1 node.
Remove the Cluster
Delete the cluster.
```
ocne cluster delete
```

Delete the defaults.yaml file.

```
rm /home/oracle/.ocne/defaults.yaml
```
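If you want to confirm the cluster was removed before moving on, list any remaining clusters:

```
ocne cluster list
```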
Update a Single Cluster's Configuration
Next, you will create and apply a customization file to a single Oracle CNE cluster.
Create a configuration file.
```
cat << EOF | tee /home/oracle/myconfig.yaml > /dev/null
provider: libvirt
headless: true
name: ocne
kubernetesVersion: 1.31
controlPlaneNodes: 1
workerNodes: 1
extraIgnitionInline: |
  variant: fcos
  version: 1.5.0
  storage:
    files:
      - path: /etc/myfile.txt
        contents:
          inline: Hello, world!
        mode: 0644
        user:
          id: 1000
        group:
          id: 1001
EOF
```

Confirm the file was created.
```
cat /home/oracle/myconfig.yaml
```

Start Oracle CNE using the .qcow2 image.

```
ocne cluster start -u false -c /home/oracle/myconfig.yaml
```

Example Output:

```
[oracle@ocne ~]$ ocne cluster start -u false -c /home/oracle/myconfig.yaml
INFO[2025-04-15T18:07:00Z] Creating new Kubernetes cluster with version 1.31 named ocne
INFO[2025-04-15T18:08:14Z] Waiting for the Kubernetes cluster to be ready: ok
INFO[2025-04-15T18:08:16Z] Installing core-dns into kube-system: ok
INFO[2025-04-15T18:08:16Z] Installing kube-proxy into kube-system: ok
INFO[2025-04-15T18:08:19Z] Installing kubernetes-gateway-api-crds into kube-system: ok
INFO[2025-04-15T18:08:20Z] Installing flannel into kube-flannel: ok
INFO[2025-04-15T18:08:20Z] Installing ocne-catalog into ocne-system: ok
INFO[2025-04-15T18:08:20Z] Kubernetes cluster was created successfully
INFO[2025-04-15T18:08:20Z] Post install information:

To access the cluster from the VM host:
    copy /home/oracle/.kube/kubeconfig.ocne.vm to that host and run kubectl there
To access the cluster from this system:
    use /home/oracle/.kube/kubeconfig.ocne.local
```

Note that because myconfig.yaml sets headless: true, the output does not show the UI being installed into ocne-system.
Set the kubeconfig environment variable for your new cluster.

```
export KUBECONFIG=$(ocne cluster show -C ocne)
```

Get a list of your cluster nodes.

```
kubectl get nodes
```

Connect to the Worker node.

```
ocne cluster console --direct --node ocne-worker-1
```

Confirm that the myfile.txt file exists.

```
ls -lsa /etc/myfile.txt
```

Confirm the emacs package is installed.

```
ls /bin/emacs
```

Type exit to leave the ocne-worker-1 node.

Connect to the Control Plane node.

```
ocne cluster console --direct --node ocne-control-plane-1
```

Confirm that the myfile.txt file exists.

```
ls -lsa /etc/myfile.txt
```

Confirm the emacs package is installed.

```
ls /bin/emacs
```

This confirms that you have customized the default Oracle CNE cluster nodes by adding a text file (/etc/myfile.txt) and a package (emacs) that are not present by default.

Type exit to leave the ocne-control-plane-1 node.
Remove the Cluster
Delete the cluster.
```
ocne cluster delete
```
Next Steps
Customizing the Oracle CNE OCK image files allows you to modify the environment used on your Oracle CNE Kubernetes cluster nodes. Continue expanding your knowledge of Kubernetes and Oracle Cloud Native Environment by looking at our other tutorials posted to the Oracle Linux Training Station.