Manage KVM Virtual Machines using Oracle Linux Automation Manager


Introduction

The community.libvirt collection provides libvirt modules and plugins supported by the Ansible libvirt community. These modules and plugins help manage virtual machines (VMs) and containers using the libvirt API.

This tutorial shows how to use this collection in Oracle Linux Automation Manager to interact with an Oracle Linux instance running KVM.

Objectives

In this lab, you'll learn how to:

  • Create a playbook that uses the community.libvirt collection
  • Configure credentials for Ansible Galaxy
  • Create a Job Template
  • Run the Job

Prerequisites

  • A system with Oracle Linux Automation Manager installed
  • Access to a git repository
  • An Oracle Linux system with KVM installed

Note: For details on installing Oracle Linux Automation Manager or KVM, see the links at the end of this lab.

Create a Playbook

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

Information: The free lab environment deploys a running single-host Oracle Linux Automation Manager, a KVM server, and a Git server. The deployment takes approximately 20 minutes to finish after launch, so you might want to step away while it runs and return promptly to complete the lab.

  1. Open a terminal on the free lab desktop.

  2. Create the project directory.

    mkdir ~/olamkvm
    cd ~/olamkvm
  3. Create a requirements file.

    Oracle Linux Automation Engine uses the requirements file to pull any required collections or roles into the project at runtime.

    cat << EOF > requirements.yml 
    ---
    collections:
      - name: community.libvirt
      - name: community.general
      - name: community.crypto
    EOF
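
    If you have the Ansible command-line tools installed locally, you can optionally verify the requirements file by installing the collections into your own environment. This check is not required for the lab, where Oracle Linux Automation Manager handles the download.

    ansible-galaxy collection install -r requirements.yml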
  4. Create a variables file.

    The file stores variables and their default values. The playbook imports this file at runtime.

    1. Create a directory to store the file.

      mkdir vars
    2. Create the file.

      cat << EOF > vars/defaults.yml
      ---
      username: oracle
      base_image_name: OL9U1_x86_64-kvm-b158.qcow
      base_image_url: https://yum.oracle.com/templates/OracleLinux/OL9/u1/x86_64/{{ base_image_name }}
      base_image_sha: ca655beba34038349827c5ab365df4f7936a7f6226a04d0452bbe4430f4d6658
      libvirt_pool_dir: "/var/lib/libvirt/images"
      vm_name: ol9-dev
      vm_vcpus: 2
      vm_ram_mb: 2048
      vm_net: default
      vm_root_pass: 
      cleanup_tmp: no 
      EOF
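
    These defaults apply unless you override them at runtime. As an illustration only, when running the playbook directly with Oracle Linux Automation Engine you could override a value with the -e option (ol9-test is a hypothetical name). The optional section at the end of this lab achieves the same result through the Job Template Variables field in the WebUI.

    ansible-playbook create_vm.yml -e "vm_name=ol9-test"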
  5. Create the cloud-init templates.

    These templates create the meta-data and the user-data files to provision the VM configuration via cloud-init.

    1. Create a directory to store the templates.

      mkdir templates
    2. Create the meta-data template.

      cat << EOF > templates/meta-data.j2
      instance-id: iid-local01
      local-hostname: {{ vm_name }}
      EOF
    3. Create the user-data template.

      cat << EOF > templates/user-data.j2
      #cloud-config
      
      system_info:
        default_user:
          name: opc

      ssh_authorized_keys:
        - {{ vm_public_key }}
      EOF
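
    At runtime, Jinja2 substitutes the playbook variables into these templates. With the defaults above, the meta-data template renders to the following, while the user-data template receives the SSH public key that the playbook generates:

    instance-id: iid-local01
    local-hostname: ol9-dev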
  6. Create the libvirt VM definition template.

    cat << EOF > templates/vm-template.xml.j2
    <domain type="kvm">
      <name>{{ vm_name }}</name>
      <memory unit='MiB'>{{ vm_ram_mb }}</memory>
      <vcpu placement='static'>{{ vm_vcpus }}</vcpu>
      <os>
        <type arch="x86_64" machine="q35">hvm</type>
        <boot dev="hd"/>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode="host-model"/>
      <clock offset="utc">
        <timer name="rtc" tickpolicy="catchup"/>
        <timer name="pit" tickpolicy="delay"/>
        <timer name="hpet" present="no"/>
      </clock>
      <pm>
        <suspend-to-mem enabled="no"/>
        <suspend-to-disk enabled="no"/>
      </pm>
      <devices>
        <emulator>/usr/libexec/qemu-kvm</emulator>
        <disk type="file" device="disk">
          <driver name="qemu" type="qcow2"/>
          <source file="{{ libvirt_pool_dir }}/{{ vm_name }}.qcow"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <disk type="file" device="cdrom">
          <driver name="qemu" type="raw"/>
          <source file="{{ libvirt_pool_dir }}/{{ vm_name }}.iso"/>
          <target dev="sda" bus="sata"/>
          <readonly/>
        </disk>
        <controller type="usb" model="qemu-xhci" ports="15"/>
        <interface type="network">
          <source network="{{ vm_net }}"/>
          <model type="virtio"/>
        </interface>
        <console type="pty"/>
        <channel type="unix">
          <source mode="bind"/>
          <target type="virtio" name="org.qemu.guest_agent.0"/>
        </channel>
        <memballoon model="virtio"/>
        <rng model="virtio">
          <backend model="random">/dev/urandom</backend>
        </rng>
      </devices>
    </domain>
    EOF
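
    After the playbook defines the VM, you can optionally review the rendered definition on the KVM host with virsh:

    sudo virsh dumpxml ol9-dev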
  7. Create a playbook.

    This playbook queries the KVM server for existing VMs and then deploys a new VM from the Oracle Linux cloud image.

    cat << EOF > create_vm.yml
    ---
    - name: create vm with community.libvirt collection
      hosts: kvm
      collections:
        - community.libvirt
        - community.crypto
      become: yes
      
      vars_files:
        - vars/defaults.yml
    
      tasks:
    
      - name: get list of existing VMs
        community.libvirt.virt:
          command: list_vms
        register: existing_vms
        changed_when: no
    
      - name: print list of existing VMs
        debug:
          var: existing_vms
    
      - name: create VM when it does not exist
        block:
    
        - name: download base image
          get_url:
            url: "{{ base_image_url }}"
            dest: "/tmp/{{ base_image_name }}"
            checksum: "sha256:{{ base_image_sha }}"
    
        - name: copy base image to libvirt directory
          ansible.builtin.copy:
            dest: "{{ libvirt_pool_dir }}/{{ vm_name }}.qcow"
            src: "/tmp/{{ base_image_name }}"
            force: no
            remote_src: yes 
            owner: qemu
            group: qemu
            mode: "0660"
          register: copy_results
    
        - name: generate a vm ssh keypair
          community.crypto.openssh_keypair:
            path: ~/.ssh/id_rsa
            size: 2048
            comment: vm ssh keypair
          register: vm_ssh_keypair
          become_user: "{{ username }}"
    
        - name: create vm meta-data
          ansible.builtin.template:
            src: templates/meta-data.j2
            dest: "~/meta-data"
          become_user: "{{ username }}"
    
        - name: read the vm ssh public key
          slurp:
            src: "~/.ssh/id_rsa.pub"
          register: vm_ssh_public_key
          become_user: "{{ username }}"
    
        - name: create var for public key
          ansible.builtin.set_fact:
            vm_public_key: "{{ vm_ssh_public_key.content | b64decode }}"
    
        - name: create vm user-data
          ansible.builtin.template:
            src: templates/user-data.j2
            dest: ~/user-data
          become_user: "{{ username }}"
          
        - name: generate iso containing cloud-init configuration
          shell: |
            genisoimage -output /tmp/{{ vm_name }}.iso -volid cidata -joliet -rock ~/user-data ~/meta-data
          become_user: "{{ username }}"
    
        - name: copy vm iso image to libvirt directory
          ansible.builtin.copy:
            dest: "{{ libvirt_pool_dir }}/{{ vm_name }}.iso"
            src: "/tmp/{{ vm_name }}.iso"
            force: no
            remote_src: yes 
            owner: qemu
            group: qemu
            mode: "0660"
        
        - name: remove vm iso image from tmp
          ansible.builtin.file:
            path: "/tmp/{{ vm_name }}.iso"
            state: absent
    
        - name: define the vm
          community.libvirt.virt:
            command: define
            xml: "{{ lookup('template', 'vm-template.xml.j2') }}"
    
        when: ( vm_name not in existing_vms.list_vms )
    
      - name: start the vm
        community.libvirt.virt:
          name: "{{ vm_name }}"
          state: running
        register: vm_start_results
        until: "vm_start_results is success"
        retries: 15
        delay: 2
    
      - name: remove the temporary file
        file:
          path: "/tmp/{{ base_image_name }}"
          state: absent
        when: cleanup_tmp | bool
    EOF
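
    Although this lab runs the playbook through Oracle Linux Automation Manager, you could also test it directly with Oracle Linux Automation Engine. The following sketch assumes the oracle user can connect to the KVM server over ssh and escalate privileges; the inventory file lives in /tmp so the git commands in the next section do not pick it up:

    cat << EOF > /tmp/inventory.ini
    [kvm]
    <hostname or ip address of the KVM server> ansible_user=oracle
    EOF

    ansible-playbook -i /tmp/inventory.ini create_vm.yml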
    

Add the Project to Source Control

  1. Initialize the project's working directory into a Git repository.

    Before initializing the repository, you need to perform some Git first-time setup steps.

    1. Set the default branch name used when initializing a project.

      git config --global init.defaultBranch main
    2. Set your Identity.

      Note: The email and name below are examples for this tutorial. Outside this tutorial, use your actual email and name, because this information gets immutably baked into each of your commits.

      git config --global user.email johndoe@example.com
      git config --global user.name "John Doe"
      
    3. Initialize the directory as a local Git repository.

      git init

      The command reports that it initialized an empty Git repository.

  2. Check the state of the working directory and project staging area.

    git status

    The state of the local repository indicates two untracked files, create_vm.yml and requirements.yml, and the directories vars and templates.

  3. Add and track the new files in the staging area.

    git add --all

    The --all option adds all untracked and changed files to the staging area.

  4. Commit the changes currently in the staging area.

    git commit -m 'initial commit'

    The -m option allows adding a comment to the committed changes.

  5. Create and initialize the remote Git repository.

    The free lab environment provides a Git server where we'll push the playbook project along with all the other dependencies.

    A remote is a shared repository that all project contributors use, stored on a code-hosting service such as GitHub or a self-hosted server.

    ssh git@<hostname or ip address of the Git server> "git init --bare /git-server/repos/olamkvm.git"

    Use the public IP address shown for the git-server VM on the free lab environment's Luna Lab Resources page.

    Accept the ECDSA key fingerprint to continue if a prompt appears.

    Example Output:

    [luna.user@lunabox olamkvm]$ ssh git@130.61.55.145 "git init --bare /git-server/repos/olamkvm.git"
    The authenticity of host '130.61.55.145 (130.61.55.145)' can't be established.
    ECDSA key fingerprint is SHA256:evXYs/yvZPo+tik03B9Q/2J8GCL/E+NPTA3rvcmcQdc.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '130.61.55.145' (ECDSA) to the list of known hosts.
    Initialized empty Git repository in /git-server/repos/olamkvm.git/
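
    Alternatively, you can add the Git server's host key to your known_hosts file beforehand, which avoids the interactive prompt:

    ssh-keyscan <hostname or ip address of the Git server> >> ~/.ssh/known_hosts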
  6. Add the new remote repository connection record.

    After adding the remote to the local repository, you can use it as a shortcut in other Git commands.

    git remote add origin git@<hostname or IP address of the Git server>:/git-server/repos/olamkvm.git

    The IP address is the address of the remote Git server, and the path after the colon is the repository's location on the remote Git server.

  7. Verify the newly added connection record.

    git remote -v

    Example Output:

    [luna.user@lunabox olamkvm]$ git remote -v
    origin	git@130.61.55.145:/git-server/repos/olamkvm.git (fetch)
    origin	git@130.61.55.145:/git-server/repos/olamkvm.git (push)

    The output shows the connection record origin pointing to the remote Git repository location for both the fetch and push Git commands.

  8. Push local repository changes to the remote repository.

    git push origin main

    Example Output:

    [luna.user@lunabox olamkvm]$ git push origin main
    Enumerating objects: 4, done.
    Counting objects: 100% (4/4), done.
    Delta compression using up to 4 threads
    Compressing objects: 100% (4/4), done.
    Writing objects: 100% (4/4), 468 bytes | 468.00 KiB/s, done.
    Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
    To 130.61.55.145:/git-server/repos/olamkvm.git
     * [new branch]      main -> main
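
    You can confirm that the remote repository now contains the main branch:

    git ls-remote origin

    The output lists the commit hash alongside refs/heads/main.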

With the create_vm.yml sample code existing on the remote Git repository, the playbook is available to an Oracle Linux Automation Manager Project.

Create Ansible Galaxy Credentials

These credentials allow Oracle Linux Automation Manager to pull the collections listed in the project's requirements.yml file from the public Ansible Galaxy hub.

  1. Open a terminal from the Luna Desktop and configure an SSH tunnel to the deployed Oracle Linux Automation Manager instance.

    ssh -L 8444:localhost:443 oracle@<hostname or ip address>

    In the free lab environment, use the IP address of the ol-node VM as it runs the Oracle Linux Automation Manager deployment.

  2. Open a web browser and enter the URL.

    https://localhost:8444

    Note: Approve the security warning based on the browser used. For Chrome, click the Advanced button and then the Proceed to localhost (unsafe) link.

  3. Log in to the Oracle Linux Automation Manager WebUI. Use the Username admin and the Password admin created during the automated deployment.

    olam2-login

  4. The WebUI displays after a successful login.

    olam2-webui

  5. Click Credentials under Resources in the navigation menu.

  6. Click the Add button.

  7. Enter or select the following values in the specific fields.

    For fields with a search or list of values, you can start typing the requested value and then select it.

    • Name: My Ansible Galaxy
    • Organization: Default
    • Credential Type: Ansible Galaxy/Automation Hub API Token

    Oracle Linux Automation Manager uses the Ansible Galaxy credentials when downloading roles and collections with the ansible-galaxy command.

    • Galaxy Server URL: https://galaxy.ansible.com

    olam2-ansible-galaxy-creds

  8. Review the entries and click the Save button.

  9. Click Organizations under Access in the navigation menu.

    Assigning the Ansible Galaxy credential within the Organization enables the download of any required roles or collections from within the git project.

  10. Click the Default organization link and then the Edit button.

  11. Select the My Ansible Galaxy credential in the Galaxy Credentials field and click the Select button.

  12. Review and click the Save button.

    olam2-ansible-galaxy-org

Create Machine Credentials

These credentials allow Oracle Linux Automation Manager to connect via ssh to the KVM virtualization system.

  1. Click Credentials under Resources in the navigation menu.

  2. Click the Add button.

  3. Enter or select the following values in the specific fields.

    For fields with a search or list of values, you can start typing the requested value and then select it.

    • Name: My KVM Server
    • Organization: Default
    • Credential Type: Machine

    Oracle Linux Automation Manager uses the Machine credentials to set the information required when establishing an ssh connection to a host.

    The page refreshes, requesting the Type Details.

    • Username: oracle

    olam2-machine-creds

    1. Click the Browse button for the SSH Private Key.

      A dialog box appears displaying the Open File window.

    2. Right-click in the central panel of the Open File window and select Show Hidden Files in the pop-up dialog box.

      olam2-open-file

    3. Click anywhere in the central panel to dismiss the dialog box.

    4. Click the Home location in the navigation menu on the left side of the Open File window.

    5. Double-click the .ssh folder in the list, then double-click the id_rsa file.

      This action copies the contents of the id_rsa file into the SSH Private Key field.

  4. Review the entries, then scroll to the bottom and click the Save button.

Create an Inventory

  1. Click Inventories in the navigation menu.

  2. Click the Add button and select Add inventory from the drop-down list of values.

  3. Enter or select the following values in the specific fields.

    • Name: KVM Servers
    • Instance Groups: controlplane
  4. Review and click the Save button.

    olam2-inv

Add a Group to an Inventory

A group within an inventory is a classification of hosts or other groups that allows you to control a set of hosts for a given task.

  1. Click the Groups tab on the KVM Servers Details page.

    olam2-inv-kvm-details

  2. Click the Add button.

  3. Enter or select the following values in the specific fields.

    • Name: kvm
  4. Review and click the Save button.

    olam2-kvm-group-details

Add a Host to the Inventory Group

  1. Click the Hosts tab on the kvm Group details page.

    olam2-kvm-group-details

  2. Click the Add button and select Add new host from the drop-down list of values.

  3. Enter or select the following values in the specific fields.

    • Name: the hostname, FQDN, or IP address of the host

    In the free lab environment, use the IP address of the kvm-server VM listed on the Luna Lab Resources page.

  4. Review and click the Save button.

    olam2-grp-new-host

Ping the Inventory Group

Use the ping module to verify Oracle Linux Automation can connect to the host within the inventory group.

  1. Use the breadcrumbs and click on KVM Servers.

  2. Click the Groups tab.

  3. Check the box next to the kvm group and click the Run Command button.

    The Run Command pop-up dialog appears.

  4. Select the ping module and click the Next button.

  5. Select the OLAM EE (latest) Execution Environment and click the Next button.

  6. Select the My KVM Server Machine Credential and click the Next button.

  7. Review and click the Launch button.

    A job launches and displays the output from the ping module.

    olam2-kvm-grp-ping
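
The Run Command feature is the WebUI equivalent of an Oracle Linux Automation Engine ad-hoc command, such as the following sketch that assumes an inventory file defining the kvm group (like the one sketched earlier in this lab):

    ansible kvm -i /tmp/inventory.ini -m ping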

Add Source Control Credential

  1. Click Credentials in the navigation menu.

  2. Click the Add button.

  3. Enter or select the following values in the specific fields.

    • Name: Git Server
    • Organization: Default
    • Credential Type: Source Control

    olam2-git-cred1

    The page refreshes, requesting the Type Details.

    • Username: git
    • SCM Private Key:

    Copy and paste the contents of the ~/.ssh/id_rsa file from the free lab environment desktop into the SCM Private Key field.

    olam2-git-cred2

  4. Review and click the Save button.

    olam2-git-cred3

Create a Project

  1. Click Projects in the navigation menu.

  2. Click the Add button.

  3. Enter or select the following values in the specific fields.

    • Name: My Project
    • Execution Environment: OLAM EE (latest)
    • Source Control Credential Type: Git

    olam2-proj1

    The page refreshes, requesting the Type Details.

    • Source Control URL: git@<hostname or ip address of git server>:/git-server/repos/olamkvm.git
    • Source Control Credential: Git Server

    Replace the hostname or IP address in the Source Control URL with the actual git server in the free lab environment.

  4. Review and click the Save button.

    olam2-proj2

  5. Review the project sync status.

    After project creation, the project displays its sync status in the Details summary. The status transitions from Running to Successful if the configuration is correct and the Git server is reachable.

    olam2-proj-sync

Create a Job Template

  1. Click Templates in the navigation menu.

  2. Click the Add button and select Add job template from the drop-down list of values.

  3. Enter the required values.

    • Name: My Template
    • Job Type: Run
    • Inventory: KVM Servers
    • Project: My Project
    • Execution Environment: OLAM EE (latest)
    • Playbook: create_vm.yml
    • Credentials: My KVM Server

    olam2-temp1

  4. Review, scroll down, and click the Save button.

    olam2-temp2

  5. Launch the template.

    Launch a job from the template summary page by clicking the Launch button.

    olam2-temp-launch

    If successful, the job launches and displays the output from the template. The standard output shows the playbook running and outputs the results of the playbook.

    olam2-temp-output

Verify Virtual Machine Creation

  1. Open a terminal and connect via ssh to the kvm-server node.

    ssh oracle@<hostname or IP address>
  2. Get a list of running VMs.

    sudo virsh list

    Example Output:

    [oracle@kvm-server ~]$ sudo virsh list
     Id   Name      State
    -------------------------
     1    ol9-dev   running
  3. Get the IP address of the ol9-dev VM.

    sudo virsh net-dhcp-leases default

    Example Output:

    [oracle@kvm-server ~]$ sudo virsh net-dhcp-leases default
     Expiry Time           MAC address         Protocol   IP address           Hostname   Client ID or DUID
    ------------------------------------------------------------------------------------------------------------
     2023-04-06 18:59:33   52:54:00:6e:93:07   ipv4       192.168.122.167/24   ol9-dev    01:52:54:00:6e:93:07
  4. Connect to the VM.

    ssh opc@<ip address of the virtual machine>
  5. Disconnect from the VM.

    exit
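
As an alternative to reading the DHCP leases, you can query the VM's interface addresses directly on the kvm-server node:

    sudo virsh domifaddr ol9-dev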

(Optional) Create Another Virtual Machine

The playbook allows the creation of another VM by changing the vm_name variable.

  1. Switch to the browser window containing the Oracle Linux Automation Manager WebUI, and log in if necessary.

  2. Click Templates in the navigation menu.

  3. Click the Edit Template icon for My Template.

    olam2-temp-edit

  4. Add the variable vm_name with a value of ol9-new to the Variables section.

    olam2-temp-vars
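
    The Variables field accepts YAML or JSON. For example, entering the following YAML overrides the default defined in vars/defaults.yml:

    ---
    vm_name: ol9-new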

  5. Scroll down and click the Save button.

  6. Launch the template.

  7. Repeat the virtual machine verification steps and connect to the newly created VM.

Summary

The Oracle Linux Automation Manager job output and the ability to connect using ssh to the virtual machine confirm everything works.

For More Information

Oracle Linux Automation Manager Documentation
Oracle Linux Automation Manager Installation Guide
Oracle Linux Automation Manager Training
Create VMs with KVM on Oracle Linux
Oracle Linux Training Station
