Use an OCI Dynamic Inventory with Oracle Linux Automation Engine

Oracle Linux Automation Engine, the open-source software for provisioning and configuration management, uses an inventory file to work against managed nodes or hosts in your infrastructure. This inventory file contains a list of servers, their IP addresses, and other optional connection information.

A static inventory file works well if your infrastructure rarely changes.

In the cloud, however, your infrastructure is often in constant flux, so it helps to have the inventory update itself dynamically as hosts come and go.
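A static inventory, for comparison, is just a file you maintain by hand. A minimal sketch in INI format, where the hostnames, addresses, and groups are invented examples:

```ini
# Hypothetical static inventory; hosts, IPs, and groups are examples only
[webservers]
web01 ansible_host=10.0.0.11 ansible_user=opc
web02 ansible_host=10.0.0.12 ansible_user=opc

[dbservers]
db01 ansible_host=10.0.0.21 ansible_user=opc
```

Every time a host is added or retired, someone must edit a file like this; the dynamic inventory plugin removes that chore.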


In this lab, you'll learn to:

  • Set up Oracle Linux Automation Engine
  • Create an OCI Dynamic Inventory
  • Use the OCI Dynamic Inventory with a playbook


You'll need:

  • A system with Oracle Linux 8 installed with the following configuration:
    • a non-root user with sudo permissions

Set Up the Lab Environment

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

Install Oracle Linux Automation Engine Control Node

The control node is the system from which the Oracle Linux Automation Engine playbooks run. Before running playbooks, install the Oracle Linux Automation Engine packages.

  1. If not already connected, open a terminal and connect via ssh to the ol-control-node system.

    ssh oracle@<ip_address_of_ol-control-node>
  2. Install and enable the Oracle Linux Automation Manager repo.

    This repository contains the supported version of Oracle Linux Automation Engine.

    sudo dnf install -y oraclelinux-automation-manager-release-el8
    sudo dnf config-manager --enable ol8_automation
  3. Install the Oracle Linux Automation Engine package and dependencies.

    sudo dnf install -y ansible
  4. Test the package installation.

    ansible --version

    The output displays the command's version, configuration details, and Python version dependency.

    Example output:

    ansible 2.9.27
      config file = /etc/ansible/ansible.cfg
      configured module search path = ['/home/oracle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
      ansible python module location = /usr/lib/python3.6/site-packages/ansible
      executable location = /usr/bin/ansible
      python version = 3.6.8 (default, Nov 10 2021, 06:50:23) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3.0.2)]

Set Up the Oracle Cloud Infrastructure SDK for Python

The OCI Dynamic Inventory plugin requires a working OCI SDK for Python configuration on the control node.

  1. Install the OCI SDK for Python.

    sudo dnf install -y python36-oci-sdk
  2. Create the SDK default configuration directory.

    mkdir -p ~/.oci
  3. Create the SDK default configuration file.

    The free lab provides a pre-generated SDK configuration which we can copy to the control node.

    1. Open a new terminal from the desktop environment.

      Note: Do not connect to the ol-control-node.

    2. Copy all of the SDK configuration files to the control node.

      scp ~/.oci/* oracle@<ip_address_of_ol-control-node>:~/.oci/.

    Example Configuration:

    The following example shows key values in a configuration file.
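A minimal SDK configuration file has roughly the following shape. Every value below is a placeholder, and the compartment-id key is an extra entry this lab relies on rather than a standard SDK setting:

```ini
[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
key_file=/home/oracle/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
compartment-id=ocid1.compartment.oc1..exampleuniqueID
```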


    Note: When following this tutorial outside of the free lab environment, see the instructions provided within the SDK and CLI Configuration File and Required Keys and OCIDs sections of the OCI Documentation.

  4. Switch to the terminal window connected to the control node.

  5. Update the location of the key_file in the SDK configuration file.

    When copying the SDK configuration file from the desktop environment, we must modify the user's home directory portion of the key_file.

    sed -i 's/luna.user/oracle/g' ~/.oci/config

    If you modify the file manually with your favorite editor, you can replace /home/luna.user with the shorthand syntax of ~.

  6. Verify that the SDK configuration works.

    1. Switch or open a terminal to the control node.

    2. Create a test Python script. This example saves it as object.py.

      echo 'import oci
      object_storage_client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
      result = object_storage_client.get_namespace()
      print("Current object storage namespace: {}".format(result.data))' > ~/object.py

      The script displays the Object Storage namespace for the configured OCI tenancy.

    3. Run the script.

      python3 ~/object.py

      Example Output:

      [oracle@ol-control-node ~]$ python3 ~/object.py
      Current object storage namespace: frn7gzeg0xzn

    The namespace is unique to the configured tenancy.

Install the Oracle Cloud Infrastructure Ansible Collection

The OCI Ansible Collection contains a set of Ansible modules to automate cloud infrastructure provisioning and configuration, orchestration of complex operational processes, and deployment and update of your software assets.

  1. Install the OCI Ansible Collection.

    ansible-galaxy collection install oracle.oci

    If a previous version is already installed, get the latest release by rerunning the command with the --force option.

    ansible-galaxy collection install --force oracle.oci

Working with OCI Dynamic Inventory

Oracle includes a dynamic inventory plugin in the OCI Ansible Collection.

  1. Create a project directory.

    cd ~
    mkdir myproject
    cd myproject
  2. Configure the inventory plugin by creating a YAML configuration source.

    The source filename must be <filename>.oci.yml or <filename>.oci.yaml, where <filename> is a meaningful, user-defined identifier.

    cat << EOF > myproject.oci.yml
    plugin: oracle.oci.oci
    # Optional fields to specify oci connection config:
    config_file: ~/.oci/config
    config_profile: DEFAULT
    EOF

Test the Inventory Plugin

  1. Create an inventory graph.

    ansible-inventory -i myproject.oci.yml --graph

    Example Output:

    ansible-inventory -i myproject.oci.yml --graph
    [WARNING]:  * Failed to parse /home/oracle/myproject/myproject.oci.yml with
    auto plugin: Compartment
    either does not exist or you don't have permission to access it. complete error
    : {'opc-request-id': '7DAB68C994C1487BA5DDA511775A42F3/C7C0F8988AC247B9DF065A80
    6764D799/12DA93943F59BABCE34CD40668838B61', 'code': 'NotAuthorizedOrNotFound',
    'message': 'Authorization failed or requested resource not found', 'status':
    [WARNING]:  * Failed to parse /home/oracle/myproject/myproject.oci.yml with
    yaml plugin: Plugin configuration YAML file, not YAML inventory
    [WARNING]:  * Failed to parse /home/oracle/myproject/myproject.oci.yml with ini
    plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed,
    this character is reserved to provide a port.
    [WARNING]: Unable to parse /home/oracle/myproject/myproject.oci.yml as an
    inventory source
    [WARNING]: No inventory was parsed, only implicit localhost is available

    The output shows warnings and errors. So what went wrong?

    The error occurs because the plugin needs to know the compartment OCID.

    Note: Providing the tenancy OCID instead, with a sufficient level of permissions, generates an inventory for the entire tenancy.

    Since the plugin cannot read the compartment OCID directly from the SDK configuration file, add it to the plugin source file.

  2. Grab the compartment OCID from the SDK configuration file and assign it to the variable comp_ocid.

    comp_ocid=$(grep -i compartment ~/.oci/config | sed -e 's/compartment-id=//g')
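As an aside, the same value can be extracted without grep and sed. A sketch using Python's configparser, where the config contents and OCID are invented placeholders:

```python
# Sketch: pull the non-standard compartment-id key out of an OCI SDK-style
# config file with configparser; the contents and OCIDs here are invented.
import configparser

sample_config = """
[DEFAULT]
user=ocid1.user.oc1..exampleuser
key_file=/home/oracle/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..exampletenancy
region=us-ashburn-1
compartment-id=ocid1.compartment.oc1..examplecompartment
"""

parser = configparser.ConfigParser()
parser.read_string(sample_config)

# Keys in the [DEFAULT] section are exposed through the section proxy
comp_ocid = parser["DEFAULT"]["compartment-id"]
print(comp_ocid)  # prints ocid1.compartment.oc1..examplecompartment
```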
  3. Append a compartment entry to the plugin source file.

    cat << EOF >> myproject.oci.yml
    compartments:
      - compartment_ocid: "$comp_ocid"
        fetch_compute_hosts: true
    EOF

    Setting fetch_compute_hosts to true limits the inventory to compute hosts, ignoring other instance types deployed within the compartment.

  4. Rerun the test.

    ansible-inventory -i myproject.oci.yml --graph

    Example Output:

    [oracle@ol-control-node myproject]$ ansible-inventory -i myproject.oci.yml --graph
    [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
      |  |--
      |  |--
      |  |--
      |  |--
      |  |--
      |  |--

    Our example shows the single control node instance listed under several inventory groups, each designated by the @ character.

    What if we wanted the private IP address instead? You might need this based on the location of the control node within the configured cloud infrastructure, or because the requested compute instances have only a private IP address.

  5. Change the hostname format.

    Add the following section to the plugin source file.

    cat << EOF >> myproject.oci.yml
    hostname_format_preferences:
      - "private_ip"
      - "public_ip"
    EOF

    The example format above prioritizes a system's private IP address over its public IP address. For more details on this configuration, see Hostname Format Preferences in the documentation.

  6. Test again.

    ansible-inventory -i myproject.oci.yml --graph

    Example Output:

    [oracle@ol-control-node myproject]$ ansible-inventory -i myproject.oci.yml --graph
    [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
      |  |--
      |  |--
      |  |--
      |  |--
      |  |--
      |  |--
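At this point the plugin source file, assembled from the appends above, should look roughly like the following. This is a sketch: the OCID is a placeholder, and the compartments: and hostname_format_preferences: keys reflect this tutorial's reading of the plugin's options:

```yaml
plugin: oracle.oci.oci
# Optional fields to specify oci connection config:
config_file: ~/.oci/config
config_profile: DEFAULT
compartments:
  - compartment_ocid: "ocid1.compartment.oc1..exampleuniqueID"
    fetch_compute_hosts: true
hostname_format_preferences:
  - "private_ip"
  - "public_ip"
```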

Run a Playbook

With the dynamic inventory set up and configured, we can use it to run a simple playbook.

  1. Before moving forward, return the hostname format preferences to the default settings.

    sed -i '/hostname_format/,$d' myproject.oci.yml; sed -i '$d' myproject.oci.yml

    Note: This step is necessary as the lab environment does not know how to communicate with the private IP address.

  2. Create a playbook that pings the host.

    cat << EOF > ping.yml
    ---
    - hosts: all
      tasks:
        - name: Ansible ping test
          ping:
    EOF

    The hosts: all line targets all systems reported in the inventory, as @all is the top-level group. You can modify this playbook to use a different group from the graph output; be sure to remove the leading @ character.
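As an example of narrowing the target, a hypothetical playbook that pings only one group from the graph output; the group name below is invented, so substitute a real one from your own run, without the @:

```yaml
# Hypothetical playbook; "region_us_ashburn_ad_1" is an invented group name
- hosts: region_us_ashburn_ad_1
  tasks:
    - name: Ansible ping test
      ping:
```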

  3. Run the playbook.

    ansible-playbook -u opc -i myproject.oci.yml ping.yml

    Accept the ECDSA key fingerprint when prompted.

    The -i option sets the dynamic inventory file used.

    When attempting a connection, the -u option sets the remote SSH user. We connect as opc, the default user on Oracle Linux compute instances in OCI, rather than the oracle account used for this lab session.

    Example Output:

    [oracle@ol-control-node myproject]$ ansible-playbook -i myproject.oci.yml ping.yml -u opc
    [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
    PLAY [all] *************************************************************************************************************
    TASK [Gathering Facts] *************************************************************************************************
    [WARNING]: Platform linux on host is using the discovered Python interpreter at /usr/bin/python, but
    future installation of another Python interpreter could change this. See for more information.
    ok: []
    TASK [Ansible ping test] ***********************************************************************************************
    ok: []
    PLAY RECAP *************************************************************************************************************
                : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

For More Information

See other related resources: