Use an OCI Dynamic Inventory with Oracle Linux Automation Engine
Introduction
Oracle Linux Automation Engine, the open-source software for provisioning and configuration management, uses an inventory file to work against managed nodes or hosts in your infrastructure. This inventory file contains a list of servers, their IP addresses, and other optional connection information.
A static inventory file works well if your infrastructure rarely changes. In the cloud, however, your infrastructure is likely in constant flux, so it helps to have your inventory updated dynamically as hosts come and go.
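For reference, a static inventory is often a simple INI-style file that lists hosts and groups (a minimal sketch with placeholder names and addresses):

[webservers]
web01 ansible_host=192.0.2.10
web02 ansible_host=192.0.2.11

[databases]
db01 ansible_host=192.0.2.20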
Objectives
In this tutorial, you'll learn to:
- Set up Oracle Linux Automation Engine
- Create an OCI dynamic inventory
- Use the OCI dynamic inventory with a playbook
Prerequisites
A minimum of two Oracle Linux systems with the following configuration:
- a non-root user with sudo permissions
- an SSH key pair for the non-root user
- the ability to SSH from one host to another using a passwordless SSH login (see the sketch after this list)
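If you need to set up the passwordless login yourself, the usual approach is to generate a key pair and copy the public key to each remote host (a sketch; the user and address are placeholders):

ssh-keygen -t rsa -b 4096
ssh-copy-id oracle@<ip_address_of_remote_host>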
Deploy Oracle Linux Automation Engine
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
Open a terminal on the Luna Desktop.
Clone the linux-virt-labs GitHub project.

git clone https://github.com/oracle-devrel/linux-virt-labs.git
Change into the working directory.
cd linux-virt-labs/olam
Install the required collections.
ansible-galaxy collection install -r requirements.yml
Update the Oracle Linux instance configuration.
cat << EOF | tee instances.yml > /dev/null
compute_instances:
  1:
    instance_name: "ol-control-node"
    type: "control"
  2:
    instance_name: "ol-host"
    type: "remote"
EOF
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@instances.yml"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.

The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Linux is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
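Putting the optional variables together, a deployment that selects both the Intel shape and Oracle Linux 9 might look like this (a sketch composed from the flags shown above):

ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@instances.yml" -e instance_shape="VM.Standard3.Flex" -e os_version="9"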
Install Oracle Linux Automation Engine Control Node
The control node is the system from which the Oracle Linux Automation Engine playbooks run. Before running playbooks, you must install the Oracle Linux Automation Engine packages.
Open a new terminal and connect via SSH to the ol-control-node system.
ssh oracle@<ip_address_of_instance>
Install the Oracle Linux Automation Engine package and dependencies.
sudo dnf install -y ansible-core
The ansible-core package is available in the ol8_appstream repository.
Test the package installation.
ansible --version
Review the output and note the default version of Python that Oracle Linux Automation Engine uses. That is the environment where we must install the Oracle Cloud Infrastructure (OCI) SDK for Python.
Note: If the output shows ERROR: Ansible requires the locale encoding to be UTF-8; Detected None., this indicates an incorrect locale setting for ansible. Fix the issue by setting these two environment variables:

export LC_ALL="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
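To make the locale fix persist across sessions, you can append the same variables to the user's shell profile (a minimal sketch, assuming the default bash shell):

echo 'export LC_ALL="en_US.UTF-8"' >> ~/.bashrc
echo 'export LC_CTYPE="en_US.UTF-8"' >> ~/.bashrc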
Set Up Oracle Cloud Infrastructure SDK for Python
The OCI Dynamic Inventory plugin requires a working OCI SDK for Python configuration on the control node. We can install the OCI SDK using the Oracle Linux RPM or PIP, the package installer for Python.
Install the OCI SDK for Python using PIP.
Install the packages and dependencies for PIP.
sudo dnf install -y python3.12-pip python3.12-setuptools
Install the Python packages.
/usr/bin/python3.12 -m pip install oci
Add the --proxy option if you are behind a proxy. Details are available in the help by running the command python3.12 -m pip help install.
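For example, an installation behind a proxy might look like the following (a sketch; the proxy URL is a placeholder for your own):

/usr/bin/python3.12 -m pip install --proxy http://proxy.example.com:3128 oci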
Test the OCI SDK for Python installation by printing its version.
python3.12 -c "import oci;print(oci.__version__)"
Create the OCI SDK default configuration directory.
mkdir -p ~/.oci
Create the SDK default configuration file.
The free lab provides a pre-generated SDK configuration, which we can copy to the ol-control-node system using scp.

Open a new terminal from the desktop environment.
Copy all of the SDK configuration files to the ol-control-node system.
scp ~/.oci/* oracle@<ip_address_of_instance>:~/.oci/.
exit
If you're following this tutorial outside of the free lab environment, see the instructions provided within the SDK and CLI Configuration File and Required Keys and OCIDs sections of the OCI Documentation to generate your OCI configuration file.
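For reference, a generated SDK configuration file follows this general layout (a sketch with placeholder values; your OCIDs, fingerprint, key path, and region will differ):

[DEFAULT]
user=ocid1.user.oc1..<unique_id>
fingerprint=<key_fingerprint>
key_file=/home/oracle/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_id>
region=us-ashburn-1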
Switch to the terminal window connected to the ol-control-node system.
Update the location of the key_file in the SDK configuration file.

When copying the SDK configuration file from the desktop environment, we must modify the user's home directory portion of the key_file.

sed -i 's/luna.user/oracle/g' ~/.oci/config
Create a test Python script to verify the SDK is working.
cat << EOF | tee test.py > /dev/null
import oci

object_storage_client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
result = object_storage_client.get_namespace()
print("Current object storage namespace: {}".format(result.data))
EOF
The test.py script displays the Object Storage namespace for the configured OCI tenancy and compartment.

Run the script.

python3.12 test.py

The test script successfully prints the unique namespace of the configured tenancy.
Install the Oracle Cloud Infrastructure Ansible Collection
The OCI Ansible Collection contains a set of modules to automate cloud infrastructure provisioning and configuration, orchestration of complex operational processes, and deployment and update of your software assets.
Create a project directory.
mkdir ~/myproject
Create a requirements file.
cat << EOF | tee ~/myproject/requirements.yml > /dev/null
---
collections:
  - name: oracle.oci
EOF
Install the OCI Ansible Collection.
ansible-galaxy collection install -r ~/myproject/requirements.yml
If you have installed a previous version, get the latest release by running the command with the --force option.

ansible-galaxy collection install --force oracle.oci
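To confirm the installed version, list the collection (the list subcommand accepts a collection name to filter the output):

ansible-galaxy collection list oracle.oci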
Working with OCI Dynamic Inventory
Oracle includes its dynamic inventory plugin in the OCI Ansible Collection.
Configure the inventory plugin by creating a YAML configuration source.
The source filename needs to be <filename>.oci.yml or <filename>.oci.yaml, where <filename> is a user-defined helpful identifier.

cat << EOF | tee ~/myproject/myproject.oci.yml > /dev/null
---
plugin: oracle.oci.oci

# Optional fields to specify oci connection config:
config_file: ~/.oci/config
config_profile: DEFAULT
EOF
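If you want to review all of the plugin's available options, the collection ships the plugin documentation, which you can read locally with ansible-doc (a standard way to inspect inventory plugins):

ansible-doc -t inventory oracle.oci.oci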
Test the inventory plugin by creating an inventory graph.
ansible-inventory -i ~/myproject/myproject.oci.yml --graph
The output shows a series of warnings and errors. So what went wrong?
The error occurs because the plugin requires knowing the compartment OCID. If you provide the tenancy OCID rather than the compartment OCID and have the correct permissions, the plugin will generate an inventory for the entire tenancy.
Since the plugin cannot read the compartment OCID information directly from the SDK configuration file, add it to the plugin configuration source file.
Grab the compartment OCID from the SDK configuration file and assign it to the variable comp_ocid.
comp_ocid=$(grep -i compartment ~/.oci/config | sed -e 's/compartment-id=//g')
Append a compartment parameter to the plugin source file.
cat << EOF | tee -a ~/myproject/myproject.oci.yml > /dev/null
compartments:
  - compartment_ocid: "$comp_ocid"
    fetch_compute_hosts: true
EOF
Setting fetch_compute_hosts to true results in the inventory gathering information only on compute hosts and ignoring other instance types deployed within the compartment.

Rerun the test.
ansible-inventory -i ~/myproject/myproject.oci.yml --graph
Our example shows the compute instances available within the compartment as a listing of inventory groups designated by the @ character and displays the instance's public IP address.

What if we wanted the private IP address?
Using the private IP address may be necessary depending on the physical location of the control node or the configured network topology within the cloud infrastructure. It is also required when the requested compute instances have only a private IP address.
Change the plugin hostname format parameter by updating the plugin configuration source file.
cat << EOF | tee -a ~/myproject/myproject.oci.yml > /dev/null
hostname_format_preferences:
  - "private_ip"
  - "public_ip"
EOF
The example format above will prioritize a system's private IP address over the public IP address. For more details on this configuration, see Hostname Format Preferences in the documentation.
Retest the plugin.
ansible-inventory -i ~/myproject/myproject.oci.yml --graph
The output now displays the private IP address.
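Beyond the graph view, you can also dump the complete inventory, including the host variables the plugin gathers for each instance, as JSON. This uses the standard --list option of the ansible-inventory command:

ansible-inventory -i ~/myproject/myproject.oci.yml --list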
Run a Playbook
With the dynamic inventory set up and configured, we can use it to run a simple playbook. Ensure you enable SSH access between your control node and any remote hosts. We configured this using passwordless SSH logins during the initial deployment of the instances.
Create a playbook that pings the host.
cat << EOF | tee ~/myproject/ping.yml > /dev/null
---
- hosts: all,!$(hostname -i)
  tasks:
    - name: Ansible ping test
      ansible.builtin.ping:
EOF
Oracle Linux Automation Engine expects a comma-separated list of hosts or groups after the - hosts: entry, and the ! indicates that it should exclude those entries. The all entry will ping each host shown in the inventory as @all within the top-level group. You can modify this playbook to use a different group from the graph output by removing the @ character from its name and entering that name into the - hosts: entry, as shown in the sketch after the option notes below.

Run the playbook.
ansible-playbook -u opc -i ~/myproject/myproject.oci.yml ~/myproject/ping.yml
Accept the ECDSA key fingerprint when prompted.
The -i option sets the dynamic inventory file used.

The -u option sets the remote SSH user when attempting a connection.
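As noted earlier, you can target a specific inventory group instead of all. A minimal sketch, assuming your graph output contains a group displayed as @region_us_ashburn_1 (a hypothetical group name; substitute one from your own graph output):

cat << EOF | tee ~/myproject/ping_group.yml > /dev/null
---
- hosts: region_us_ashburn_1
  tasks:
    - name: Ansible ping test
      ansible.builtin.ping:
EOF

Run it with the same inventory and user options as before:

ansible-playbook -u opc -i ~/myproject/myproject.oci.yml ~/myproject/ping_group.yml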
Next Steps
Completing the playbook run with an ok status confirms that Oracle Linux Automation Engine successfully uses the OCI dynamic inventory to communicate with the remote instance it discovers within the compartment. Continue learning and use this feature to help manage your fleet of OCI instances and perform routine administration tasks on Oracle Linux.