Deploy HAProxy using Podman


Introduction

HAProxy is a well-known and widely used open source solution that delivers load balancing and proxy services for both HTTP (Layer 7) and TCP (Layer 4) by spreading incoming requests across multiple servers. For more details of the services that HAProxy does and does not provide, refer to the upstream documentation. HAProxy can be installed locally on Oracle Linux, or run as a container using Podman. This lab describes how to use HAProxy with Podman.

Clarification: This document follows the long-established custom of using "HAProxy" to refer to the product and "haproxy" to refer to the executable, although many other sources use the two forms interchangeably.

Objectives

This lab shows how to:

  • Use HAProxy as a Podman-based container
  • Configure a simple deployment using three back-end servers
  • Confirm the deployment runs as expected

Note: The steps provided do not include how to configure HAProxy to use certificates. Therefore, this deployment is recommended only for non-production use or an internal/air-gapped environment.

Requirements

Four systems with Oracle Linux and Podman installed, whose responsibilities are divided as follows:

Server Name            Role/Purpose
ol-server              Hosts the HAProxy load balancer
web01, web02, web03    Host the web application

Set Up the Lab Environment

Note: When using the free lab environment, see Oracle Linux Lab Basics for connection and other usage instructions.

  1. Open a terminal and connect via ssh to the ol-server instance if not already connected.

    ssh oracle@<ip_address_of_instance>

    Important: Use the same process to connect to each of the back-end servers (web01, web02 & web03) as needed.

How to Use This Lab

IMPORTANT: Because four servers are used in this example, assume that all steps are completed on the 'Front-End' server, which in this scenario is ol-server, unless otherwise indicated. Whenever a step needs to be completed on the 'Back-End' servers, this is clearly indicated at the beginning of the step. The back-end servers in this scenario are:

  • web01, web02 and web03

(Optional) Confirm Podman Works

The container-tools package in Oracle Linux provides the latest versions of Podman, Buildah, Skopeo, and associated dependencies.
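
If Podman is not already installed, the container-tools package can be added first. A minimal sketch, assuming a dnf-based Oracle Linux release:

    sudo dnf install -y container-tools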

  1. Check the version of Podman.

    podman -v
  2. Confirm the Podman CLI is working.

    podman run quay.io/podman/hello

    Example Output:

    [oracle@ol-server ~]$ podman run quay.io/podman/hello
    Trying to pull quay.io/podman/hello:latest...
    Getting image source signatures
    Copying blob f82b04e85914 done  
    Copying config dbd85e09a1 done  
    Writing manifest to image destination
    Storing signatures
    !... Hello Podman World ...!
    
             .--"--.           
           / -     - \         
          / (O)   (O) \        
       ~~~| -=(,Y,)=- |         
        .---. /`  \   |~~      
     ~/  o  o \~~~~.----. ~~   
      | =(X)= |~  / (O (O) \   
       ~~~~~~~  ~| =(Y_)=-  |   
      ~~~~    ~~~|   U      |~~ 
    
    Project:   https://github.com/containers/podman
    Website:   https://podman.io
    Documents: https://docs.podman.io
    Twitter:   @Podman_io

Configure the Load Balancer (HAProxy)

A load balancer handles incoming web traffic by forwarding requests evenly across all of the HTTP servers in its pool. It can also detect when any server in the pool becomes unresponsive and automatically stop forwarding traffic to that server. In this scenario the load balancer used is HAProxy; however, this is not a requirement, and any load balancer of choice can be substituted. Additionally, in this scenario the load balancer tier is not configured for high availability. A production deployment would normally require at least two load balancer instances, configured in either active-active or active-standby mode, to ensure service levels are maintained. If this is required, follow the load balancer's documentation to configure it for high availability.

The following steps demonstrate how to deploy HAProxy, and configure three HTTP servers behind it, to confirm that HAProxy works.

  1. Get the HAProxy Container Image.

    podman pull docker.io/haproxytech/haproxy-alpine:2.7
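
    To confirm the image is now available locally, list it; the tag column should show 2.7:

    podman images docker.io/haproxytech/haproxy-alpine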

Create the HAProxy Configuration File

HAProxy is very versatile in the way it can be configured; for this reason the haproxy image does not ship with a configuration file. Before continuing, a new configuration file (haproxy.cfg) must be created.

  1. Create a new directory for the HAProxy configuration and change into it.

    mkdir haproxy
    cd haproxy
    
  2. Create the configuration file and enter the HAProxy configuration details that will be used for this test.

    Populate the newly created haproxy.cfg file.

    cat > haproxy.cfg <<EOF
    global
      # Bind the Runtime API to a UNIX domain socket, and/or an IP address
      stats socket /var/run/api.sock user haproxy group haproxy mode 660 level admin expose-fd listeners
      log stdout format raw local0 info
      
    defaults
      # Set the Proxy mode to http (Layer 7) or tcp (Layer 4)
      mode http
      timeout connect 10s
      timeout server 1m
      timeout client 1m
      timeout http-request 10s
      log global
      
    frontend stats
      bind *:8404
      stats enable
      stats uri /
      stats refresh 10s
      
    frontend myfrontend
      # Receive HTTP traffic on all IP addresses assigned to the server on Port 80
      bind :80
    
      # Choose the default pool of backend servers (important if several pools are defined)
      default_backend webservers
      
    backend webservers
      # By default requests are sent to the server pool using round-robin load-balancing
      balance   roundrobin
    
      # Enable HTTP health checks (see 'check' at the end of each server definition)
      option httpchk
    
      # Define the servers where HTTP traffic will be forwarded.  Note that the format used is:
      # 
      # server <name> <hostname>:<listening port> check
      # Note: health checks are performed only on servers that include the `check` keyword.
      #
      server web01 web01:8080 check
      server web02 web02:8080 check
      server web03 web03:8080 check
    EOF
    

    NOTE: It is okay to ignore the following warnings (if they occur):

    -bash: check: command not found
    -bash: option: command not found
  3. Confirm the haproxy.cfg file's contents.

    Take a minute to review the comments contained within the haproxy.cfg file. These provide an overview of the configuration for this HAProxy server.

    cat ./haproxy.cfg

    Note: The HAProxy server won't be started yet. The next steps describe how to set up the three back-end servers (web01, web02 and web03) that HAProxy will forward incoming HTTP traffic to.
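
    Optionally, the configuration syntax can be validated before the server is started by running haproxy in check mode inside a throwaway container. A sketch, assuming the image passes the command through to the haproxy binary (as the upstream haproxy images do) and that the back-end hostnames resolve from this host:

    podman run --rm -v /home/oracle/haproxy:/usr/local/etc/haproxy:Z docker.io/haproxytech/haproxy-alpine:2.7 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg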

Configure the HTTP Echo Servers

Set Firewall Rules

Open the firewall ports on each of the back-end servers (web01, web02 and web03) to allow the web application that will be deployed to communicate with HAProxy.

  1. (On web01, web02 & web03) Update Firewall Rules.

    sudo firewall-cmd --permanent --add-port=8080/tcp
    sudo firewall-cmd --reload
    

    Example Output:

    [oracle@web01 ~]$ sudo firewall-cmd --permanent --add-port=8080/tcp
    success
    [oracle@web01 ~]$ sudo firewall-cmd --reload
    success

    Here is a list of how each port is used:

    • Port 8080: Used by the echo-server container
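
    To verify the port is now open, list the active rules; the output should include 8080/tcp:

    sudo firewall-cmd --list-ports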

Start the Web Application on each of the Back-End Servers

  1. (On web01, web02 and web03) Start the echo-server container on each back-end server.

    This is used only to provide a backend web application to confirm that the HAProxy service is working as expected.

    podman run -d --name web01 -p 8080:8080 docker.io/jmalloc/echo-server:latest

    Note: Adjust the value given to --name to match each host (web01, web02, or web03).

    Important: Leave these terminal sessions open; the containers they started are the targets HAProxy uses for load balancing.
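
    To confirm the echo server responds locally on each back-end host, request it directly; the response should begin with a "Request served by" line:

    curl http://localhost:8080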

Set Firewall Rules and Start the HAProxy Server

  1. (On ol-server) Update Firewall Rules to open the ports that HAProxy requires.

    sudo firewall-cmd --permanent --add-port=80/tcp
    sudo firewall-cmd --permanent --add-port=443/tcp
    sudo firewall-cmd --permanent --add-port=8404/tcp
    sudo firewall-cmd --reload
    

    Example Output:

    [oracle@ol-server ~]$ sudo firewall-cmd --permanent --add-port=80/tcp
    success
    [oracle@ol-server ~]$ sudo firewall-cmd --permanent --add-port=443/tcp
    success
    [oracle@ol-server ~]$ sudo firewall-cmd --permanent --add-port=8404/tcp
    success
    [oracle@ol-server ~]$ sudo firewall-cmd --reload
    success

    These ports are used as shown below:

    • Ports 80 and 443: Used by the HAProxy container
    • Port 8404: Used to display the HAProxy Statistics page
  2. (On ol-server) Start the HAProxy container.

    sudo podman run -d --name haproxy-run01 -v /home/oracle/haproxy:/usr/local/etc/haproxy:Z -p 80:80 -p 8404:8404 docker.io/haproxytech/haproxy-alpine:2.7

    Where:

    • -d instructs Podman to run the container in the background and print the new container ID
    • --name instructs Podman to assign this name to the container
    • -v instructs Podman to bind mount a directory on the host into a directory within the container (the :Z suffix applies an SELinux private label to the mounted content)
    • -p instructs Podman to publish a port, mapping the host port to the container port
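
    To confirm the container started and that HAProxy accepted the configuration, check the container status and its logs:

    sudo podman ps
    sudo podman logs haproxy-run01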

Test the HAProxy Server is Working

  1. (On ol-server) Test that the HAProxy service is working as expected by entering the following on the command line.

    curl http://ol-server:80

    Example Output:

    [oracle@ol-server haproxy]$ curl http://ol-server:80
    Request served by 56cd8ee13f60
    
    GET / HTTP/1.1
    
    Host: ol-server
    Accept: */*
    User-Agent: curl/7.61.1
  2. (On ol-server) Confirm the HAProxy server is implementing the Round Robin load-balancing algorithm.

    Note: Watch for the first value returned to cycle through three different values before returning to the initial value (e.g., Request served by 56cd8ee13f60).

    Repeat the curl request a further four times to confirm that all three back-end servers are visited.

    for i in {1..4}; do curl http://ol-server:80; done

    Example Output:

    Notice that the initial and last values returned are the same (Request served by 56cd8ee13f60). NOTE: the values returned in your environment will be different.

    [oracle@ol-server haproxy]$ curl http://ol-server:80
    Request served by 56cd8ee13f60
    
    GET / HTTP/1.1
    
    Host: ol-server
    Accept: */*
    User-Agent: curl/7.61.1
    [oracle@ol-server haproxy]$ curl http://ol-server:80
    Request served by 7fae7a91aeb5
    
    GET / HTTP/1.1
    
    Host: ol-server
    Accept: */*
    User-Agent: curl/7.61.1
    [oracle@ol-server haproxy]$ curl http://ol-server:80
    Request served by 208eb12ad9be
    
    GET / HTTP/1.1
    
    Host: ol-server
    Accept: */*
    User-Agent: curl/7.61.1
    [oracle@ol-server haproxy]$ curl http://ol-server:80
    Request served by 56cd8ee13f60
    
    GET / HTTP/1.1
    
    Host: ol-server
    Accept: */*
    User-Agent: curl/7.61.1
    [oracle@ol-server haproxy]$
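
    To make the round-robin rotation easier to read, the output can be filtered down to the first line of each response (-s suppresses curl's progress meter):

    for i in {1..6}; do curl -s http://ol-server:80 | head -n 1; done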

(Optional) Check the HAProxy Dashboard

Open a terminal from the Luna Desktop and configure an SSH tunnel to the deployed HAProxy instance.

  1. Establish an SSH tunnel

    ssh -L 9000:localhost:8404 oracle@<hostname or ip address>

    In the free lab environment, use the IP address of the ol-server VM, as this is where HAProxy is running.

    Example Output:

    [luna.user@lunabox Desktop]$ ssh -L 9000:localhost:8404 oracle@152.70.177.208
    Activate the web console with: systemctl enable --now cockpit.socket
  2. Open a browser on the Luna desktop and enter the URL for the HAProxy Frontend.

    http://localhost:9000

    Example Output:

    (Screenshot: HAProxy statistics dashboard)

    Notice that there are three sections shown on the browser's screen. These represent (from the top down):

    • stats section - This presents information related to the performance of the HAProxy load balancer process itself.
    • myfrontend section - This presents information related to the HTTP requests that have been received.
    • webservers section - This presents information for each of the nodes defined in the haproxy.cfg file as members of the webservers section. Notice there are three nodes listed - web01, web02 and web03 - all displayed with a green background, indicating that they are all live. The statistics for each node are listed separately, with the aggregate total shown in the last row.

    This confirms that the local haproxy process is running from a Podman-based container and, as expected, forwarding incoming web requests to each of the back-end servers in turn.
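
    The stats socket declared in the global section of haproxy.cfg also exposes the HAProxy Runtime API inside the container. As a sketch, assuming the socat utility is available in the container image (it may need to be installed otherwise):

    sudo podman exec haproxy-run01 sh -c 'echo "show stat" | socat stdio /var/run/api.sock'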

Test Node Down and Up Notification Works

  1. (On web01) Test that the HAProxy console reports when web01 goes offline (DOWN) and comes back online (UP).

    podman stop web01; sleep 60; podman start web01 

    Example Output:

    HAProxy Console showing web01 down.

    (Screenshot: web01 reported as DOWN on the statistics page)

    HAProxy Console showing web01 up.

    (Screenshot: web01 reported as UP on the statistics page)
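
    The same state change can also be observed without a browser by polling the statistics page in CSV form while the container is stopped and restarted (the ;csv suffix is HAProxy's standard CSV export; the cut fields shown here - proxy name, server name, and status - are illustrative):

    watch -n 5 'curl -s "http://ol-server:8404/;csv" | cut -d, -f1,2,18'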

Next Steps

This completes the lab, which demonstrated how to install and run an HAProxy instance on Podman. However, HAProxy has many more features and capabilities that were outside the scope of this lab, such as those shown below.

  • Layer 4 (TCP) and 7 (HTTP) load balancing (see the sketch following this list)
  • PROXY protocol support
  • SSL/TLS termination
  • Content switching/inspection
  • Detailed logging
  • URL rewriting
  • Caching
  • Debugging and tracing facilities
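
As a small illustration of the Layer 4 (TCP) mode listed above, a TCP front end and back end might be declared as follows (a sketch only; the service name, host names, and port are hypothetical):

    frontend tcp_in
      # Layer 4 mode: forward raw TCP connections instead of parsing HTTP
      mode tcp
      bind :5432
      default_backend tcp_servers

    backend tcp_servers
      mode tcp
      balance roundrobin
      # Hypothetical database hosts; in tcp mode 'check' performs TCP connect checks
      server db01 db01:5432 check
      server db02 db02:5432 check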

More details can be found in the upstream HAProxy Documentation.

In the meantime, many thanks for taking the time to try this lab.
