Deploy OpenShift Virtualization on IBM Cloud Bare Metal nodes

This article describes how to deploy OpenShift Virtualization on IBM Cloud Bare Metal nodes. Please note this is a Technology Preview feature in OpenShift Virtualization versions 4.10 to 4.13.

Introduction

This document describes the process required to deploy a functional, standard OpenShift Container Platform cluster on Bare Metal Servers in IBM Cloud, using the Assisted Installer.

Prerequisites

  • An IBM Cloud account with permissions to order and operate Bare Metal servers.
  • An IBM Cloud SSL VPN user, to access the servers' SuperMicro IPMI interfaces.

Overview

The OCP cluster deployed in the IBM Cloud environment consists of 6 bare metal servers: 3 masters and 3 workers, all of which are connected to a private network. In addition, a virtual machine is required for bootstrapping operations and for acting as a Samba server, DHCP server, gateway and load balancer. Lastly, valid DNS "A" records should be created for the cluster's name and subdomains so the cluster can be reached from outside the environment.

Procedure

  1. Create a new virtual server instance in the desired IBM Cloud location. This will be used for the Bastion machine, a support machine that runs the installation and provides services for the environment.
  2. Once created, check at https://cloud.ibm.com/classic/network/vlans which private VLAN and subnet it was assigned to.
  3. Order 6 bare metal servers:
  4. Once all the bare metal servers have been provisioned and are ready at https://cloud.ibm.com/gen1/infrastructure/devices, rename them to master-[0,1,2].<subdomain_name> and worker-[0,1,2].<subdomain_name>.
  5. Install and configure a DHCP server on the Bastion machine:
  • Configure the Bastion machine as a default gateway

  • Use this configuration for /etc/dhcp/dhcpd.conf (replace the domain name and IP addresses to match your environment):

        #
        # DHCP Server Configuration file.
        #   see /usr/share/doc/dhcp-server/dhcpd.conf.example
        #   see dhcpd.conf(5) man page
    
        # Set DNS name and DNS server's IP address or hostname
        option domain-name	"bm.ibm.cluster.example.com";
        option domain-name-servers 	8.8.8.8;
    
        # Declare DHCP Server
        authoritative;
    
        # The default DHCP lease time
        default-lease-time 600;
    
        # Set the maximum lease time
        max-lease-time 7200;
    
        # Set Network address, subnet mask and gateway
    
        subnet 10.60.128.0 netmask 255.255.255.192 {
          # Range of IP addresses to allocate
          range dynamic-bootp 10.60.128.10 10.60.128.35;
          # Provide broadcast address
          option broadcast-address 10.60.128.63;
          # Set default gateway
          option routers 10.60.128.38;
        }
    
  • Execute systemctl restart dhcpd on the Bastion machine
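As a sanity check on the subnet stanza above, the broadcast address used there (10.60.128.63 for 10.60.128.0/26) can be derived with plain bash arithmetic:

```shell
#!/usr/bin/env bash
# Derive the broadcast address for the example 10.60.128.0/26 subnet
# (netmask 255.255.255.192) used in the dhcpd.conf above.
ip="10.60.128.0"; prefix=26

IFS=. read -r a b c d <<< "$ip"
ipnum=$(( (a << 24) | (b << 16) | (c << 8) | d ))        # IP as a 32-bit integer
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))   # /26 -> 255.255.255.192
bcast=$(( ipnum | (~mask & 0xFFFFFFFF) ))                # set all host bits

printf '%d.%d.%d.%d\n' \
  $(( bcast >> 24 & 255 )) $(( bcast >> 16 & 255 )) \
  $(( bcast >> 8 & 255 ))  $(( bcast & 255 ))            # -> 10.60.128.63
```

The DHCP range (.10–.35), the router (.38) and the broadcast address (.63) all fall inside this /26, which is what dhcpd requires.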

  6. Enable IP forwarding on the Bastion:
  • Execute sysctl -w net.ipv4.ip_forward=1
  • Verify with sysctl net.ipv4.ip_forward (the value should be 1)
  • Restart the network service: service network restart
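The sysctl -w command only changes the running kernel; to keep forwarding enabled across reboots, the flag can also be persisted. A minimal sketch (the file name 90-ip-forward.conf is an arbitrary choice; any *.conf file under /etc/sysctl.d/ works):

```shell
# Persist IP forwarding across reboots (file name is an arbitrary choice)
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-ip-forward.conf

# Re-apply all sysctl configuration files immediately
sysctl --system
```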
  7. Enable NAT on the Bastion:
  • Start and enable firewalld if it is not enabled already (you can verify with firewall-cmd --state):
    systemctl enable firewalld
    systemctl start firewalld
  • Add NAT rules:
    firewall-cmd --add-masquerade --permanent
    firewall-cmd --reload
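To confirm the masquerading rule actually took effect, firewalld can be queried directly on the running system:

```shell
# Should print "yes" once the permanent rule has been reloaded
firewall-cmd --query-masquerade

# Show the full runtime configuration of the default zone,
# including the masquerade flag
firewall-cmd --list-all
```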
  8. Log in to the Assisted Installer service and create a new cluster:
  • Cluster name: the name that will be used to identify the cluster under the base domain.
  • Base domain: the same subdomain used in step 3.
  • Click Next
  • Click "Generate Discovery ISO" and provide your SSH RSA public key. This will be used later to connect to the cluster’s nodes.
  • Copy and save the wget command for the ISO file from S3.
  9. Set up a Samba server on the Bastion machine:
  • Install and enable smb:

    dnf install samba
    systemctl enable smb --now
    
  • Open firewall rules (FedoraWorkstation is the default zone on Fedora Workstation; replace it with your system's active zone, e.g. public on RHEL):

    firewall-cmd --permanent --zone=FedoraWorkstation --add-service=samba
    firewall-cmd --reload
    
  • Create a password for the root user:

    sudo smbpasswd -a root
    
  • Create a share directory (the smb.conf below exports /root/share):

    mkdir /root/share
    cd /root/share/
    
  • In the share directory, download the discovery ISO from the Assisted Installer using the wget command saved in step 8.

  • Use the following configuration for /etc/samba/smb.conf:

        # See smb.conf.example for a more detailed config file or
        # read the smb.conf manpage.
        # Run 'testparm' to verify the config is correct after
        # you modified it.
    
        [global]
              log level = 3
              workgroup = SAMBA
              security = user

              passdb backend = tdbsam

              printing = cups
              printcap name = cups
              load printers = yes
              cups options = raw

              # Older SuperMicro IPMI firmware often supports only SMB1 with
              # NTLM authentication; drop these two lines if your BMCs
              # support SMB2 or later
              server min protocol = NT1
              ntlm auth = yes
    
        [share]
              comment = ISO Files
              path = /root/share
              browseable = yes
              public = no
              read only = no
              directory mode = 0555
              valid users = root
    
  • Restart the SMB service and verify it is running and active:

    systemctl restart smb
    systemctl status smb
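Before relying on the share from the IPMI consoles, it can be exercised locally with smbclient (assuming 10.60.128.38 is the Bastion's private IP, as in the DHCP example above):

```shell
# List the shares exported by the Bastion; prompts for the
# password set with smbpasswd (replace the IP with your Bastion's
# private address)
smbclient -L //10.60.128.38 -U root

# Connect to the share and list its contents; the discovery ISO
# should appear here
smbclient //10.60.128.38/share -U root -c 'ls'
```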
    
  10. Set up SSL VPN access to IBM Cloud.
  11. Open the IPMI console for each of the bare metal servers.
  • The IP address and credentials for the IPMI can be found under the "Remote management" section for each server.
  12. For each bare metal server, in the IPMI console, open the page for mounting an ISO file:
  • Virtual Media -> CD-ROM Image
  • Share host: the private IP of the Bastion machine
  • Path to image: \share\${ISO_FILENAME}
  • User: root
  • Password: the password you set in step 9.
  • Click Save and Mount
  • Verify that one of the slots has an ISO mounted.
  13. Restart all bare metal servers:
  • Remote Control -> Power Control -> Reset Server -> Perform Action
  14. Go back to the Assisted Installer page.
  • At this point you can select whether "OpenShift Virtualization" and/or "OpenShift Container Storage" should be deployed on the resulting cluster. To add them to the deployment, simply check the box(es).
  • The hosts should start appearing in the table.
  • Select a role for each host: 3 masters (control plane) and 3 worker nodes.
  • Wait for all nodes to become ready and click "Next".
  • On the next page, select "Cluster Managed Networking" and select the checkbox to get the API VIP and Ingress VIP from DHCP (or set them statically), then click "Install".
  • At some point, the Assisted Installer UI will ask you to disconnect the media from the CD-ROM. Go back to the IPMI console of each server, Virtual Media -> CD-ROM Image, and click "Unmount". Then reboot the server.
  • Wait for the installation to complete, download the kubeconfig file and save the kubeadmin password.
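Once the kubeconfig has been downloaded, a quick smoke test of the new cluster can be run from the Bastion machine (assuming the file was saved as ~/kubeconfig; adjust the path to wherever you stored it):

```shell
# Point oc at the downloaded admin kubeconfig
export KUBECONFIG=~/kubeconfig

# All six nodes should report Ready
oc get nodes

# All cluster operators should eventually be Available and not Degraded
oc get clusteroperators
```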
  15. HAProxy:
  • On the Bastion machine, install and configure HAProxy to receive requests from the Internet and forward them internally to the cluster's API and Ingress VIPs.

  • Use the following configuration (replace the IP addresses to match your environment):

        #---------------------------------------------------------------------
        # Example configuration for a possible web application.  See the
        # full configuration options online.
        #
        #   https://www.haproxy.org/download/1.8/doc/configuration.txt
        #
        #---------------------------------------------------------------------
    
        #---------------------------------------------------------------------
        # Global settings
        #---------------------------------------------------------------------
        global
          # to have these messages end up in /var/log/haproxy.log you will
          # need to:
          #
          # 1) configure syslog to accept network log events.  This is done
          #	by adding the '-r' option to the SYSLOGD_OPTIONS in
          #	/etc/sysconfig/syslog
          #
          # 2) configure local2 events to go to the /var/log/haproxy.log
          #   file. A line like the following can be added to
          #   /etc/sysconfig/syslog
          #
          #	local2.*                   	/var/log/haproxy.log
          #
          log     	127.0.0.1 local2
    
          chroot  	/var/lib/haproxy
          pidfile 	/var/run/haproxy.pid
          maxconn 	4000
          user    	haproxy
          group   	haproxy
          daemon
    
          # turn on stats unix socket
          stats socket /var/lib/haproxy/stats
    
          # utilize system-wide crypto-policies
          #ssl-default-bind-ciphers PROFILE=SYSTEM
          #ssl-default-server-ciphers PROFILE=SYSTEM
    
        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
          mode                	tcp
          log                 	global
          option              	httplog
          option              	dontlognull
          option http-server-close
          option forwardfor   	except 127.0.0.0/8
          option              	redispatch
          retries             	3
          timeout http-request	10s
          timeout queue       	1m
          timeout connect     	10s
          timeout client      	1m
          timeout server      	1m
          timeout http-keep-alive 10s
          timeout check       	10s
          maxconn             	3000
        #---------------------------------------------------------------------
        # main frontend which proxys to the backends
        #---------------------------------------------------------------------
    
        frontend api
          bind 108.168.136.75:6443
          default_backend controlplaneapi
    
        frontend apiinternal
          bind 108.168.136.75:22623
          default_backend controlplaneapiinternal
    
        frontend secure
          bind 108.168.136.75:443
          default_backend secure
    
        frontend insecure
          bind 108.168.136.75:80
          default_backend insecure
    
        #---------------------------------------------------------------------
        # static backend
        #---------------------------------------------------------------------
    
        backend controlplaneapi
          balance source
          server api 10.60.128.30:6443 check
    
        backend controlplaneapiinternal
          balance source
          server api 10.60.128.30:22623 check
    
        backend secure
          balance source
          server ingress 10.60.128.34:443 check
    
        backend insecure
          balance source
          server ingress 10.60.128.34:80 check
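Before starting HAProxy, it is worth validating the configuration and opening the frontend ports. A sketch (the SELinux boolean is only needed when SELinux is enforcing, since HAProxy binds non-standard ports like 6443 and 22623):

```shell
# Validate the configuration file syntax before (re)starting the service
haproxy -c -f /etc/haproxy/haproxy.cfg

# With SELinux enforcing, allow haproxy to connect to arbitrary backend ports
setsebool -P haproxy_connect_any on

# Open the four frontend ports used in the configuration above
for port in 6443 22623 443 80; do
  firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload

# Start HAProxy and enable it at boot
systemctl enable --now haproxy
```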
    
  16. DNS:
  • Configure two A records for the subdomain so the cluster is publicly reachable over the Internet:

    <public ip of bastion> api.<cluster_name>.<cluster_domain>
    <public ip of bastion> *.apps.<cluster_name>.<cluster_domain>
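After the records propagate, they can be verified from any machine on the Internet (substitute the real names for the placeholders; the console hostname used here is the default route OpenShift creates under *.apps):

```shell
# Both names should resolve to the Bastion's public IP
dig +short api.<cluster_name>.<cluster_domain>

# Any host under the *.apps wildcard should resolve the same way;
# the web console route is a convenient example
dig +short console-openshift-console.apps.<cluster_name>.<cluster_domain>
```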

Now you can reach the cluster using the kubeconfig and the console link on the Assisted Installer final page.
