Deploy OpenShift Virtualization on IBM Cloud Bare Metal nodes
This article describes how to deploy OpenShift Virtualization on IBM Cloud Bare Metal nodes. Please note this is a Technology Preview in OpenShift Virtualization versions 4.10 to 4.13.
Introduction
This document describes the process required to deploy a functional, standard OpenShift Container Platform cluster in IBM Cloud on top of Bare Metal Servers, using Assisted Installer.
Prerequisites
- Account in IBM Cloud with permissions to order and operate Bare Metal servers.
- IBM Cloud SSL VPN user, to access server’s SuperMicro IPMI interface.
Overview
The OCP cluster deployed in the IBM Cloud environment consists of 6 bare metal servers: 3 masters and 3 workers, all of which are connected to a private network. In addition, a virtual machine is required for bootstrapping operations and for acting as a Samba server, DHCP server, gateway, and load balancer. Lastly, valid DNS A records should be created for the cluster's name and subdomains so it can be accessed from outside the environment.
Procedure
- Create a new virtual server instance in the IBM Cloud location of your choice. This instance will serve as the bastion machine, a support machine used to run the installation and provide services for the environment.
- https://cloud.ibm.com/gen1/infrastructure/provision/vs
- Select CentOS 8.x for the Operating System
- Add your SSH RSA public key
- Type of virtual server: Public
- All other settings can remain at their defaults
- Once created, check which private VLAN and subnet it was assigned to at https://cloud.ibm.com/classic/network/vlans.
- Order 6 bare metal servers:
- https://cloud.ibm.com/gen1/infrastructure/provision/bm
- Domain: enter a valid subdomain you can add records to
- Quantity: 6
- Location: same as the bastion VM
- Storage disks: RAID 1 setup. This is the default and the only option for hourly billing.
- Network interface:
- Private Only
- Private VLAN: IMPORTANT — must be the same VLAN as the bastion VM
- Once all the bare metal servers have been provisioned and are ready at https://cloud.ibm.com/gen1/infrastructure/devices, rename them to master-[0,1,2].<subdomain_name> and worker-[0,1,2].<subdomain_name>
- Install and configure a DHCP server on the bastion machine:
- Configure the bastion machine as the default gateway
- Use this configuration for /etc/dhcp/dhcpd.conf (please replace the domain name and IP addresses to match your environment):

```
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp-server/dhcpd.conf.example
# see dhcpd.conf(5) man page

# Set DNS name and DNS server's IP address or hostname
option domain-name "bm.ibm.cluster.example.com";
option domain-name-servers 8.8.8.8;

# Declare this DHCP server authoritative
authoritative;

# The default DHCP lease time
default-lease-time 600;

# Set the maximum lease time
max-lease-time 7200;

# Set network address, subnet mask and gateway
subnet 10.60.128.0 netmask 255.255.255.192 {
    # Range of IP addresses to allocate
    range dynamic-bootp 10.60.128.10 10.60.128.35;
    # Provide broadcast address
    option broadcast-address 10.60.128.63;
    # Set default gateway
    option routers 10.60.128.38;
}
```

- Execute systemctl restart dhcpd on the bastion machine
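If your subnet differs from the example, the subnet stanza can be generated from a handful of variables before being pasted into /etc/dhcp/dhcpd.conf. A minimal sketch — the values below are the example network from this article, and the variable names are illustrative:

```shell
# Hypothetical helper: emit a dhcpd.conf subnet stanza from shell
# variables. Replace the values with your own private network.
NETWORK="10.60.128.0"
NETMASK="255.255.255.192"
RANGE_START="10.60.128.10"
RANGE_END="10.60.128.35"
BROADCAST="10.60.128.63"
GATEWAY="10.60.128.38"

# Build the stanza in a variable so it can be reviewed before use.
STANZA=$(cat <<EOF
subnet ${NETWORK} netmask ${NETMASK} {
    range dynamic-bootp ${RANGE_START} ${RANGE_END};
    option broadcast-address ${BROADCAST};
    option routers ${GATEWAY};
}
EOF
)
echo "$STANZA"
```

After writing the stanza into /etc/dhcp/dhcpd.conf, running dhcpd -t -cf /etc/dhcp/dhcpd.conf syntax-checks the file before you restart the service.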
- Enable IP forwarding on the bastion:
- Execute sysctl -w net.ipv4.ip_forward=1
- Persist the setting across reboots by adding net.ipv4.ip_forward = 1 to /etc/sysctl.conf, then verify with sysctl -p /etc/sysctl.conf
- Restart the network service: service network restart
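A quick way to confirm the setting took effect is to read it back from the kernel; a small sketch:

```shell
# Read the current IPv4 forwarding state back from the kernel.
# "1" means the bastion will route packets between interfaces.
FORWARD_STATE=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward = ${FORWARD_STATE}"
```

Note that sysctl -w alone does not survive a reboot, which is why the setting also belongs in /etc/sysctl.conf.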
- Enable NAT on the bastion:
- Start and enable firewalld if it is not running already (you can verify with firewall-cmd --state):

```
systemctl enable firewalld
systemctl start firewalld
```

- Add the NAT (masquerade) rule; you can verify it afterwards with firewall-cmd --query-masquerade:

```
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
```
- Log in to the Assisted Installer service and create a new cluster:
- Cluster name: the name that will be used to identify the cluster under the base domain
- Base domain: the same subdomain used when ordering the bare metal servers
- Click Next
- Click "Generate Discovery ISO" and provide your SSH RSA public key. This will be used later to connect to the cluster's nodes.
- Copy and save the wget command for downloading the ISO file from S3.
- Set up a Samba server on the bastion machine:
- Install and enable smb:

```
dnf install samba
systemctl enable smb --now
```

- Open firewall rules (use the zone that is active on your system; on a stock CentOS 8 install this is typically public):

```
firewall-cmd --permanent --zone=public --add-service=samba
firewall-cmd --reload
```

- Create a password for the root user:

```
sudo smbpasswd -a root
```

- Create a share directory:

```
mkdir share
cd share/
```

- In the share directory, download the ISO file from the Assisted Installer using the wget command saved earlier.
- Use the following configuration for /etc/samba/smb.conf:

```
# See smb.conf.example for a more detailed config file or
# read the smb.conf manpage.
# Run 'testparm' to verify the config is correct after
# you modified it.

[global]
	log level = 3
	workgroup = SAMBA
	security = user
	passdb backend = tdbsam
	printing = cups
	printcap name = cups
	load printers = yes
	cups options = raw
	server min protocol = NT1
	ntlm auth = yes

[share]
	comment = ISO Files
	path = /root/share
	browseable = yes
	public = no
	read only = no
	directory mode = 0555
	valid users = root
```

- Restart the SMB service and verify it is running and active:

```
systemctl restart smb
systemctl status smb
```
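Before pointing the IPMI virtual media at the share, it is worth confirming that the ISO actually landed in the share directory and is non-empty. A small sketch — the check_iso helper and the path in the example are illustrative:

```shell
# Hypothetical helper: confirm a discovery ISO exists and is non-empty
# before mounting it over SMB from the IPMI console.
check_iso() {
    iso_path="$1"
    # -s is true only if the file exists and has a size greater than zero
    if [ -s "$iso_path" ]; then
        echo "ok"
    else
        echo "missing"
    fi
}

# Example (adjust the file name to the ISO you downloaded):
# check_iso /root/share/discovery_image.iso
```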
- Set up SSL VPN access to IBM Cloud:
- Follow the instructions at https://cloud.ibm.com/docs/iaas-vpn?topic=iaas-vpn-getting-started
- Download the MotionPro SSL VPN client
- Connect to the relevant IBM Cloud endpoint: https://www.ibm.com/cloud/vpn-access
- Connect using the following command:

```
sudo MotionPro --host ${VPN_ENDPOINT} --user ${SSL_VPN_USERNAME} --passwd ${SSL_VPN_PASSWORD}
```

- NOTE: Once connected to the IBM Cloud SSL VPN, you will lose access to the Red Hat network, because both VPNs use the private 10.0.0.0/8 address scope.
- Open the IPMI console for each of the bare metal servers.
- The IP address and credentials for the IPMI can be found under the "Remote management" section for each server.
- For each bare metal server, open the page for mounting an ISO file in the IPMI console:
- Virtual Media -> CD-ROM Image
- Share host: the private IP of the bastion machine
- Path to image: \share\${ISO_FILENAME}
- User: root
- Password: the password you set for root with smbpasswd
- Click Save and Mount
- Verify that one of the slots has an ISO mounted.
- Restart all bare metal servers:
- Remote Control -> Power Control -> Reset Server -> Perform Action
- Go back to the Assisted Installer page.
- At this point you can select whether "OpenShift Virtualization" and/or "OpenShift Container Storage" should be deployed on the resulting cluster. To add them to the deployment, simply select the checkbox(es).
- The hosts should start appearing in the table.
- Select a role for each host: 3 masters (control plane) and 3 worker nodes.
- Wait for all nodes to become ready and click "Next".
- On the next page, select "Cluster Managed Networking" and check the box to obtain the API VIP and Ingress VIP from DHCP (or set them statically), then click "Install".
- At some point, the Assisted Installer UI will ask you to disconnect the media from the CD-ROM. Go back to the IPMI console of each server, Virtual Media -> CD-ROM Image, and click "Unmount". Then reboot the server.
- Wait for the installation to complete, download the kubeconfig file, and save the kubeadmin password.
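Once the kubeconfig is downloaded, a quick sanity check is to pull the API server URL out of it before logging in. A sketch — the show_api_server helper is illustrative, and it relies on the standard kubeconfig YAML layout:

```shell
# Hypothetical helper: print the API server URL line(s) from a
# kubeconfig file ("server: https://api...:6443").
show_api_server() {
    grep 'server:' "$1"
}

# Example usage (the file name is whatever you saved it as):
# export KUBECONFIG=$PWD/kubeconfig
# show_api_server "$KUBECONFIG"
# oc get nodes
```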
- HAProxy:
- On the bastion machine, install and configure HAProxy to receive requests from the Internet and forward them to the API and Ingress VIPs of the cluster internally.
- Use the following configuration (please replace the IP addresses to match your environment):

```
#---------------------------------------------------------------------
# Example configuration. See the full configuration options online:
#   https://www.haproxy.org/download/1.8/doc/configuration.txt
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    #ssl-default-bind-ciphers PROFILE=SYSTEM
    #ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontends which proxy to the backends
#---------------------------------------------------------------------
frontend api
    bind 108.168.136.75:6443
    default_backend controlplaneapi

frontend apiinternal
    bind 108.168.136.75:22623
    default_backend controlplaneapiinternal

frontend secure
    bind 108.168.136.75:443
    default_backend secure

frontend insecure
    bind 108.168.136.75:80
    default_backend insecure

#---------------------------------------------------------------------
# static backends
#---------------------------------------------------------------------
backend controlplaneapi
    balance source
    server api 10.60.128.30:6443 check

backend controlplaneapiinternal
    balance source
    server api 10.60.128.30:22623 check

backend secure
    balance source
    server ingress 10.60.128.34:443 check

backend insecure
    balance source
    server ingress 10.60.128.34:80 check
```
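The four frontend/backend pairs all follow one pattern: the bastion's public IP and a port on the front, a cluster VIP and the same port on the back. If you prefer to generate those sections rather than edit them by hand, a sketch — the pair helper is hypothetical, the addresses are this article's examples, and the generated sections reuse one name per frontend/backend pair rather than the longer names in the config above:

```shell
# Hypothetical generator for HAProxy frontend/backend pairs.
# Replace these with your bastion public IP and cluster VIPs.
PUBLIC_IP="108.168.136.75"
API_VIP="10.60.128.30"
INGRESS_VIP="10.60.128.34"

pair() {
    # $1 = section name, $2 = server label, $3 = target IP, $4 = port
    cat <<EOF
frontend $1
    bind ${PUBLIC_IP}:$4
    default_backend $1

backend $1
    balance source
    server $2 $3:$4 check

EOF
}

CONFIG="$(pair api api "${API_VIP}" 6443
pair apiinternal api "${API_VIP}" 22623
pair secure ingress "${INGRESS_VIP}" 443
pair insecure ingress "${INGRESS_VIP}" 80)"
echo "$CONFIG"
```

Append the output after the global and defaults sections of /etc/haproxy/haproxy.cfg; haproxy -c -f /etc/haproxy/haproxy.cfg validates the result before reloading the service.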
- DNS
- Configure two A records for the subdomain, publicly resolvable over the Internet:

```
<public ip of bastion>    api.<cluster_name>.<cluster_domain>
<public ip of bastion>    *.apps.<cluster_name>.<cluster_domain>
```
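Once the records have propagated, resolution can be checked from any machine with getent. A sketch — the check_record helper is illustrative, and the hostnames in the example are placeholders for your own cluster domain:

```shell
# Hypothetical helper: report whether a DNS name currently resolves.
check_record() {
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "resolves"
    else
        echo "missing"
    fi
}

# Example:
# check_record api.<cluster_name>.<cluster_domain>
# check_record console-openshift-console.apps.<cluster_name>.<cluster_domain>
```

Checking a concrete hostname under *.apps (such as the console route) is the simplest way to confirm the wildcard record works, since wildcards cannot be queried directly by most resolvers.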
Now you can reach the cluster using the kubeconfig file and the console link on the final page of the Assisted Installer.