Administrative Procedures for RHEL High Availability Clusters - Preparing Machines for Inclusion in a RHEL 7 or 8 Cluster

Contents

Overview

Introduction

This guide offers Red Hat's recommended first steps in preparing to set up a RHEL High Availability cluster. A High Availability cluster consists of several RHEL systems coordinating to manage an application, service, or set of resources. Before the cluster can be configured for that purpose, each machine that will function as a member needs some basic setup tasks completed, and this reference introduces those steps.

Target Deployments

RHEL 7 and 8 High Availability clusters

Prerequisites

  • Have a general design for the cluster membership and physical layout - including the number of nodes, their locations, the network connections the cluster will use, etc.
  • Have subscriptions from Red Hat granting access to RHEL and RHEL High Availability software

Summary of Steps

  • Prepare servers
  • Install RHEL on all servers
  • Register all servers with Red Hat Subscription Manager and enable RHEL High Availability repositories
  • Quick Start: Install all High Availability packages
  • Alternative: Install only the necessary High Availability packages for this deployment
  • Configure network
  • Define node addresses, fence-devices in /etc/hosts
  • Configure firewall for High Availability communications
  • Prepare hacluster user account
  • Enable pcsd throughout cluster
  • Authorize pcs to access nodes

Preparation of the Environment

Prepare servers

Physically prepare the machines that will serve in this cluster - rack the servers, connect cables, install virtual machines, etc.

Adhere to the layout that was chosen for your deployment - setting up the correct number of servers, deploying all in a single site or across multiple sites as needed, connecting nodes to the correct network(s) for cluster communications and application access, etc.


Install and configure RHEL on all servers

No special measures are required to enable High Availability during this stage. Install RHEL normally, making appropriate selections for these servers during the installation process.

See: RHEL 7 Installation Guide


Register all servers with Red Hat Subscription Manager and enable RHEL High Availability repositories

See: How to install High Availability and/or Resilient Storage cluster packages in Red Hat Enterprise Linux 6 or 7?.

Example:

  • Register the system with subscription-manager

    # subscription-manager register
    
  • Find the appropriate "pool" of subscriptions that contains support for RHEL High Availability for this system:

    # subscription-manager list --available --match-installed | less
    [... Scroll through and look for a subscription containing RHEL HA support, and copy the "Pool ID" ...]
    

    In that list, find the Pool ID for the subscription that contains something like "Red Hat Enterprise Linux High Availability (for RHEL Server)". The label may vary depending on the product purchased, but it should reference High Availability in some way.

    • NOTE: Entries that end with "Extended Update Support" are a special class of subscription that does not necessarily grant access to the RHEL High Availability repositories. Look for a subscription that references "High Availability" and is not qualified with "Extended Update Support" or "EUS".
  • With the Pool ID in hand, subscribe the system:

    # # Syntax: # subscription-manager attach --pool=<pool ID>
    # # Example:
    # subscription-manager attach --pool=1234567890
    
  • Enable the High Availability repository. (This may already be enabled by default, but confirming doesn't hurt.)

    # subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
    
    On RHEL 8, the repository is named differently:

    # subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
    

Quick Start: Install all High Availability packages

To install all High Availability packages that might be needed in a cluster, run:

# yum install pcs pacemaker fence-agents-all resource-agents sbd corosync-qdevice booth-site booth-core

To selectively install only the needed packages for this deployment, see the next section. If using the above quick-start command, skip the next section.


Alternative: Choose the specific packages for this deployment

These core cluster packages will always be needed, and they will pull in some dependencies:

# yum install pcs pacemaker 

If a STONITH method will be employed in this cluster, install the appropriate fence-agents package for it. The full list of packages can be produced with yum:

# yum search fence-agents

If the appropriate package is identified, install it:

# # Syntax: # yum install fence-agents-<agent>
# # Example:
# yum install fence-agents-sbd

To install all agents - for example, to examine them all and decide which is best - install fence-agents-all:

# yum install fence-agents-all

If sbd is expected to be used for fencing, install it:

# yum install sbd

If sbd with poison-pill fencing will be deployed, install the fence-agents package for it:

# yum install fence-agents-sbd

If corosync-qdevice will be used in this cluster in conjunction with a corosync-qnetd server, install the qdevice package:

# yum install corosync-qdevice

If this cluster will operate as a booth "site" in a coordinated multi-site design, install booth-core and booth-site:

# yum install booth-core booth-site

Configure network

The network layout will differ for each cluster. In general, configure at least one interface to serve as an interconnect over which the cluster nodes communicate with each other. Bonded interfaces over redundant switches and/or additional networks can increase the redundancy of the cluster's communication links.

Note: The cluster's interconnect network should be configured with a static address on every node - the dynamic nature of DHCP-assigned addresses can disrupt connectivity between nodes.

See: RHEL 7 Networking Guide

Use control-center, nm-connection-editor, nmtui, nmcli, or another preferred tool to configure the interfaces so that at least the cluster's interconnect is available and able to interact with the other hosts that will be involved in the cluster.
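As a sketch, a static interconnect address might be assigned with nmcli as follows. The connection name cluster-interconnect, the device name eth1, and the address are placeholders chosen for illustration, not values prescribed by this article:

```shell
# Sketch only: create a NetworkManager connection with a static address
# for the cluster interconnect. Device name and address are placeholders;
# substitute your deployment's values.
nmcli connection add type ethernet con-name cluster-interconnect \
    ifname eth1 ipv4.method manual ipv4.addresses 10.10.2.71/24

# Activate the connection
nmcli connection up cluster-interconnect
```

Repeat on each node with that node's own interconnect address; a dedicated interconnect network typically needs no default gateway.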


Define node addresses, fence-devices in /etc/hosts

Each node should have the addresses of all members of the cluster (including itself) defined in /etc/hosts. These entries should map the addresses over which the nodes will communicate to the names by which the nodes will be listed in the cluster configuration.

Example /etc/hosts:

## cluster interconnect network
10.10.2.71 node1.example.com node1
10.10.2.72 node2.example.com node2
10.10.2.73 node3.example.com node3

## redundant interconnect network (may not apply to all deployments)
192.168.2.71 node1-alt.cluster.pvt node1-alt
192.168.2.72 node2-alt.cluster.pvt node2-alt
192.168.2.73 node3-alt.cluster.pvt node3-alt

## cluster iLO devices for fencing
10.10.2.171 node1-ilo.example.com node1-ilo
10.10.2.172 node2-ilo.example.com node2-ilo
10.10.2.173 node3-ilo.example.com node3-ilo

In this example, the names "node1.example.com", "node2.example.com", and "node3.example.com" should be the names to use when defining the cluster membership during cluster creation (outside the scope of this document, later in the cluster-setup process). If a redundant interconnect will be used, "node1-alt.cluster.pvt", "node2-alt.cluster.pvt", "node3-alt.cluster.pvt" would be the secondary names to use when defining the cluster membership.
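The mapping above can be sanity-checked with a small script before proceeding. This is a hypothetical helper, not part of pcs or any Red Hat tool; the function name check_hosts and the node names are illustrative:

```shell
# Hypothetical helper: verify that each cluster node name appears exactly
# once in an /etc/hosts-style file. Comment lines are ignored; names are
# matched against every field after the address.
check_hosts() {
    file=$1; shift
    for name in "$@"; do
        count=$(awk -v n="$name" '!/^[[:space:]]*#/ {
            for (i = 2; i <= NF; i++) if ($i == n) c++
        } END { print c + 0 }' "$file")
        if [ "$count" -eq 1 ]; then
            echo "$name: OK"
        else
            echo "$name: found $count entries"
        fi
    done
}

# Usage (node names follow the example above):
# check_hosts /etc/hosts node1.example.com node2.example.com node3.example.com
```

A name reported with 0 entries is missing from the file; more than 1 indicates a conflicting duplicate that should be resolved before cluster creation.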


Configure firewall for High Availability communications

firewalld is enabled by default in RHEL and will block High Availability communications unless configured to allow them. Open ports according to your organization's needs.

# # Syntax: # firewall-cmd --permanent --zone=<zone> --add-service=high-availability
# # Example for typical deployments using the bond1 interface as interconnect:
# firewall-cmd --permanent --zone=internal --change-interface=bond1
# firewall-cmd --permanent --zone=internal --add-service=high-availability
# firewall-cmd --reload

Prepare hacluster user account

Installing the pcs package creates an hacluster user account. On every node, set its password to something secure, and keep the password in a safe place.

# passwd hacluster

Enable pcsd throughout cluster

On every node, enable and start pcsd:

# systemctl enable pcsd
# systemctl start pcsd

Authorize pcs to access nodes

From any node that will be used to administer the cluster, authorize pcs to all other nodes. It is usually best to do this from every node to every other node, so that any system is ready to administer the cluster if the usual one is unavailable.

NOTE: Use the node names that are mapped in /etc/hosts to the interface(s) the cluster will use to communicate. If there are multiple interfaces - as in RRP deployments - include both names for each node.

# # Run this on every node that will run `pcs` commands
# # Specify every one of the nodes, including this one
# # Syntax: # pcs cluster auth -u <user> <node> <node> [... <node>]
# 
# # Example:
# pcs cluster auth -u hacluster node1.example.com node2.example.com node3.example.com node4.example.com

Enter the hacluster password when prompted. Note that on RHEL 8, the equivalent command is "pcs host auth".

