How can I make my highly available resources dependent upon clone resources in RHEL 7 and RHEL 8 with pacemaker?
Environment
- Red Hat Enterprise Linux (RHEL) 7, 8 with the High Availability Add-On (pacemaker)
- One or more resources that must run on multiple nodes in the cluster simultaneously
Issue
- I had to create some clone resources to be started throughout the cluster, and I want to make sure my other resources wait for them to start before they themselves start.
- I have controld and clvm resources starting throughout my cluster, as well as a GFS2 file system, and then a resource group for my application. How should I configure the GFS2 file system so that it starts throughout the cluster, while the application depends on it and waits for it to start?
- Can clone resources be part of a resource group?
Resolution
Clone resources in a High Availability pacemaker cluster are those that can run on multiple nodes, usually on all of them, simultaneously. This can be useful for starting daemons like dlm_controld (via a controld resource), or clvmd and cmirrord (via a clvm resource), that are needed by other highly available or load-balanced resources.
However, clones cannot be members of resource groups. As such, special considerations may be needed to create dependencies between resources and these clones that might have otherwise been achieved automatically through a resource group. There are a few different strategies for configuring such dependencies amongst managed resources.
Before creating the resources, make sure that no-quorum-policy is set to freeze, as described in the following documentation: Chapter 5. Configuring a GFS2 File System in a Cluster (Red Hat Enterprise Linux 7):
# pcs property set no-quorum-policy=freeze
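To confirm the property took effect, it can be queried back on a running cluster; a quick check, assuming the pcs command set shipped with RHEL 7/8:

```shell
# Display the current value of no-quorum-policy (should report "freeze")
pcs property show no-quorum-policy
```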
Resource Groups with Dependencies on Clones
The most common method for enabling some resources to run throughout the cluster and for others to be managed as a group in a highly available manner is to create clone sets for those resources to run on multiple nodes, create a resource group for the related set of resources, and then create constraints tying them together.
For example, if a highly available web server were to run on top of GFS2, which requires clvm and DLM, then one might create controld, clvm, and Filesystem clone resources, then a resource group containing the apache and IPaddr2 resources, with ordering and colocation constraints tying them together.
Example for RHEL7
The first step is creating the clone resources:
# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs resource create gfs2 ocf:heartbeat:Filesystem device=/dev/cluster_vg/cluster_lv directory=/gfs2share fstype=gfs2 op monitor interval=10s on-fail=fence clone interleave=true
These need constraints to start in the proper order and run on the same node:
# pcs constraint order start dlm-clone then clvmd-clone
# pcs constraint colocation add clvmd-clone with dlm-clone
# pcs constraint order start clvmd-clone then gfs2-clone
# pcs constraint colocation add gfs2-clone with clvmd-clone
And now a resource group for the web server and IP are needed:
# pcs resource create web-IP IPaddr2 ip=192.168.2.5 cidr_netmask=24
# pcs resource create httpd ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf
# pcs resource group add website web-IP httpd
And finally, this group depends on the GFS2 file system, so constraints are added for ordering and colocation:
# pcs constraint order start gfs2-clone then website
# pcs constraint colocation add website with gfs2-clone
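With all the constraints in place, one way to sanity-check the configuration on a running cluster is to list the constraints and watch the resources come up in order; a sketch:

```shell
# List all ordering and colocation constraints with their ids
pcs constraint show --full

# The clones should start on every node, then the website group on one node
pcs status resources
```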
Individual Resources Dependent Upon Clones
A similar strategy as the above section may be implemented without groups, if preferred. That is, creation of the `controld`, `clvm`, and `Filesystem` resources and their related constraints would be identical, as well as the creation of the dependent resources, but they simply would not be added to a group. To enforce the dependency, constraints could be created between the specific resources, such as between the `apache` and `IPaddr2` resources, and between the `apache` and `Filesystem` resources.
# pcs constraint order start web-IP then httpd
# pcs constraint colocation add httpd with web-IP
# pcs constraint order start gfs2-clone then httpd
# pcs constraint colocation add httpd with gfs2-clone
This method can be as effective as the other; whether resource groups are utilized is a matter of preference and convenience.
Resource Group Clones
In some cases, it may be preferable to create the entire set of resources that run on all nodes as a single group, and then simply clone that group. The group provides the ordering and colocation constraints between its members, and the clone effectively runs those resources throughout the cluster. Other groups can then depend on the cloned group if needed.
Using the same example scenario as the above sections:
# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence
# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence
# pcs resource create gfs2 Filesystem device=/dev/cluster_vg/cluster_lv directory=/gfs2share fstype=gfs2 op monitor interval=10s on-fail=fence
# pcs resource group add storage-services dlm clvmd gfs2
# pcs resource clone storage-services
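Cloning the group creates a clone resource named storage-services-clone; its per-node state can be inspected on a running cluster, for example:

```shell
# Show the configuration of the cloned group and overall cluster status
pcs resource show storage-services-clone
pcs status
```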
Now if other groups must depend on those services, constraints can be created to enforce the dependency:
# pcs constraint order start storage-services-clone then website
# pcs constraint colocation add website with storage-services-clone
For more detailed examples, see the following documentation:
- Chapter 5. Configuring a GFS2 File System in a Cluster (Red Hat Enterprise Linux 7)
- Chapter 7. GFS2 file systems in a cluster (Red Hat Enterprise Linux 8)
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.