Cluster Administrative Comparison for Veritas InfoScale Availability and Red Hat Enterprise Linux with Pacemaker
This article provides an introduction to Pacemaker administration for users familiar with Veritas InfoScale Availability (formerly Veritas Cluster Server). It includes a brief comparative overview of some of the administration components of each system, and then provides a series of tables summarizing the common administrative commands used for creating and managing a Veritas InfoScale Availability Cluster and a Red Hat High Availability Add-On Cluster in Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8 using Pacemaker.
COMPONENT COMPARISON
| Component | Veritas InfoScale Availability | Pacemaker in RHEL 7 and RHEL 8 |
|---|---|---|
| Cluster communication | * Provided by LLT + GAB (Low Latency Transport + Group Membership and Atomic Broadcast) * Can utilize multiple interfaces, either dedicated/private or low-priority/public, where traffic is semi-load balanced by passing data over each interface in sequence * Target environment has two private, dedicated, non-routable network links | Provided by corosync, which uses the existing network interfaces to communicate with cluster nodes |
| Installer | Command line tool to add/remove software packages on the OS, set up the base cluster, set up cluster notification, and generate fencing configuration | * Installation through standard Red Hat installation commands * Pacemaker commands to set up the base cluster, cluster notifications, and fencing configuration |
| Resources | * Resources are specific instances of an agent * Each agent has attributes specific to the type of agent used | * Values for resource options are set for each resource at configuration * Enabled and brought online automatically by resource agent |
| Resource order and location | Configured as resource dependencies | Configured as constraints |
| Multi-node resources | Configured as parallel resources, or hybrid resources for multi-cluster. | Configured as clone resources |
| Parent/child resources | Configured as parent/child in a resource dependency | Configured as master/slave resources (promotable clone resources in RHEL 8) |
| Resource management | Service groups (SG)s, a collection of resources that move within the cluster together | Resource groups, administered as collection of resources with location and order constraints |
| Fence configuration | * SCSI3 (disk-based) fencing, based on SCSI3-PR keys * Non-SCSI3 fencing using Coordination Point Servers (CPS) for network-based fencing * Priority fencing, which uses a weight condition to determine which node or sub-cluster should survive if a fencing event occurs | * Fencing required for a supported cluster * Fencing agents configured as fence resources * I/O fencing (storage fencing) * Power fencing * Watchdog fencing |
| Quorum | No quorum if fencing is not used. | Advanced quorum options with no external node |
| Web GUI | * Java-based GUI run from the administrator's workstation, which includes simulator support * Large central management GUI * Tree-architecture display | pcsd Web UI |
CLUSTER INSTALLATION AND SETUP
| Task | Veritas InfoScale Availability | Pacemaker in RHEL 7 and RHEL 8 |
|---|---|---|
| Available configuration tools | * Command line * Veritas InfoScale Operations Manager | * pcs command * pcs web UI |
| Configuration files | * /etc/llthosts, /etc/llttab, /etc/gabtab, /etc/VRTSvcs/conf/config/main.cf, /etc/VRTSvcs/conf/config/types.cf * Can be edited directly, with a process to enable changes while the cluster is online, or with the command line or GUI | * /etc/corosync/corosync.conf * CIB (editable through pcs commands) * /etc/sysconfig/pacemaker |
| Cluster daemons | had, hashadow, resource agent processes, CmdServer | pacemakerd, crmd, lrmd, pengine, cib, attrd, stonith-ng (renamed in RHEL 8 to pacemaker-controld, pacemaker-execd, pacemaker-schedulerd, pacemaker-based, pacemaker-attrd, and pacemaker-fenced) |
| Log files | * /var/VRTSvcs/log * /var/VRTSvcs/log/engine_A.log | * /var/log/messages * /var/log/cluster/corosync.log |
| Authentication | Enables integration with NIS/AD or localized cluster users/passwords | * passwd hacluster * pcs cluster auth clusternode1 clusternode2... (RHEL 7) or pcs host auth clusternode1 clusternode2... (RHEL 8) |
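As a quick illustration of the Pacemaker side of the table above, authenticating and creating a two-node cluster might look like the following sketch. The node names and cluster name are placeholders; note that the setup syntax differs slightly between RHEL 7 and RHEL 8.

```shell
# On every node: set the same password for the hacluster user
passwd hacluster

# RHEL 7: authenticate the nodes that will form the cluster
pcs cluster auth node1.example.com node2.example.com -u hacluster

# RHEL 8: the equivalent command is pcs host auth
pcs host auth node1.example.com node2.example.com -u hacluster
```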
CLUSTER CREATION AND ADMINISTRATION
| Task | Veritas InfoScale Availability | Pacemaker in RHEL 7 and RHEL 8 |
|---|---|---|
| Create cluster | hasys -add sys | pcs cluster setup |
| Start cluster | hastart | pcs cluster start [--all] |
| Enable cluster to start on boot | | pcs cluster enable --all |
| Stop cluster | * Stop one node: hastop -local * Stop all cluster nodes: hastop -all | * Stop one node: pcs cluster stop * Stop on all cluster members: pcs cluster stop --all |
| Add node to cluster | hasys -add | pcs cluster node add |
| Remove node from cluster | hasys -delete | pcs cluster node remove |
| Show configured cluster nodes | hasys -list | pcs cluster status |
| Show cluster configuration | hasys -display | pcs config show |
| View cluster status | hastatus | pcs status |
| Sync cluster configuration | Automatic | * Automatic for pacemaker cib.xml file * To sync corosync.conf: pcs cluster sync |
| Modify cluster from CLI | * Open the configuration: haconf -makerw * Run the commands * Close the configuration: haconf -dump -makero | * For dynamic configuration, use pcs commands directly * For static configuration, use pcs -f *filename* to save configuration updates to a file, then apply them as a group later with pcs cluster cib-push *filename* * Disable individual agents to modify parameters without stopping the cluster * Cluster must be restarted for clusterwide options to take effect |
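The pcs -f workflow described above is roughly analogous to the haconf -makerw/-makero cycle in VCS: changes are staged and then committed as a unit. A hedged sketch, with placeholder resource names, IP address, and device paths:

```shell
# Dump the live CIB to a working file
pcs cluster cib my_config.xml

# Stage changes against the file instead of the live cluster
pcs -f my_config.xml resource create my_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24
pcs -f my_config.xml resource create my_fs ocf:heartbeat:Filesystem \
    device=/dev/vg0/lv0 directory=/mnt/data fstype=xfs

# Push all staged changes to the live cluster at once
pcs cluster cib-push my_config.xml
```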
CLUSTER RESOURCES AND RESOURCE GROUPS
| Task | Veritas InfoScale Availability | Pacemaker in RHEL 7 and RHEL 8 |
|---|---|---|
| List available resource agents | hatype -list | pcs resource list |
| Configure resource agent | hatype -modify | * Installed as part of package * Specified on resource creation |
| Create resource | hares -add | pcs resource create |
| Remove resource | hares -delete | pcs resource delete |
| Modify resource | hares -modify | pcs resource update |
| Create service/resource group | hagrp -add | * pcs resource group add * Or create on resource creation: pcs resource create ... --group |
| Display service/resource groups | hagrp -list | pcs resource group list |
| Modify service/resource group | hagrp -modify | * pcs resource group add * pcs resource group remove |
| Remove service/resource group | hagrp -delete | pcs resource delete |
| Create resource to run on all nodes | Create as part of parallel or hybrid service group | pcs resource create ... --clone |
| Create master/slave resource | hares -modify | pcs resource create ... --master (RHEL 7) or pcs resource create ... promotable (RHEL 8) |
| Display configured resources | hares -list | pcs resource show [--full] (RHEL 7); pcs resource status or pcs resource config (RHEL 8) |
| Modify a resource option | hares -modify or hagrp -modify | pcs resource update |
| Create dependencies | hagrp -link or hares -link | * Order constraints (if not using resource groups): pcs constraint order * Location constraints: pcs constraint location * Colocation constraints: pcs constraint colocation |
| Show dependencies | hagrp -dep | pcs constraint list --full |
| Remove dependencies | hares -unlink | pcs constraint ... remove |
| Enable a service/resource group | hagrp -enable | pcs resource enable |
| Disable a service/resource group | hagrp -disable | pcs resource disable |
| Enable resources | hagrp -enableresources | pcs resource enable |
| Disable resources | hagrp -disableresources | pcs resource disable |
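To tie the resource-group rows above together: a Pacemaker resource group is the closest analog to a VCS service group, since members of a group are implicitly colocated and started in order. A hedged sketch using placeholder names (webgroup, webip, webserver) and an example IP:

```shell
# Create resources directly into one group (analogous to a VCS service group)
pcs resource create webip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.20 cidr_netmask=24 --group webgroup
pcs resource create webserver ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf --group webgroup

# Prefer a node, loosely comparable to SystemList priorities in VCS
pcs constraint location webgroup prefers node1.example.com=50
```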
TROUBLESHOOTING
| Task | Veritas InfoScale Availability | Pacemaker in RHEL 7 and RHEL 8 |
|---|---|---|
| Display current cluster and resource status | hastatus -sum | pcs status |
| Stop the cluster while keeping the apps online | hastop -all -force | pcs property set maintenance-mode=true |
| Stop and disable a service group | hagrp -offline or hares -offline | pcs resource disable |
| Enable a service group | hagrp -online or hares -online | pcs resource enable |
| Freeze a service group | hagrp -freeze | pcs resource unmanage |
| Unfreeze a service group | hagrp -unfreeze | pcs resource manage |
| Disable single node | hasys -freeze | pcs cluster standby (pcs node standby in RHEL 8) |
| Re-enable node | hasys -unfreeze | pcs cluster unstandby (pcs node unstandby in RHEL 8) |
| Move a service group to another node | hagrp -switch | pcs resource move |
| Clear the status of a resource | hares -clear or hagrp -clear | pcs resource cleanup or pcs resource refresh |
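A common troubleshooting sequence combining several rows from the table above: moving a group and then cleaning up afterward. One caveat worth showing is that pcs resource move works by adding a temporary location constraint, which should be cleared once the move is done. Resource and node names are placeholders:

```shell
# Move the group off its current node (adds a temporary location constraint)
pcs resource move webgroup node2.example.com

# Remove the constraint created by the move so the group can fail back normally
pcs resource clear webgroup

# Clear failure history so the cluster re-evaluates the resource's state
pcs resource cleanup webgroup
```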
The content of this article as it relates to Veritas InfoScale Availability Cluster is provided on a "best effort" basis. Red Hat does not support or document Veritas InfoScale Availability Cluster, and some of the commands and capabilities described here may be inaccurate. If an issue with this document is identified, please contact Red Hat Support or leave a comment below so we can investigate.