Common Cluster Administrative Commands in Red Hat Enterprise Linux 6 and 7
CLUSTER INSTALLATION AND SETUP
| Task | rgmanager (RHEL 6) | Pacemaker (RHEL 6 and RHEL 7) |
|---|---|---|
| Available configuration tools | luci web UI ccs command | pcs command pcs web UI (RHEL 7 only) |
| Configuration files | cluster.conf Can be edited directly | corosync.conf cib.xml Do not edit cib.xml file directly |
| Installation | On all nodes: yum groupinstall 'High Availability' 'Resilient Storage' On node that runs luci: yum install luci If using clustered file systems: yum install lvm2-cluster gfs2-utils | RHEL 7 On all nodes: yum install pcs If using clustered file systems: yum install lvm2-cluster gfs2-utils RHEL 6 On all nodes: yum install pacemaker cman pcs chkconfig corosync off If using clustered file systems: yum install lvm2-cluster gfs2-utils |
| Starting and enabling cluster services | On all nodes: service ricci start chkconfig ricci on On the node that runs luci: service luci start chkconfig luci on | On all nodes: systemctl start pcsd.service systemctl enable pcsd.service |
| Authentication | On all nodes: passwd ricci Authorization done on first connection from ccs to ricci | On all nodes: passwd hacluster On one node: pcs cluster auth clusternode1 clusternode2... |
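On RHEL 7, the installation, service, and authentication steps above can be sketched end to end. This is a minimal example, not a full procedure; the node names and the hacluster password are placeholders.

```shell
# Hypothetical initial pcs setup on RHEL 7.
# Node names and the password below are placeholders.

# On all nodes: install pcs and start the pcsd daemon
yum install -y pcs
systemctl start pcsd.service
systemctl enable pcsd.service

# On all nodes: set the hacluster password (non-interactively here)
echo 'RedHat123' | passwd --stdin hacluster

# On one node: authenticate to every cluster node
pcs cluster auth node1.example.com node2.example.com \
    -u hacluster -p 'RedHat123'
```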
CLUSTER CREATION
| Task | rgmanager (RHEL 6) | Pacemaker (RHEL 6 and RHEL 7) |
|---|---|---|
| Create cluster | ccs -h host --createcluster clustername | pcs cluster setup [--start] --name clustername node1 node2 ... |
| Start cluster | ccs -h host --startall | Start on one node: pcs cluster start Start on all cluster members: pcs cluster start --all |
| Enable cluster | Automatic with --startall To prevent automatic enabling, use: ccs -h host --startall --noenable | pcs cluster enable --all |
| Stop cluster | ccs -h host --stopall | Stop one node: pcs cluster stop Stop on all cluster members: pcs cluster stop --all |
| Add node to cluster | ccs -h host --addnode node | pcs cluster node add node |
| Remove node from cluster | ccs -h host --rmnode node | pcs cluster node remove node |
| Show configured cluster nodes | ccs -h host --lsnodes | pcs cluster status |
| Show cluster configuration | ccs -h host --getconf | pcs config show |
| Sync cluster configuration | ccs -h host --sync --activate | Changes to the Pacemaker cib.xml file propagate automatically To sync corosync.conf in RHEL 7 or cluster.conf in RHEL 6: pcs cluster sync |
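Using pcs, the cluster-creation rows above chain together as follows. A hypothetical two-node cluster; the cluster and node names are placeholders.

```shell
# Create the cluster and start it on all nodes in one step
pcs cluster setup --start --name mycluster \
    node1.example.com node2.example.com

# Have cluster services start at boot on every node
pcs cluster enable --all

# Verify node membership and overall state
pcs cluster status

# Later, grow the cluster by one node
# (run pcs cluster auth for the new node first)
pcs cluster node add node3.example.com
```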
FENCING
| Task | rgmanager (RHEL 6) | Pacemaker (RHEL 6 and RHEL 7) |
|---|---|---|
| Show fence agents | ccs -h host --lsfenceopts | pcs stonith list |
| Show fence agent options | ccs -h host --lsfenceopts agentname | pcs stonith describe fenceagent |
| Create fence device | Create fence device: ccs -h host --addfencedev device_name agent=fenceagent agent_options Create fence method: ccs -h host --addmethod method node Add fence instance to method: ccs -h host --addfenceinst device_name node method options | pcs stonith create stonith_id stonith_device_type [stonith_device_options] |
| Configure backup fence device | Add second fence method to node | pcs stonith level add level node devices |
| Remove a fence device | Remove fence device: ccs -h host --rmfencedev device_name Remove fence method: ccs -h host --rmmethod method node Remove all fence instances from a method: ccs -h host --rmfenceinst device_name node method | pcs stonith delete stonith_id |
| Modify a fence device | Remove fence device then create it again with modified attributes. | pcs stonith update stonith_id stonith_device_options |
| Display configured fence devices | ccs -h host --lsfencedev | pcs stonith show |
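A pcs fencing workflow tying the rows above together. The agent name (fence_ipmilan) is a real fence agent, but the device ID, host name, IP address, and credentials are placeholders for this sketch.

```shell
# Discover available agents and a specific agent's options first
pcs stonith list
pcs stonith describe fence_ipmilan

# Create a hypothetical IPMI fence device scoped to one node
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list="node1.example.com" \
    ipaddr="10.0.0.101" login="admin" passwd="secret"

# Review, then adjust an option in place instead of recreating
pcs stonith show --full
pcs stonith update fence_node1 passwd="newsecret"
```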
CLUSTER RESOURCES AND RESOURCE GROUPS
| Task | rgmanager (RHEL 6) | Pacemaker (RHEL 6 and RHEL 7) |
|---|---|---|
| List available resource agents | ccs -h host --lsserviceopts | pcs resource list |
| List options for a specific resource | ccs -h host --lsserviceopts resourcetype | pcs resource describe resourcetype |
| Create resource | Global resource: ccs -h host --addresource resourcetype resource_options Resource local to a service (added to a service group as a subservice): ccs -h host --addsubservice servicename subservice service_options | pcs resource create resource_id resourcetype resource_options |
| Create resource groups | Create a service group, then add resources as subservices: ccs -h host --addservice servicename service_options ccs -h host --addsubservice servicename subservice service_options | pcs resource group add group_name resource_id1 [resource_id2] [...] or create the group at resource creation: pcs resource create resource_id resourcetype resource_options --group group_name |
| Create resource to run on all nodes | Create separate resource for each node; no actual "clone". | pcs resource create resource_id resourcetype resource_options --clone clone_options |
| Display configured resources | ccs -h host --getconf | pcs resource show [--full] |
| Configure resource constraints | Create failover domain: ccs -h host --addfailoverdomain failover_domain_name Add node to failover domain: ccs -h host --addfailoverdomainnode failover_domain_name node priority | Order constraints (if not using resource groups): pcs constraint order [action] resource_id1 then [action] resource_id2 Location constraints: pcs constraint location rsc prefers node[=score] pcs constraint location rsc avoids node[=score] Colocation constraints: pcs constraint colocation add source_resource with target_resource [score] |
| Modify a resource option | Remove resource and reconfigure with modified options | pcs resource update resource_id resource_options |
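The resource and group rows above combine naturally in pcs. A hypothetical web-server group: a floating IP plus Apache, kept together and ordered by group membership. The resource IDs, IP address, and node name are placeholders.

```shell
# Floating IP and Apache, placed in the same group so they
# start in order and stay on the same node
pcs resource create webip ocf:heartbeat:IPaddr2 \
    ip=192.168.0.99 cidr_netmask=24 --group webgroup
pcs resource create website ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf --group webgroup

# Prefer (but do not pin) node1 for the whole group
pcs constraint location webgroup prefers node1.example.com=50

# Change an option in place rather than recreating the resource
pcs resource update webip cidr_netmask=23
```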
WEB UI
| Task | rgmanager (RHEL 6) | Pacemaker (RHEL 6 and RHEL 7) |
|---|---|---|
| URL | https://node:8084 | https://node:2224 |
| Login | Login as root or as authorized luci user | Login as user hacluster or authorized pcsd user |
| Required services on node running Web UI | luci, ricci | pcsd |
| Authentication | Authorization done on first connection from ccs to ricci | pcs cluster auth node1 node2 ... |
TROUBLESHOOTING
| Task | rgmanager (RHEL 6) | Pacemaker (RHEL 6 and RHEL 7) |
|---|---|---|
| Display current cluster and resource status | clustat | pcs status |
| Display current version information | clustat -v or: cman_tool version | pcs status |
| Stop and disable a cluster element | Disable service: clusvcadm -d service_name Stop service until the next member transition or re-enablement: clusvcadm -s service_name | pcs resource disable resource |
| Enable a cluster element | clusvcadm -e service_name | pcs resource enable resource |
| Freeze a cluster element (prevent status check) | clusvcadm -Z service_name | pcs resource unmanage resource |
| Unfreeze a cluster element (resume status check) | clusvcadm -U service_name | pcs resource manage resource |
| Disable cluster resource management | | Disable Pacemaker resource management: pcs property set maintenance-mode=true Re-enable Pacemaker resource management: pcs property set maintenance-mode=false |
| Disable single node | ccs -h node --stop | pcs cluster standby node |
| Re-enable node | ccs -h node --start | pcs cluster unstandby node |
| Move cluster element to another node | Move service to another node: clusvcadm -r service_name -m nodename Relocate virtual machine resources: clusvcadm -M vm_resource | pcs resource move resource_id destination_node (destination_node remains the preferred node until pcs resource clear resource_id is run) |
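A typical Pacemaker maintenance flow built from the troubleshooting rows above. The resource and node names are placeholders for this sketch.

```shell
# Check overall cluster and resource health first
pcs status

# Move a group away from its current node, then drop the
# location preference that the move created
pcs resource move webgroup node2.example.com
pcs resource clear webgroup

# Drain one node for maintenance, then bring it back
pcs cluster standby node1.example.com
pcs cluster unstandby node1.example.com

# Suspend resource management cluster-wide during invasive work
pcs property set maintenance-mode=true
pcs property set maintenance-mode=false
```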