How do I configure a fence_rhevm stonith device in a Red Hat High Availability cluster?

Solution Verified - Updated

Environment

  • Red Hat Enterprise Linux 6, 7, 8, or 9 (with the High Availability Add-On)
  • Pacemaker
  • One or more RHEV guests as cluster nodes.

Issue

  • How do I configure fence_rhevm?

Resolution

To test the fence_rhevm agent, run the fence_rhevm command as follows, replacing the values in angle brackets as appropriate. This command queries the VM's power status and does not reboot the node. Run it against each VM in the cluster.

# fence_rhevm -o status --ssl --ssl-insecure -a <rhev_manager_ip> --username=<user@domain> --password=<password> -n <vm_name> [--disable-http-filter]

For RHEV 3.0, the --ipport 8443 option may also be required.

Notes on the parameters:

  • <user> - This is the username of a RHEV user with permission to power on and off the RHEV virtual guests that are members of the cluster.
  • <domain> - This is the authentication domain associated with that user. This value is required, and it corresponds to the domain selected when that user signs in to the RHEV user interface.
  • <vm_name> - This is the cluster node's virtual machine name in RHEV.
  • --disable-http-filter - This option disables filtering of machines to which this user does not have explicit UserRole permissions. With this filter enabled, an administrator without explicit UserRole permissions will not see any machine. See Solution 3093891 for details.
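As a worked example, the status test might look like the following for a hypothetical cluster. The RHEV-M address rhevm.example.com, the user fenceuser@internal, and the VM name rhel-ha-vm1 are placeholder values for illustration, not values from this article:

```shell
# Hypothetical values -- substitute your own RHEV-M address, user, and VM name.
# "-o status" only queries the VM's power state; it does not fence anything.
fence_rhevm -o status --ssl --ssl-insecure -a rhevm.example.com \
    --username='fenceuser@internal' --password='<password>' -n rhel-ha-vm1
```

If the agent can reach RHEV-M and the credentials are valid, it prints the VM's power state (for example, Status: ON).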

If the tests are successful, create the stonith device.

# pcs stonith create rhev_fence fence_rhevm ipaddr=<RHEV_Manager_IP/hostname> ssl_insecure=1 ssl=1 login='<rhv_fencing_user@domain_name>' passwd=<password> pcmk_host_map='<pacemaker_node_name1>:<vm_name1>;<pacemaker_node_name2>:<vm_name2>' power_wait=3

For more information on the correct format for pcmk_host_map, refer to Solution 2619961.
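The mapping itself is a semicolon-separated list of <pacemaker_node_name>:<vm_name> pairs. The following shell sketch, using hypothetical node and VM names, shows how such a string breaks down:

```shell
# Hypothetical pcmk_host_map value: semicolon-separated
# <pacemaker_node_name>:<vm_name> pairs.
MAP='node1.example.com:rhel-ha-vm1;node2.example.com:rhel-ha-vm2'

# Print each Pacemaker node name alongside the RHEV VM it maps to.
echo "$MAP" | tr ';' '\n' | while IFS=: read -r node vm; do
    printf '%s -> %s\n' "$node" "$vm"
done
```

This prints one node-to-VM pair per line, which is exactly the association the stonith device uses to decide which VM to power-cycle when fencing a given cluster node.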

NOTE: fence_rhevm is a shared fence device: because the RHEV-M host manages virtual machines across multiple hypervisors, one agent instance can fence every VM in the cluster. A single stonith resource is therefore normally sufficient. In some circumstances, however, it may be necessary to create multiple individual stonith resources, all using the same shared device, in order to apply a specific configuration profile to each individual node.
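For example, if each node needed a different power_wait value, a per-node stonith resource could be created for each, all pointing at the same RHEV-M. All names below are hypothetical placeholders:

```shell
# Hypothetical per-node stonith resources sharing one RHEV-M.
# Each resource maps exactly one cluster node to its RHEV VM, so
# per-node options (here, power_wait) can differ.
pcs stonith create rhev_fence_node1 fence_rhevm ipaddr=rhevm.example.com \
    ssl=1 ssl_insecure=1 login='fenceuser@internal' passwd=<password> \
    pcmk_host_map='node1.example.com:rhel-ha-vm1' power_wait=3
pcs stonith create rhev_fence_node2 fence_rhevm ipaddr=rhevm.example.com \
    ssl=1 ssl_insecure=1 login='fenceuser@internal' passwd=<password> \
    pcmk_host_map='node2.example.com:rhel-ha-vm2' power_wait=10
```

After creating the resources, verify with pcs status that each stonith resource has started on a node other than the one it fences.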


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.