How to configure fence agent 'fence_xvm' in RHEL cluster
Environment
- Red Hat Enterprise Linux Server 6 (with the High Availability Add-On)
- Red Hat Enterprise Linux 7, 8, or 9 (with the High Availability Add-On)
Issue
- How to configure 'fence_xvm' in a RHEL cluster with rgmanager.
- How to configure the stonith agent 'fence_xvm' in a RHEL cluster with pacemaker.
- How to configure stonith/fencing with the pcs command for cluster nodes that are KVM guests.
Resolution
- Prerequisite
- RHEL 7, 8 or 9
- RHEL 6 with rgmanager
- When cluster nodes are KVM guests and are on different KVM hosts
Prerequisite
fence_virtd must be set up on the KVM host so that fence_xvm can be configured on the virtual machines; fence_virtd is a host daemon designed to route fencing requests to virtual machines.
On the KVM physical host system, use the following steps to create and configure fence_virt.conf:
- Install the following packages:
# yum install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast fence-virtd-serial
- Create fence key and copy it to all the cluster nodes:
# mkdir -p /etc/cluster
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
# scp /etc/cluster/fence_xvm.key nodeX.xvmcluster.com:/etc/cluster/
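With several cluster nodes, the copy step can be scripted. A minimal sketch, assuming hypothetical node names (substitute your cluster nodes' hostnames, and drop the echo to actually copy):

```shell
#!/usr/bin/env bash
# Hypothetical node list -- replace with your cluster nodes' hostnames.
NODES="node1.xvmcluster.com node2.xvmcluster.com"

for node in $NODES; do
    # Dry run: print the copy command; remove `echo` to run it for real.
    echo scp /etc/cluster/fence_xvm.key "${node}:/etc/cluster/"
done
```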
- Open the firewall for the service:
- For RHEL 6 this can be done by running the following commands:
# iptables -I INPUT -m state --state NEW -p tcp --dport 1229 -j ACCEPT
# service iptables save ; service iptables restart
- For RHEL 7 or later (open the port in the zone where the virtual network is present):
# firewall-cmd --permanent --add-port=1229/tcp --zone=libvirt
# firewall-cmd --reload
- Use the fence_virtd -c command to create the /etc/fence_virt.conf file.
The example below lists the values to be used with the above command:
# fence_virtd -c
Parsing of /etc/fence_virt.conf failed.
Start from scratch [y/N]? y
Module search path [/usr/lib64/fence-virt]:
Listener module [multicast]:
Multicast IP Address [225.0.0.12]:
Multicast IP Port [1229]:
Interface [none]: br0 <---- Interface used for communication between the cluster nodes.
Key File [/etc/cluster/fence_xvm.key]:
Backend module [libvirt]:
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
interface = "br0";
port = "1229";
address = "225.0.0.12";
family = "ipv4";
}
}
fence_virtd {
backend = "libvirt";
listener = "multicast";
module_path = "/usr/lib64/fence-virt";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
- Start the fence_virtd service:
- For RHEL 6
# service fence_virtd restart
# chkconfig fence_virtd on
- For RHEL 7 and later
# systemctl restart fence_virtd
# systemctl enable fence_virtd
On the virtual machines being used as cluster nodes, follow the steps below:
- Ensure the fence-virt package is installed on each cluster node:
[root@node1 ~]# rpm -q fence-virt
fence-virt-4.10.0-18.el9.x86_64
[root@node1 ~]# rpm -ql fence-virt
/usr/lib/.build-id
/usr/lib/.build-id/b0
/usr/lib/.build-id/b0/ef5351dad82a7329c7ff30ef27091cda73205c
/usr/sbin/fence_virt
/usr/sbin/fence_xvm
/usr/share/doc/fence-virt
/usr/share/doc/fence-virt/README
/usr/share/doc/fence-virt/TODO
/usr/share/doc/fence-virt/architecture.txt
/usr/share/doc/fence-virt/fence_virt.txt
/usr/share/man/man8/fence_virt.8.gz
/usr/share/man/man8/fence_xvm.8.gz
- Enable the required port 1229/tcp on the hypervisor as well as on each cluster node. Note: on RHEL 7 or later, ensure port 1229 is open for both fence_virt and libvirt in the firewalld configuration.
- For RHEL 6 this can be done by running the following commands:
# iptables -I INPUT -m state --state NEW -p tcp --dport 1229 -j ACCEPT
# service iptables save ; service iptables restart
- For RHEL 7 or later:
# firewall-cmd --permanent --add-port=1229/tcp
# firewall-cmd --reload
- Using the same syntax as above, add port 1229 for UDP on the hypervisor. For example, on RHEL 7 or later:
# firewall-cmd --permanent --add-port=1229/udp --zone=libvirt
# firewall-cmd --reload
- For the fencing to be successful, the command below should succeed on the host as well as on the cluster nodes. It lists all the virtual machine names (cluster nodes) that will be used when configuring fence_xvm further.
[root@node1 ~]# fence_xvm -o list
RHEL6-pcs1 4f604c92-3d74-3f04-4111-08659ad56308 on
RHEL6-pcs2 bafbe890-de50-2c09-62fa-f747217d8527 on
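Once the listing works, the fencing path itself can be exercised manually. A cautious sketch: the domain name is taken from the listing above (substitute your own), and the echo makes this a dry run; removing the echo will really reboot that VM.

```shell
#!/usr/bin/env bash
# Domain name as shown by `fence_xvm -o list` (substitute your own).
VM="RHEL6-pcs2"

# Dry run: print the fence command. Removing `echo` sends a reboot
# fencing request for the VM through fence_virtd on the KVM host.
echo fence_xvm -o reboot -H "$VM"
```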
- Ensure the contents of the /etc/hosts file on the cluster nodes are correct, so that every cluster node name resolves consistently on each node.
Configure the fence_xvm fence agent
RHEL 6 with rgmanager
Ensure everything is configured as per the Prerequisite section above.
Include the fencedevice line in /etc/cluster/cluster.conf. The agent parameter will be fence_xvm, and port will be whatever fence_xvm -o list returned in the first column.
Attribute one of the devices to each of the nodes in their respective clusternode definition.
If you create only one fence device (one entry in fencedevices), then include the port parameter inside the clusternode definition.
The resulting fence device line in the clusternode definition would be: <device name="fencedevicename" port="vmname">
Feel free to add any parameter specific to the use of the fence device for that VM. In a two-node cluster without a quorum disk, you might want to make use of the delay parameter for one of the nodes.
<clusternodes>
<clusternode name="node1" nodeid="1" votes="1">
<fence>
<method name="xvmfencing">
<device delay="5" name="fencedev1"/>
</method>
</fence>
</clusternode>
<clusternode name="node2" nodeid="2" votes="1">
<fence>
<method name="xvmfencing">
<device name="fencedev2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<fencedevices>
<fencedevice agent="fence_xvm" name="fencedev1" port="rh68a"/>
<fencedevice agent="fence_xvm" name="fencedev2" port="rh68b"/>
</fencedevices>
Increase the config_version number in the <cluster> line by 1 and propagate the changes in cluster.conf, for example with:
# ccs_config_validate
# cman_tool version -r
Use fence_check to verify the fence agent is correctly configured; -vv increases verbosity. Note that this check will only work on the fence master (typically the lowest nodeid). If run on a node that is not currently the cluster's fence master, fence_check will return Unable to perform fence_check: node is not fence master; that is expected. Below is an example of a check performed on a fence master.
[root@rh68a ~]# fence_check -vv
fence_check run at Mon Dec 26 16:31:14 CET 2016 pid: 4622
Checking if cman is running: running
Checking if node is quorate: quorate
Checking if node is in fence domain: yes
Checking if node is fence master: this node is fence master
Checking if real fencing is in progress: no fencing in progress
Get node list: node1 node2
Testing node1 fencing
Checking if cman is running: running
Checking if node is quorate: quorate
Checking if node is in fence domain: yes
Checking if node is fence master: this node is fence master
Checking if real fencing is in progress: no fencing in progress
Checking how many fencing methods are configured for node node1
Found 1 method(s) to test for node node1
Testing node1 method 1 status
Testing node1 method 1: success
Testing node2 fencing
Checking if cman is running: running
Checking if node is quorate: quorate
Checking if node is in fence domain: yes
Checking if node is fence master: this node is fence master
Checking if real fencing is in progress: no fencing in progress
Checking how many fencing methods are configured for node node2
Found 1 method(s) to test for node node2
Testing node2 method 1 status
Testing node2 method 1: success
cleanup: 0
[root@rh68a ~]#
RHEL 6, 7, 8 or 9 with pacemaker
Ensure everything is configured as per the Prerequisite section above.
Create the stonith device using the command syntax below:
# pcs stonith create xvmfence fence_xvm key_file=/etc/cluster/fence_xvm.key
Note: If the virtual machine names and cluster node names are different, include the pcmk_host_map parameter to map the node names to the virtual machine names.
# pcs stonith create xvmfence fence_xvm pcmk_host_map="node1.xvmcluster.com:RHEL6-pcs1 node2.xvmcluster.com:RHEL6-pcs2" key_file=/etc/cluster/fence_xvm.key
Note: For a 2-node cluster consider setting the pcmk_delay_max attribute to prevent fence race scenarios.
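The pcmk_host_map value is a space-separated list of nodename:vmname pairs. As an illustration only (not part of the fence agent itself), a small shell sketch of how such a map resolves a cluster node name to its VM name:

```shell
#!/usr/bin/env bash
# Example map, same format pcmk_host_map uses: "nodename:vmname" pairs.
MAP="node1.xvmcluster.com:RHEL6-pcs1 node2.xvmcluster.com:RHEL6-pcs2"

# Return the VM (libvirt domain) name for a given cluster node name.
lookup_vm() {
    local node="$1" pair
    for pair in $MAP; do
        if [ "${pair%%:*}" = "$node" ]; then
            echo "${pair#*:}"
            return 0
        fi
    done
    return 1
}

lookup_vm node2.xvmcluster.com   # -> RHEL6-pcs2
```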
The overall configuration will look similar to the output below.
[root@node1 ~]# pcs stonith show --full
Resource: xvmfence (class=stonith type=fence_xvm)
Attributes: pcmk_host_map="node1.xvmcluster.com:RHEL6-pcs1 node2.xvmcluster.com:RHEL6-pcs2" action=reboot key_file=/etc/cluster/fence_xvm.key
Operations: monitor interval=60s (xvmfence-monitor-interval-60s)
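To verify the stonith device end to end, pacemaker can be asked to fence a node with pcs stonith fence. A dry-run sketch (the node name is the hypothetical one from the map above; removing the echo will actually reboot that node):

```shell
#!/usr/bin/env bash
# Hypothetical node name from the pcmk_host_map example above.
NODE="node2.xvmcluster.com"

# Dry run: print the command. Without `echo`, pacemaker fences
# (reboots) the node through the xvmfence stonith device.
echo pcs stonith fence "$NODE"
```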
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.