How to pvmove a volume that utilizes lvmlockd shared activation.


Environment

  • Red Hat Enterprise Linux 8 with Resilient Storage Add-on.

Issue

  • How do I pvmove a volume that supports a GFS2 filesystem in RHEL 8?

  • How do I pvmove a volume that utilizes lvmlockd shared activation?

  • pvmove does not work on a shared VG that uses lvmlockd:

    [root@rhel8-node-1 ~]# pvmove -b -v -n cluster_lv /dev/sda /dev/sdb
      LV locked by other host: cluster_vg/cluster_lv
      pvmove in a shared VG requires exclusive lock on named LV.
    

Resolution

NOTE: This procedure requires an outage of any filesystem using the VG affected by the pvmove, because the LVM volume group must be activated exclusively on a single node.

  • Stop any filesystem resource that uses the VG containing the device to be moved:

      # pcs resource disable GFS2_fs
    
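  • Optionally, verify that the filesystem resource has stopped and that the GFS2 filesystem is no longer mounted on any node (the mount command should return no output once it is unmounted):

      # pcs status resources
      # mount | grep gfs2
    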
  • Unmanage the LVM-Activate resource that manages activation of the VG:

      # pcs resource unmanage cluster_lvm
    
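  • Optionally, confirm that the resource is now unmanaged. The resource should show the is-managed=false meta attribute, and pcs status should report it as "(unmanaged)", meaning Pacemaker will not react while you deactivate the VG by hand:

      # pcs resource config cluster_lvm
      # pcs status resources
    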
  • With the LVM-activate resource unmanaged (so Pacemaker will not react to the change), manually deactivate the VG on all nodes:

    [root@rhel8-node-1 ~]# vgchange -an cluster_vg 
      0 logical volume(s) in volume group "cluster_vg" now active
    [root@rhel8-node-2 ~]# vgchange -an cluster_vg 
      0 logical volume(s) in volume group "cluster_vg" now active
    
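  • Optionally, confirm on each node that no LVs in the VG remain active. An "a" in the fifth character of the Attr field indicates an active LV, so that position should show "-" on every node:

      # lvs -o lv_name,lv_attr cluster_vg
    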
  • Then activate the VG exclusively on the node where you will perform the pvmove:

    [root@rhel8-node-1 ~]# vgchange -aey cluster_vg 
      1 logical volume(s) in volume group "cluster_vg" now active
    
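  • Optionally, inspect the lock state tracked by lvmlockd; the lock for the LV should now be reported as held exclusively by this node:

      # lvmlockctl --info
    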
  • As a test, activation will not work on another node while the VG is active exclusively elsewhere:

    [root@rhel8-node-2 ~]# vgchange -ay cluster_vg 
      LV locked by other host: cluster_vg/cluster_lv
      Failed to lock logical volume cluster_vg/cluster_lv.
      0 logical volume(s) in volume group "cluster_vg" now active
    [root@rhel8-node-2 ~]# vgchange -aey cluster_vg 
      LV locked by other host: cluster_vg/cluster_lv
      Failed to lock logical volume cluster_vg/cluster_lv.
      0 logical volume(s) in volume group "cluster_vg" now active
    
  • In this example, we will move cluster_lv, which is on /dev/sda:

    [root@rhel8-node-2 ~]# lvs -o+devices
      LV         VG         Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
      cluster_lv cluster_vg -wi-ao---- 1020.00m                                                     /dev/sda(0)   
      root       rhel       -wi-ao----  <12.50g                                                     /dev/vda2(384)
      swap       rhel       -wi-ao----    1.50g                                                     /dev/vda2(0)  
    
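  • Before starting the move, it is worth confirming that the destination PV (/dev/sdb in this example) belongs to the same VG and has enough free space for the extents being moved:

      # pvs -o pv_name,vg_name,pv_size,pv_free
    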
  • When using pvmove, the -b and -v options are optional: they run the process in the background and provide verbose output, respectively. The --atomic option ensures that all affected LVs are moved to the destination PV, or none are if the operation is aborted:

    [root@rhel8-node-1 ~]# pvmove -b -v --atomic -n /dev/cluster_vg/cluster_lv /dev/sda /dev/sdb
      Archiving volume group "cluster_vg" metadata (seqno 28).
      Creating logical volume pvmove0
      activation/volume_list configuration setting not defined: Checking only host tags for cluster_vg/cluster_lv.
      Moving 255 extents of logical volume cluster_vg/cluster_lv.
      Creating logical volume pvmove0_mimage_0
      Creating logical volume pvmove0_mimage_1
      activation/volume_list configuration setting not defined: Checking only host tags for cluster_vg/cluster_lv.
      Creating cluster_vg-pvmove0_mimage_0
      Loading table for cluster_vg-pvmove0_mimage_0 (253:3).
      Creating cluster_vg-pvmove0_mimage_1
      Loading table for cluster_vg-pvmove0_mimage_1 (253:4).
      Creating cluster_vg-pvmove0
      Loading table for cluster_vg-pvmove0 (253:5).
      Loading table for cluster_vg-cluster_lv (253:2).
      Suspending cluster_vg-cluster_lv (253:2) with device flush
      Resuming cluster_vg-pvmove0_mimage_0 (253:3).
      Resuming cluster_vg-pvmove0_mimage_1 (253:4).
      Resuming cluster_vg-pvmove0 (253:5).
      Resuming cluster_vg-cluster_lv (253:2).
      Creating volume group backup "/etc/lvm/backup/cluster_vg" (seqno 29).
      activation/volume_list configuration setting not defined: Checking only host tags for cluster_vg/pvmove0.
      Checking progress before waiting every 15 seconds.
    
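  • While a background pvmove is running, you can watch its progress through the copy percentage of the temporary pvmove0 volume, and abort it if needed. Because --atomic was used, aborting returns all extents to the source PV:

      # lvs -a -o lv_name,copy_percent,devices cluster_vg
      # pvmove --abort
    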
  • If you run the pvmove in the background, the Devices column shows the temporary pvmove0 volume while the move is in progress, then shows the new device once the pvmove is complete:

    [root@rhel8-node-1 ~]# lvs -o+devices
      LV         VG         Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
      cluster_lv cluster_vg -wI-a----- 1020.00m                                                     pvmove0(0)    
      root       rhel       -wi-ao----  <12.50g                                                     /dev/vda2(384)
      swap       rhel       -wi-ao----    1.50g                                                     /dev/vda2(0)  
    [root@rhel8-node-1 ~]# lvs -o+devices
      LV         VG         Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
      cluster_lv cluster_vg -wi-a----- 1020.00m                                                     /dev/sdb(0)   
      root       rhel       -wi-ao----  <12.50g                                                     /dev/vda2(384)
      swap       rhel       -wi-ao----    1.50g                                                     /dev/vda2(0)  
    
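  • Optionally, confirm that the source PV no longer holds any data for the moved LV; its used space should return to zero:

      # pvs -o pv_name,pv_size,pv_free,pv_used /dev/sda
    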
  • Deactivate the exclusive activation:

    [root@rhel8-node-1 ~]# vgchange -an cluster_vg
      0 logical volume(s) in volume group "cluster_vg" now active
    
  • Return the LVM-activate resource to managed mode and clean it up:

      # pcs resource manage cluster_lvm
      # pcs resource cleanup cluster_lvm
    
  • Verify that the LVM-activate resource is started on all nodes:

    [root@rhel8-node-1 ~]# pcs status --full | grep 'LVM-activate'
          * cluster_lvm	(ocf::heartbeat:LVM-activate):	 Started rhel8-node-2.clust
          * cluster_lvm	(ocf::heartbeat:LVM-activate):	 Started rhel8-node-1.clust
    
  • Re-enable the GFS2 filesystem resource:

      [root@rhel8-node-1 ~]# pcs resource enable GFS2_fs
    
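  • Optionally, confirm that the filesystem resource has started on all nodes, mirroring the LVM-activate check above:

      # pcs status --full | grep 'GFS2_fs'
    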
  • Verify that the filesystem is mounted:

    [root@rhel8-node-1 ~]# mount | grep gfs2
    /dev/mapper/cluster_vg-cluster_lv on /mnt/cluster type gfs2 (rw,noatime)
    [root@rhel8-node-2 ~]# mount | grep gfs2
    /dev/mapper/cluster_vg-cluster_lv on /mnt/cluster type gfs2 (rw,noatime)
    

Root Cause

A logical volume in a shared LVM volume group managed by lvmlockd on RHEL 8 needs to be moved to a different physical volume. pvmove requires an exclusive lock on the named LV, which cannot be obtained while the LV is activated in shared mode on other nodes, so the VG must temporarily be activated exclusively on one node.

