What is LVM's filter setting and how do I configure it easily in RHEL?

Solution Verified - Updated

Environment

  • Red Hat Enterprise Linux (RHEL) 5, 6, 7, 8
  • Logical Volume Manager (LVM/LVM2)

Issue

  • I have many disks on my server, not all of which are assigned to LVM

  • The default LVM filters, filter = [ "a|.*/|" ] and global_filter = [ "a|.*/|" ], accept every device and cause extra, unnecessary I/O while LVM scans devices at boot.

  • LVM commands report I/O errors against passive paths of multipath devices.

  • LVM commands report "Found duplicate" UUID detection events.

  • The pvs and pvscan commands show messages:

    WARNING: Not using lvmetad because duplicate PVs were found.
    :
    WARNING: PV <uuid> on /dev/sdX was already found on /dev/mapper/mpatha.
    WARNING: PV <uuid> prefers device /dev/mapper/mpatha because device is used by LVM.
    

Resolution

Set up LVM filters within /etc/lvm/lvm.conf to avoid scanning unnecessary devices.

What is the LVM filter?

  • During boot, vgscan is run to examine the disks on a system and build a metadata map of the LVM partitions
    • This data is stored in /etc/lvm/cache/.cache in RHEL 7 and prior.
  • The disks that are examined during this process are selected based on a filter defined in lvm.conf
    • In RHEL 6, 7, and 8 this file is found at /etc/lvm/lvm.conf
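Before changing anything, it helps to confirm which filter lines are currently active. On RHEL 7 and later, lvmconfig devices/filter reports the effective value; a plain grep for uncommented filter lines works on any release. The sketch below demonstrates the grep against a throwaway copy of the file so it is safe to run anywhere; the temporary file contents are an example only, and on a real system you would point conf at /etc/lvm/lvm.conf:

```shell
# List the active (uncommented) filter/global_filter lines in an lvm.conf.
# Demonstrated against a throwaway copy; use conf=/etc/lvm/lvm.conf on a
# real system.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# filter = [ "a|.*/|" ]
filter = [ "a|/dev/hda2|", "a|/dev/mapper/mpath|", "r|.*|" ]
global_filter = [ "a|/dev/hda2|", "a|/dev/mapper/mpath|", "r|.*|" ]
EOF
# Commented defaults start with '#', so they are excluded by the anchor.
active=$(grep -E '^[[:space:]]*(filter|global_filter)[[:space:]]*=' "$conf")
echo "$active"
rm -f "$conf"
```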

How do I easily set a useful, simple filter?

Use the following steps to go from the default LVM filter to a more restrictive filter:

1. Use the "simple method" script

The simple method script generates a suggested LVM filter based on the LVM devices found. Because /dev/sd* and /dev/nvme* names are not necessarily persistent across reboots, the script prefers /dev/disk/by-id links. The script below can be copied and pasted into the terminal as shown:

filter="" ; while read -ra line; do
    if [[ "${line[1]}" = "lvm2" ]] && [[ -n "${line[2]}" ]]; then
        if [[ "${line[0]}" == "/dev/sd"* ]] || [[ "${line[0]}" == "/dev/nvme"* ]]; then
            while read -ra line2; do
                if [[ "${line[0]}" == "$(readlink -f "$line2")" ]]; then
                    filter+="\"a|$line2|\", "
                    break
                fi
            done <<< "$(ls -U /dev/disk/by-id/{scsi-[23]*,scsi-*,nvme-*} 2>/dev/null)"
        else
            filter+="\"a|${line[0]}|\", "
        fi
        if multipath -c "${line[0]}" &>/dev/null; then
            printf "%s %s\n" "Warning: ${line[0]} is a valid multipath device " \
                             "path. Verify mappings and filter."
        fi
    fi
done <<< "$(pvs -a --config 'devices { global_filter = [ "a|.*|" ] filter = [ "a|.*|" ] }' \
    --noheadings -o pv_name,fmt,pv_uuid)" ; filter+="\"r|.*|\" ]" ; printf "%s\n%s\n%s\n" \
    "Suggested filter lines for /etc/lvm/lvm.conf:" "filter = [ $filter" "global_filter = [ $filter"
  • Example output:

      Suggested filter lines for /etc/lvm/lvm.conf:
      filter = [ "a|/dev/hda2|", "a|/dev/mapper/mpath0|", "a|/dev/mapper/mpath1|", "r|.*|" ]
      global_filter = [ "a|/dev/hda2|", "a|/dev/mapper/mpath0|", "a|/dev/mapper/mpath1|", "r|.*|" ]
    

2. Adjust the suggested filter to meet your needs and add it to lvm.conf

  • You may be able to simplify the suggested filter and combine devices, for example

      "a|/dev/mapper/mpath0|", "a|/dev/mapper/mpath1|"
    
  • May be shortened to

      "a|/dev/mapper/mpath[01]|"
    
  • You may want to add or modify the suggested filter based on expected future storage additions.

  • If you are too restrictive in the filter, future storage additions will require a change to the filter line.

    • For example, if you anticipate in the future you may add more /dev/mapper/mpath* devices, filtering on the specific device names from the 'pvs' list may be too restrictive.

    • From the output, we would build the following filter:

        filter = [ "a|/dev/hda2|", "a|/dev/mapper/mpath|", "r|.*|" ]
        global_filter = [ "a|/dev/hda2|", "a|/dev/mapper/mpath|", "r|.*|" ]
      
  • If you know the SCSI ID of the device, you can use it. For example, if the SCSI ID is 36001405a0fade84962547bca5175f033, you can use:

      filter = [ "a|36001405a0fade84962547bca5175f033|", "a|/dev/mapper/mpath|", "r|.*|" ]
      global_filter = [ "a|36001405a0fade84962547bca5175f033|", "a|/dev/mapper/mpath|", "r|.*|" ]
    

    (You can obtain the SCSI ID of a device with the command: /lib/udev/scsi_id --whitelisted /dev/DEVICE)

  • Find the following lines in /etc/lvm/lvm.conf and add the new filter settings just after them (or replace any existing filter). Strictly speaking, the placement within the file does not matter, but keeping filter and global_filter next to their commented defaults makes them easy to find later and avoids inadvertently defining the same option twice:

      # This configuration option has an automatic default value.
      # filter = [ "a|.*/|" ]
      >>> add/replace filter line here
    
      # This configuration option has an automatic default value.
      # global_filter = [ "a|.*/|" ]
      >>> add/replace global_filter line here
    

3. Clean up and test

  • Remove the existing /etc/lvm/cache/.cache file (if present)

  • Issue a vgscan, and verify the output from pvs, vgs, and lvs is correct.

    • In particular, if the root volume is on an LVM logical volume, make sure it is still displayed; a root volume excluded by the filter can cause a panic on boot.
  • The filter can also be tested on the fly, without modifying /etc/lvm/lvm.conf, by adding the --config argument to the LVM command. Keep in mind that this makes no permanent change to the server's configuration, so be sure to add the working filter to /etc/lvm/lvm.conf after testing. For example:

      # lvs --config 'devices{ filter =  [ "a|/dev/hda2|", "a|/dev/mapper/mpath|", "r|.*|" ] global_filter =  [ "a|/dev/hda2|", "a|/dev/mapper/mpath|", "r|.*|" ] }'
    
  • Rebuild the initramfs so that the new LVM filter is applied to the system at boot time.
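    • For example, on RHEL 6 and later the initramfs can be rebuilt with dracut (RHEL 5 uses mkinitrd instead); backing up the current image first is a sensible precaution:

        # cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
        # dracut -f /boot/initramfs-$(uname -r).img $(uname -r)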

4. Schedule a reboot

While the changes will eliminate new event messages about duplicate device paths, multipath cannot build the appropriate mpath maps until LVM releases the sdN devices it is actively using (i.e., those that are mounted). The recommended way to make that happen is to reboot the system. A reboot will also help verify that the new filter rules correct the original issue(s), if any.

If the system is running without the benefit of multipath devices for LVM (for example, many "lvm: WARNING: found device with duplicate /dev/sdabc" messages are being logged), then a reboot should be scheduled as soon as is reasonably possible. The system can continue to run on the single-path sdN devices until it is convenient to reboot. However, it runs the risk that one or more of the single-path sdN devices LVM has open will fail or lose access. If that happens, any logical volume contained on those devices becomes inaccessible, and if it is a key system logical volume, the system will likely crash at that point.


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.