How do I convert a single path boot from SAN RHEL system to multipath in RHEL 5?

Solution Unverified - Updated

Environment

  • Red Hat Enterprise Linux 5

Issue

  • There is an installed system that was recently upgraded with multiple HBAs. We now have multipath storage available and would like to reconfigure the system to take advantage of the additional paths. How can this be done?

  • The system was configured to boot from SAN. During OS installation, the root LVM volume and filesystems were created on single-path devices (/dev/sdXX). Is it possible to migrate from single-path /dev/sdXX devices to device-mapper-multipath devices (/dev/mapper/mpathX) after OS installation?

      $ lvs -a -o +devices
        LV       VG     Attr   LSize  Devices
        LogVol00 rootvg -wi-ao  6.00G /dev/sdbt2(0)    <--- root LV created on an underlying single path
        LogVol01 rootvg -wi-ao 10.00G /dev/sdbt2(352)
        LogVol02 rootvg -wi-ao  5.00G /dev/sdbt2(674)
    

Resolution

The only supported method for correcting this issue is to reinstall the system, choosing a multipath installation in the installer.

Installing manually with multipath support is covered at: How do I install Red Hat Enterprise Linux 5 to boot from SAN using device-mapper-multipath?

Installing with kickstart with multipath support is covered at: How do I configure multipath in a RHEL 5 kickstart installation?

Instructions for converting RHEL 6 systems can be found here.

Unsupported Workaround

The following information has been provided by Red Hat, but is outside the scope of the posted Service Level Agreements and support procedures. The information is provided as-is and any configuration settings or installed applications made from the information in this article could make the Operating System unsupported by Red Hat Global Support Services. The intent of this article is to provide information to accomplish the system's needs. Use of the information in this article is at the user's own risk.

There is a process that can potentially avoid a full reinstall: use rescue mode with the mpath option to generate a new initrd. The steps are:

  • Reboot into rescue mode using the mpath boot option.
boot> linux rescue mpath
  • When asked to Detect your current installation, choose 'Continue'

  • If asked to initialize any disk at this point, choose 'No'

  • Make sure all of the devices needed for root/boot are multipathed. You should be able to tell with the pvs and/or mount commands.

# lvm pvscan --config 'devices{ filter = [ "a/mapper/", "r/.*/" ] }'
# lvm vgscan --config 'devices{ filter = [ "a/mapper/", "r/.*/" ] }'
# lvm lvscan --config 'devices{ filter = [ "a/mapper/", "r/.*/" ] }'
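If the scans above pick up the root volume group, its PVs should now be reported on /dev/mapper names rather than /dev/sdXX. A hedged sketch of what to look for (the device name and sizes below are illustrative, not from any particular system):

```
# With the /dev/mapper-only filter, PVs should resolve to multipath devices:
#   PV                   VG     Fmt  Attr PSize  PFree
#   /dev/mapper/mpath0p2 rootvg lvm2 a-   21.00G    0
lvm pvs --config 'devices{ filter = [ "a/mapper/", "r/.*/" ] }'
```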
  • Change root
chroot /mnt/sysimage
  • Create the file /etc/sysconfig/mkinitrd/multipath (editing it if it already exists) with the following contents:
MULTIPATH=yes
  • Change /etc/fstab to point to your new devices. In most cases this will only apply for /boot, since root and data devices are stored on LVM Logical Volumes and thus will not change. For example, you may have an entry for /dev/sda1 on /boot in /etc/fstab, so you will need to change that to /dev/mapper/mpath0p1. You should be able to tell what multipath device /boot resides on in the output of the mount command.
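The /etc/fstab change itself is a one-line substitution. A minimal sketch against a sample file; the device names (/dev/sda1, /dev/mapper/mpath0p1) are examples only, so use the names reported by the mount command and your bindings file:

```shell
# Build a sample fstab entry for /boot (illustrative device name):
printf '/dev/sda1\t/boot\text3\tdefaults\t1 2\n' > /tmp/fstab.example

# Point the /boot entry at the multipath partition device instead:
sed -i 's|^/dev/sda1|/dev/mapper/mpath0p1|' /tmp/fstab.example

cat /tmp/fstab.example
```

On the real system you would apply the same substitution to /etc/fstab (keeping a backup copy first), not to a sample file.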

  • If you are using LVM and have a filter in /etc/lvm/lvm.conf that only includes certain devices, make sure the /dev/mapper/mpath devices are accepted first, e.g.:

filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sd.*|", "r|.*|" ]
  • Ensure the blacklist{} stanza in /etc/multipath.conf is not blacklisting your root device from being multipathed.

  • If /var is a separate filesystem from /, move the bindings file to /etc:

# mkdir /etc/multipath
# mv /var/lib/multipath/bindings /etc/multipath
  • Edit /etc/multipath.conf and specify the new bindings file location:
defaults { 
  user_friendly_names  yes
  bindings_file        "/etc/multipath/bindings"
}
  • Verify that the bindings file (/var/lib/multipath/bindings or /etc/multipath/bindings) correctly maps your multipath devices' WWIDs to friendly names like mpath0. The entry you just put in /etc/fstab corresponds to the current mappings in rescue mode, so the same mapping must be in place when the system boots and rc.sysinit attempts to mount that entry. For instance, if rescue mode called your boot device mpath0, the bindings file would contain:
# <map name> <wwid>
mpath0 1234567890
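The WWID in the bindings file can be cross-checked against the device itself. On RHEL 5, scsi_id takes a sysfs block path; a hedged sketch, where sda and mpath0 are example names:

```
# Print the WWID of an underlying path (RHEL 5 scsi_id syntax):
/sbin/scsi_id -g -u -s /block/sda

# The heading line of `multipath -ll` for the map shows the same WWID:
multipath -ll mpath0
```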
  • Rebuild the initrd image using the steps described in the following article, then reboot the system with the newly created initrd:
    How to rebuild the initial ramdisk image in Red Hat Enterprise Linux

    NOTE: When using mkinitrd prior to the RHEL 5 Update 6 release, an additional step is needed: the init script within the initrd contains the command multipath -v0 <wwid>, and that WWID must match the root device. In RHEL 5.6 and later the WWID argument was removed, but on earlier releases the initrd must be unpacked, init edited, and the image rerolled.
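    For those pre-5.6 releases, unpacking and rerolling the initrd takes only a few commands. A hedged sketch; the kernel version and image names are examples and must match your installed kernel:

```
# Unpack the generated initrd into a scratch directory:
mkdir /tmp/initrd-work && cd /tmp/initrd-work
zcat /boot/initrd-2.6.18-8.el5.img | cpio -idmv

# Edit ./init so the WWID in "multipath -v0 <wwid>" matches the root
# device, then reroll the image:
find . | cpio -o -H newc | gzip -9 > /boot/initrd-2.6.18-8.el5.mpath.img
```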

  • Create an entry in /boot/grub/grub.conf to point to the new initrd. You may choose to create a separate entry rather than overwriting the old entry, so you can easily boot into the old initrd if there's any problem.
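A new grub.conf stanza pointing at the rebuilt initrd might look like the following sketch. The kernel version and initrd name are examples; the root= value reuses the rootvg/LogVol00 layout shown in the lvs output above:

```
title Red Hat Enterprise Linux Server (2.6.18-8.el5, multipath)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/rootvg/LogVol00
        initrd /initrd-2.6.18-8.el5.mpath.img
```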

  • Reboot and select the new grub.conf entry added in the previous step.


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.