How to convert single disk EFI boot to software RAID after installation


Environment

  • Red Hat Enterprise Linux 7
  • Red Hat Enterprise Linux 8
  • EFI
  • Logical Volume Manager (LVM)
  • Software RAID (mdadm)

Issue

  • Need to convert a non-raided EFI root disk to a RAID1 mirror after installation of Red Hat Enterprise Linux 7 or 8.

Resolution

Note: Red Hat does not support this procedure, nor any procedure that involves moving /boot or / across devices. Please ensure you take proper backups before attempting this procedure.

  • This guide assumes you are utilizing the default partition scheme and the default VG name of rhel, but the steps can be modified to match any type of configuration.
  • This guide assumes that /dev/sda is your main disk, and that it needs to be in a mirror with /dev/sdb.
  • Downtime will be required for this operation, and attempting it in a live production environment will cause problems. It is recommended that you create a backup of all critical data, and verify that backup, before making any changes to the underlying storage, LVM, or filesystems, to avoid any unforeseen problems.

1 . Gather the partition information from your main disk /dev/sda.

# parted /dev/sda unit s print
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 33554432s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start     End        Size       File system  Name                  Flags
 1      2048s     411647s    409600s    fat16        EFI System Partition  boot
 2      411648s   2508799s   2097152s   xfs
 3      2508800s  33552383s  31043584s                                     lvm

2 . Using the start and end sectors, reproduce the partitioning scheme from the previous command on the new unused disk.

# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 2048s 411647s
# parted /dev/sdb mkpart primary 411648s 2508799s
# parted /dev/sdb mkpart primary 2508800s 33552383s
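
As an optional helper (a sketch, not part of the original procedure), the mkpart commands can be generated directly from the `parted ... unit s print` output instead of retyping the sector numbers by hand, which avoids transcription errors. The parted output is embedded here as sample data for illustration:

```shell
# Generate "mkpart" commands from parted's sectors listing. On a real
# system you would pipe "parted /dev/sda unit s print" in; here the
# example output from step 1 is embedded as sample data.
parted_output='
 1      2048s     411647s    409600s    fat16        EFI System Partition  boot
 2      411648s   2508799s   2097152s   xfs
 3      2508800s  33552383s  31043584s                                     lvm
'
cmds=$(printf '%s\n' "$parted_output" | awk '$1 ~ /^[0-9]+$/ {
    # field 2 is the start sector, field 3 is the end sector
    printf "parted /dev/sdb mkpart primary %s %s\n", $2, $3
}')
printf '%s\n' "$cmds"
```

Review the printed commands before running them against the new disk.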

3 . Add the RAID flag on all partitions that will be mirrored.

# parted /dev/sda set 1 raid on
# parted /dev/sda set 2 raid on
# parted /dev/sda set 3 raid on
# parted /dev/sdb set 1 raid on
# parted /dev/sdb set 2 raid on
# parted /dev/sdb set 3 raid on
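
The six commands above can also be generated in a loop; the sketch below prints them for review (remove the echo to execute them). The device names and partition numbers match this article's example layout:

```shell
# Print the "set ... raid on" command for every mirrored partition on
# both disks; drop the "echo" to actually apply the flags.
for disk in /dev/sda /dev/sdb; do
    for part in 1 2 3; do
        echo parted "$disk" set "$part" raid on
    done
done
```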

4 . Create a degraded RAID device on the first partition of the new disk. This will be used for your EFI bootloader partition (/boot/efi). NOTE: Use the --metadata=1.0 option, which places the RAID metadata at the end of the device; otherwise the bootloader will not be able to read the filesystem.

# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1 --metadata=1.0

5 . Create the same filesystem as the existing one on /dev/sda1 on the new degraded RAID array /dev/md0. Since the first partition is the EFI bootloader, it needs to be vfat (fat16).

# mkfs.vfat /dev/md0 

6 . Create a degraded RAID device for the second partition of the new disk. This will be used for your boot partition (/boot). NOTE: this array will also need to use --metadata=1.0.

# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2 --metadata=1.0

7 . Create the same filesystem as the one on /dev/sda2 on the new degraded RAID array /dev/md1. In this case we will format it as XFS.

# mkfs.xfs /dev/md1

8 . Copy the data from /boot and /boot/efi to the RAID devices.

# mkdir /mnt/temp_boot
# mount /dev/md1 /mnt/temp_boot
# mkdir /mnt/temp_boot/efi
# mount /dev/md0 /mnt/temp_boot/efi
# rsync -a /boot/ /mnt/temp_boot
# rsync -a /boot/efi/ /mnt/temp_boot/efi
# umount /mnt/temp_boot/efi
# umount /mnt/temp_boot

9 . Create a degraded RAID device for the root device. This will be used for the LVM root partition. NOTE: the default mdadm metadata version is 1.2, and the size of the reserved metadata area depends on the RAID array size. To be safe, and to avoid issues with the LVM migration later, we will use --metadata=1.0 instead, which places the metadata area at the end of the disk.

# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3 --metadata=1.0

10 . Prepare the root RAID device for LVM

# pvcreate /dev/md2
# vgextend rhel /dev/md2

11 . Perform a pvmove to move the LV from the old device to the RAID device. NOTE: The pvmove process may take a while depending on the size of the LV. If run on a production system, it may take longer and can affect overall system performance.

# pvmove --atomic /dev/sda3 /dev/md2

12 . Remove the old partition from the volume group and LVM stack.

# vgreduce rhel /dev/sda3
# pvremove /dev/sda3 

13 . Unmount the current boot partitions, and mount the new RAID volumes there.

# umount /boot/efi
# umount /boot
# mount /dev/md1 /boot
# mount /dev/md0 /boot/efi

13a. Add the old disks to the new arrays to complete the mirror.

# mdadm /dev/md0 -a /dev/sda1
# mdadm /dev/md1 -a /dev/sda2
# mdadm /dev/md2 -a /dev/sda3

13b . Monitor the RAID status and wait for the recovery to complete. Example:

# mdadm -D /dev/md2
/dev/md2:
           Version : 1.0
     Creation Time : Mon Jun  3 21:36:11 2019
        Raid Level : raid1
        Array Size : 15521664 (14.80 GiB 15.89 GB)
     Used Dev Size : 15521664 (14.80 GiB 15.89 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Mon Jun  3 22:25:13 2019
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 16% complete

              Name : rhel76efi:2  (local to host rhel76efi)
              UUID : 481893c5:715a259d:b4cc0038:6fae95d5
            Events : 200

    Number   Major   Minor   RaidDevice State
       2       8        3        0      spare rebuilding   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

14 . Update the /etc/fstab file with the new location for boot.
14a . Determine the UUID for the boot devices

# blkid | grep md
/dev/md0: SEC_TYPE="msdos" UUID="5371-07CF" TYPE="vfat" 
/dev/md1: UUID="d7c0f3e4-9d38-4a21-8321-f8fd8ed09510" TYPE="xfs" 
/dev/md2: UUID="BLXl8W-p1Ww-OO3L-nQVd-rGX5-dJ7h-rCFR9k" TYPE="LVM2_member" 

14b . Update /etc/fstab to reflect the new devices

# grep boot /etc/fstab
#UUID=4be87368-4d97-4225-b258-d4277588deda /boot                   xfs     defaults        0 0        <<<< No longer needed; may be removed
#UUID=16AC-FAE5          /boot/efi               vfat    umask=0077,shortname=winnt 0 0        <<<< No longer needed; may be removed
UUID=d7c0f3e4-9d38-4a21-8321-f8fd8ed09510 /boot                   xfs     defaults        0 0
UUID=5371-07CF         /boot/efi               vfat    umask=0077,shortname=winnt 0 0
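
The UUID swap in /etc/fstab can also be scripted. The sketch below demonstrates the substitution on a temporary copy rather than the live file, using the example UUIDs from this article; substitute the values from your own blkid output:

```shell
# Demonstration on a throwaway copy; on a real system you would back up
# and edit /etc/fstab itself. The UUIDs are the example values above.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=4be87368-4d97-4225-b258-d4277588deda /boot                   xfs     defaults        0 0
UUID=16AC-FAE5          /boot/efi               vfat    umask=0077,shortname=winnt 0 0
EOF
sed -i \
    -e 's/UUID=4be87368-4d97-4225-b258-d4277588deda/UUID=d7c0f3e4-9d38-4a21-8321-f8fd8ed09510/' \
    -e 's/UUID=16AC-FAE5/UUID=5371-07CF/' \
    "$fstab"
cat "$fstab"
```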

15 . There are several known issues with the lvmetad cache in RHEL 7, so it is best to disable it.

  • Modify use_lvmetad as below in /etc/lvm/lvm.conf:
	use_lvmetad = 1
to
	use_lvmetad = 0
  • Stop and disable the lvm2-lvmetad service and socket.
# systemctl disable lvm2-lvmetad.service --now
# systemctl disable lvm2-lvmetad.socket --now
# systemctl mask lvm2-lvmetad.socket

16 . Scan the mdadm metadata and write the RAID information to /etc/mdadm.conf.

# mdadm --examine --scan > /etc/mdadm.conf
# echo "MAILADDR root@$(hostname)" >> /etc/mdadm.conf

17 . Update /etc/default/grub with MD device UUID.
17a . Find the md UUID

# mdadm -D /dev/md{0..2} | grep UUID
              UUID : 73c07e24:517ff9ab:7fc2af66:b8e9a95d
              UUID : d377f0cc:06253a7f:f51b433e:b313a215
              UUID : ea64003d:1c491146:63c01376:33dc5f53
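
The rd.md.uuid parameters needed in the next step can be assembled from that output with a short pipeline. This is a sketch; the sample input embedded below reproduces the example UUIDs above, while on a real system you would pipe the mdadm output in directly:

```shell
# Build the rd.md.uuid=... kernel parameters from "mdadm -D" output.
# Sample input embedded for illustration; on a real system pipe
# "mdadm -D /dev/md{0..2} | grep UUID" into the awk command instead.
mdadm_uuids='
              UUID : 73c07e24:517ff9ab:7fc2af66:b8e9a95d
              UUID : d377f0cc:06253a7f:f51b433e:b313a215
              UUID : ea64003d:1c491146:63c01376:33dc5f53
'
params=$(printf '%s\n' "$mdadm_uuids" | awk '/UUID/ { printf "rd.md.uuid=%s ", $3 }')
echo "$params"
```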

17b . Modify the grub kernel command line to include the md UUIDs in /etc/default/grub

GRUB_CMDLINE_LINUX="rd.md.uuid=73c07e24:517ff9ab:7fc2af66:b8e9a95d rd.md.uuid=d377f0cc:06253a7f:f51b433e:b313a215 rd.md.uuid=ea64003d:1c491146:63c01376:33dc5f53  rd.lvm.lv=rhel/root crashkernel=auto  rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet"

18 . Update the GRUB configuration.

RHEL7

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg 

RHEL8

# grub2-editenv - set "$(grub2-editenv - list | grep kernelopts) <NEW_PARAMETER>"

For example, using the parameters from this article:

# grub2-editenv - set "kernelopts=root=/dev/mapper/rhel-root rd.md.uuid=73c07e24:517ff9ab:7fc2af66:b8e9a95d rd.md.uuid=d377f0cc:06253a7f:f51b433e:b313a215 rd.md.uuid=ea64003d:1c491146:63c01376:33dc5f53  rd.lvm.lv=rhel/root crashkernel=auto  rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet"

See: How do I permanently modify the kernel command line in RHEL 8?

19 . Update EFI bootmgr
19a . Remove old EFI boot entry

# efibootmgr -v | grep Linux
Boot0005* Red Hat Enterprise Linux 7	HD(1,GPT,66701419-ed69-4284-8821-b1cc42996dfe,0x800,0x64000)/File(\EFI\redhat\shimx64.efi)
Boot0006* Red Hat Enterprise Linux	HD(1,GPT,bfd569f0-a697-4a48-b1fd-b509cd89528b,0x800,0x64000)/File(\EFI\redhat\shimx64.efi)
# efibootmgr -b 5 -B
# efibootmgr -b 6 -B

19b . Add new entries to the EFI boot manager. GRUB cannot boot from an mdadm device directly, so the EFI system partition on each disk must be added to the boot manager.

# efibootmgr -c -d /dev/sda -p1 -l \\EFI\\redhat\\shimx64.efi -L "Red Hat Enterprise Linux"
# efibootmgr -c -d /dev/sdb -p1 -l \\EFI\\redhat\\shimx64.efi -L "Red Hat Enterprise Linux"
# efibootmgr -v
BootCurrent: 0006
BootOrder: 0005,0000,0001,0002,0003,0004
Boot0000* EFI Virtual disk (0.0)	/Pci(0x15,0x0)/Pci(0x0,0x0)/SCSI(0,0)
Boot0001* EFI Virtual disk (1.0)	/Pci(0x15,0x0)/Pci(0x0,0x0)/SCSI(1,0)
Boot0002* EFI VMware Virtual IDE CDROM Drive (IDE 1:0)	/Pci(0x7,0x1)/Ata(1,0,0)
Boot0003* EFI Network	/Pci(0x16,0x0)/Pci(0x0,0x0)/MAC(000c29b1128f,1)
Boot0004* EFI Internal Shell (Unsupported option)	MemoryMapped(11,0xe1a3000,0xe42ffff)/FvFile(c57ad6b7-0515-40a8-9d21-551652854e37)
Boot0005* Red Hat Enterprise Linux	HD(1,GPT,bfd569f0-a697-4a48-b1fd-b509cd89528b,0x800,0x64000)/File(\EFI\redhat\shimx64.efi)
Boot0006* Red Hat Enterprise Linux	HD(1,GPT,45d1fb56-7ae4-4cdf-a0ee-2455f17cf4ea,0x800,0x64000)/File(\EFI\redhat\shimx64.efi)

20 . Rebuild initramfs image with mdadmconf.
It is recommended you make a backup copy of the initramfs in case the new version has an unexpected problem:

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
# dracut -f -v --mdadmconf

21 . Reboot the machine to make sure everything is correctly utilizing the new software RAID devices.


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.