How do I convert my root disk to RAID1 after installing Red Hat Enterprise Linux 9 or 10?

Solution Verified - Updated

Environment

  • Red Hat Enterprise Linux (RHEL) 9, 10
  • Logical Volume Manager (LVM)
  • Software RAID (mdadm)

Issue

  • Need to convert a non-RAID root disk to an mdadm RAID1 mirror after installation of Red Hat Enterprise Linux 9 or 10.

Resolution

  • This guide assumes you are using the default partition scheme, but the steps can be adapted to other configurations.
  • This guide assumes that /dev/sda is your main disk, and that it needs to be in a mirror with /dev/sdb.
  • Downtime is required for this operation, and attempting it in a live production environment will cause problems. Back up all critical data and verify the backup before making any changes to the underlying storage, LVM, or filesystems to avoid any unforeseen problems.
  • For instructions on converting a PPC-based system, see How to convert linear root disk to RAID1 on RHEL 7/8/9 on Power Systems post-install?
  • Note: This procedure, along with any that involves moving /boot or / across devices (except via pvmove), is unsupported. Ensure you take proper backups before attempting it.

1 . Gather the partition information from your main disk /dev/sda.

# parted /dev/sda unit s print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 20971520s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start     End        Size       File system  Name                  Flags
 1      2048s     1230847s   1228800s   fat32        EFI System Partition  boot, esp
 2      1230848s  3327999s   2097152s   xfs
 3      3328000s  20969471s  17641472s                                     lvm

2 . Using the start and end sectors, reproduce the partitioning scheme from the previous command on the new unused disk.

# parted /dev/sdb mklabel gpt 
# parted /dev/sdb mkpart primary 2048s 1230847s  
# parted /dev/sdb mkpart primary 1230848s 3327999s 
# parted /dev/sdb mkpart primary 3328000s 20969471s 
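Transcribing sector numbers by hand is error-prone. As an alternative, the mkpart commands can be generated from parted's machine-readable output (`parted -m`). A minimal sketch, using sample lines that mimic `parted -m /dev/sda unit s print` for the layout in step 1; always review the generated commands before running them:

```shell
# Sample of parted's machine-readable output (parted -m ... unit s print),
# matching the layout shown in step 1.
# Fields are number:start:end:size:fs:name:flags
sample='1:2048s:1230847s:1228800s:fat32:EFI System Partition:boot, esp;
2:1230848s:3327999s:2097152s:xfs::;
3:3328000s:20969471s:17641472s:::lvm;'

# Emit one mkpart command per partition; review the output, then run it.
cmds=$(printf '%s\n' "$sample" |
    awk -F: '{ printf "parted /dev/sdb mkpart primary %s %s\n", $2, $3 }')
echo "$cmds"
```

In practice you would pipe the real `parted -m /dev/sda unit s print` output (skipping its header lines) instead of the sample variable.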

3 . Add the RAID flag on all partitions that will be mirrored.

# parted /dev/sda set 1 raid on
# parted /dev/sda set 2 raid on
# parted /dev/sdb set 1 raid on
# parted /dev/sdb set 2 raid on

On RHEL 10, legacy BIOS systems also use a GPT partition table, which requires a bios_grub partition. Follow these steps on a legacy BIOS system with a GPT partition table.

Set pmbr_boot on.

# parted /dev/vdb mklabel gpt disk_set pmbr_boot on 

Set the `bios_grub` flag on the first partition.

# parted /dev/vdb set 1 bios_grub on

Copy the contents of the bios_grub partition from /dev/vda1 to /dev/vdb1.

# dd if=/dev/vda1 of=/dev/vdb1

4 . Create a degraded RAID device on the first partition of the new disk. This will be used for the EFI bootloader partition (/boot/efi). NOTE: Use the --metadata=1.0 option, which places the RAID metadata at the end of the device; otherwise the bootloader will not be able to read the filesystem.

# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1 --metadata=1.0

5 . Create the same filesystem as on /dev/sda1 on the new degraded RAID array /dev/md0. Since the first partition is the EFI bootloader, it must be formatted as vfat.

# mkfs.vfat /dev/md0 

6 . Create a degraded RAID device for the second partition of the new disk. This will be used for your boot partition (/boot). NOTE: this array also needs to be created with --metadata=1.0.

# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2 --metadata=1.0

7 . Create the same filesystem as the one on /dev/sda2 on the new degraded RAID array /dev/md1. In this case, format it as XFS.

# mkfs.xfs /dev/md1

8 . Copy the data from /boot and /boot/efi to the new RAID devices.

# mkdir /mnt/temp_boot
# mount /dev/md1 /mnt/temp_boot
# mkdir /mnt/temp_boot/efi
# mount /dev/md0 /mnt/temp_boot/efi
# rsync -a /boot/ /mnt/temp_boot
# rsync -a /boot/efi/ /mnt/temp_boot/efi
# umount /mnt/temp_boot/efi
# umount /mnt/temp_boot
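Before switching the mounts over, it is worth verifying that the copy is complete. A small illustrative sketch (the `same_file_count` helper is not part of the procedure; rsync's own `--dry-run` output is another option):

```shell
# Illustrative helper: compare the number of regular files in two trees.
# After step 8 (before unmounting), run e.g.:
#   same_file_count /boot /mnt/temp_boot
same_file_count() {
    a=$(find "$1" -type f | wc -l | tr -d ' ')
    b=$(find "$2" -type f | wc -l | tr -d ' ')
    [ "$a" -eq "$b" ] && echo "OK: $a files in both trees" \
                      || echo "MISMATCH: $a vs $b files"
}
```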

9 . Unmount the current boot partitions and mount the new RAID volumes in their place.
9a. Unmount /boot/efi and /boot.

# umount /boot/efi
# umount /boot

9b. Find the UUIDs of /dev/md0 and /dev/md1 using blkid.

# blkid | grep "md[0-1]"
/dev/md0: UUID="4D03-CDEC" TYPE="vfat"
/dev/md1: UUID="602abcd5-133f-4203-b81c-f6f8367ed103" TYPE="xfs"

9c. Update /etc/fstab to reflect the new devices.

# cat /etc/fstab  |grep boot
#UUID=34fd98ed-c3c9-4966-8d03-e5c48b2a1925 /boot                   xfs     defaults        0 0 << Old /boot
UUID=602abcd5-133f-4203-b81c-f6f8367ed103 /boot                   xfs     defaults        0 0
#UUID=ECAC-613A          /boot/efi               vfat    umask=0077,shortname=winnt 0 2 << Old /boot/efi
UUID=4D03-CDEC          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
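The fstab edit can also be scripted. A minimal sketch that operates on a throwaway copy, using the sample UUIDs from steps 9b and 9c; adapt the UUIDs and apply to /etc/fstab only after reviewing the result:

```shell
# Work on a copy; the UUIDs below are the sample values from this article.
fstab=$(mktemp)
printf 'UUID=34fd98ed-c3c9-4966-8d03-e5c48b2a1925 /boot xfs defaults 0 0\n' > "$fstab"

old_uuid=34fd98ed-c3c9-4966-8d03-e5c48b2a1925
new_uuid=602abcd5-133f-4203-b81c-f6f8367ed103

# Keep the old entry as a comment for easy rollback, then append the new one.
sed -i "s|^UUID=$old_uuid |#&|" "$fstab"
printf 'UUID=%s /boot xfs defaults 0 0\n' "$new_uuid" >> "$fstab"
```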

9d. Reload the systemd manager so it regenerates the mount units from the updated fstab.

# systemctl daemon-reload

9e. Mount /boot and /boot/efi using the new RAID volumes.

# mount /boot/
# mount /boot/efi/

10 . Create a degraded RAID device for the third partition of the new disk. This will be used for the LVM root volume. NOTE: The default mdadm metadata version is 1.2, and the size of its reserved metadata area depends on the array size. To be safe and avoid issues with the LVM migration later, use --metadata=1.0 instead, which places the metadata area at the end of the device.

# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3 --metadata=1.0

11 . Prepare the root RAID device for LVM

# pvcreate /dev/md2
# vgextend rhel /dev/md2

12 . Perform a pvmove to move the LVs from the old device to the RAID device. NOTE: The pvmove may take a while depending on the size of the LVs; if run on a production system, it may take longer and can affect overall system performance.

# pvmove --atomic /dev/sda3 /dev/md2

When creating a degraded RAID, the available space in the LVM physical volume (PV) might be slightly reduced due to the higher "Data offset". As a result, there may not be enough space to perform a full and proper migration. In such cases, a possible workaround is to reduce the size of the swap volume slightly (by a few megabytes, depending on the Data Offset), by removing and recreating it.
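Before step 13, you can confirm that no extents remain allocated on the old PV. A hedged sketch that parses one line of `pvs --noheadings` output (default columns PV, VG, Fmt, Attr, PSize, PFree); the helper and the sample line are illustrative, not part of the procedure:

```shell
# Succeeds when PSize equals PFree, i.e. nothing is allocated on the PV.
# Feed it one line of `pvs --noheadings` output for the old device.
pv_is_empty() {
    set -- $1             # split the pvs line into whitespace-separated fields
    [ "$5" = "$6" ]       # field 5 = PSize, field 6 = PFree
}

# Example with an illustrative line for a fully evacuated /dev/sda3:
pv_is_empty "/dev/sda3 rhel lvm2 a-- 8.41g 8.41g" && echo "safe to vgreduce"
```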
13 . Remove the old partition from the volume group and LVM stack.

# vgreduce rhel /dev/sda3
# pvremove /dev/sda3 

14 . Add the old disks to the new arrays to complete the mirror.

# mdadm /dev/md0 -a /dev/sda1
# mdadm /dev/md1 -a /dev/sda2
# mdadm /dev/md2 -a /dev/sda3

14a. Monitor the RAID status and wait for the recovery to complete. Example:

# mdadm -D /dev/md2
/dev/md2:
           Version : 1.0
     Creation Time : Sat Feb 10 11:52:52 2024
        Raid Level : raid1
        Array Size : 8820608 (8.41 GiB 9.03 GB)
     Used Dev Size : 8820608 (8.41 GiB 9.03 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Feb 10 12:00:51 2024
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 58% complete

              Name : localhost.localdomain:2  (local to host localhost.localdomain)
              UUID : fb8ef380:82c144c0:2cdebe0b:d76cba5f
            Events : 85

    Number   Major   Minor   RaidDevice State
       2       8        3        0      spare rebuilding   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
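Rather than polling mdadm -D by hand, you can watch /proc/mdstat. A sketch; the helper takes the file as an optional parameter so it can be exercised against saved output, and on a live system you would call it with no argument:

```shell
# Count md arrays still resyncing or recovering; 0 means all rebuilds are done.
rebuilds_in_progress() {
    grep -cE 'recovery|resync' "${1:-/proc/mdstat}" || true
}

# Typical use: poll until every rebuild has finished.
# while [ "$(rebuilds_in_progress)" -gt 0 ]; do sleep 30; done
```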

15 . Update the LVM devices file (/etc/lvm/devices/system.devices) with the new PV/RAID device and remove the old PV. If the new PV's UUID is missing from this file, the system can fail to boot.

# lvmdevices --deldev /dev/sda3 

16 . Scan mdadm metadata and write the RAID information to /etc/mdadm.conf.

# mdadm --examine --scan >/etc/mdadm.conf
# echo "MAILADDR root@$(hostname)" >> /etc/mdadm.conf
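A quick sanity check on the result: with the layout in this article, /etc/mdadm.conf should contain three ARRAY lines (md0, md1, md2). The helper below is illustrative and takes the file as a parameter; adjust the expected count to your own layout:

```shell
# Count ARRAY definitions in an mdadm.conf-style file.
array_count() {
    grep -c '^ARRAY' "$1" || true
}

# Expected for this article's layout (md0, md1 and md2):
# [ "$(array_count /etc/mdadm.conf)" -eq 3 ] || echo "unexpected array count"
```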

17 . Update the grub configuration with the MD device UUIDs.
17a. Find the md UUIDs.

# mdadm -D /dev/md{0..2} | grep UUID
              UUID : 11d7411c:48fad6e6:1f304938:6fb1b4bd
              UUID : eaf251a3:9838fb5b:6175e750:5fa00181
              UUID : fb8ef380:82c144c0:2cdebe0b:d76cba5f

17b. Modify the grub environment file and boot loader configuration with the md device UUIDs.

# grubby --update-kernel=ALL --args="rd.md.uuid=11d7411c:48fad6e6:1f304938:6fb1b4bd rd.md.uuid=eaf251a3:9838fb5b:6175e750:5fa00181 rd.md.uuid=fb8ef380:82c144c0:2cdebe0b:d76cba5f" 

# cat /boot/loader/entries/f518263cab064e11ac9a7287d5ca9d34-5.14.0-362.8.1.el9_3.x86_64.conf | grep options

options root=/dev/mapper/rhel-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rd.md.uuid=11d7411c:48fad6e6:1f304938:6fb1b4bd rd.md.uuid=eaf251a3:9838fb5b:6175e750:5fa00181 rd.md.uuid=fb8ef380:82c144c0:2cdebe0b:d76cba5f
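The rd.md.uuid arguments can also be assembled from the mdadm output rather than pasted by hand. A sketch using the sample UUIDs from step 17a; in practice you would pipe from `mdadm -D /dev/md{0..2} | grep UUID` and review the echoed command before running it:

```shell
# Build the rd.md.uuid kernel arguments from mdadm -D UUID lines.
# The values below are the sample UUIDs from step 17a.
uuid_lines='UUID : 11d7411c:48fad6e6:1f304938:6fb1b4bd
UUID : eaf251a3:9838fb5b:6175e750:5fa00181
UUID : fb8ef380:82c144c0:2cdebe0b:d76cba5f'

# Field 3 of each "UUID : <value>" line is the UUID itself.
args=$(printf '%s\n' "$uuid_lines" | awk '{ printf "rd.md.uuid=%s ", $3 }')

# Echo the grubby command for review; remove the echo to actually run it.
echo grubby --update-kernel=ALL --args=\"$args\"
```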

18 . Update the EFI boot manager (this step is not required for systems with legacy BIOS).
18a. Remove the old EFI boot entry.

# efibootmgr -v | grep Linux
Boot0007* Red Hat Enterprise Linux  HD(1,GPT,37261a3a-f6f6-490c-87d9-cbdc6dec45d9,0x800,0x12c000)/File(\EFI\redhat\shimx64.efi)
# efibootmgr -b 7 -B

18b. Add new entries to the EFI boot manager. GRUB does not support booting from an mdadm device, so both EFI partitions need to be added to the boot manager.

# efibootmgr -c -d /dev/sda -p1 -l \\EFI\\redhat\\shimx64.efi -L "Red Hat Enterprise Linux"
# efibootmgr -c -d /dev/sdb -p1 -l \\EFI\\redhat\\shimx64.efi -L "Red Hat Enterprise Linux"
# efibootmgr -v
BootCurrent: 0007
Timeout: 0 seconds
BootOrder: 0009,0007,0001,0000,0002,0003,0004,0005,0006,0008
Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* UEFI QEMU DVD-ROM QM00001   PciRoot(0x0)/Pci(0x1f,0x2)/Sata(0,65535,0)N.....YM....R,Y.
Boot0002* UEFI PXEv4 (MAC:5254002AE67F) PciRoot(0x0)/Pci(0x2,0x1)/Pci(0x0,0x0)/MAC(5254002ae67f,1)/IPv4(0.0.0.00.0.0.0,0,0)N.....YM....R,Y.
Boot0003* UEFI PXEv6 (MAC:5254002AE67F) PciRoot(0x0)/Pci(0x2,0x1)/Pci(0x0,0x0)/MAC(5254002ae67f,1)/IPv6([::]:<->[::]:,0,0)N.....YM....R,Y.
Boot0004* UEFI HTTPv4 (MAC:5254002AE67F)  PciRoot(0x0)/Pci(0x2,0x1)/Pci(0x0,0x0)/MAC(5254002ae67f,1)/IPv4(0.0.0.00.0.0.0,0,0)/Uri()N.....YM....R,Y.
Boot0005* UEFI HTTPv6 (MAC:5254002AE67F)  PciRoot(0x0)/Pci(0x2,0x1)/Pci(0x0,0x0)/MAC(5254002ae67f,1)/IPv6([::]:<->[::]:,0,0)/Uri()N.....YM....R,Y.
Boot0006* UEFI QEMU HARDDISK QM00005  PciRoot(0x0)/Pci(0x1f,0x2)/Sata(2,65535,0)N.....YM....R,Y.
Boot0007* Red Hat Enterprise Linux  HD(1,GPT,37261a3a-f6f6-490c-87d9-cbdc6dec45d9,0x800,0x12c000)/File(\EFI\redhat\shimx64.efi)
Boot0008* UEFI QEMU HARDDISK QM00007  PciRoot(0x0)/Pci(0x1f,0x2)/Sata(3,65535,0)N.....YM....R,Y.
Boot0009* Red Hat Enterprise Linux  HD(1,GPT,9f1063a8-25db-4528-98e3-c6c87bcda8aa,0x800,0x12c000)/File(\EFI\redhat\shimx64.efi)

19 . Update grub2.cfg and the MBR.
19a. For UEFI-based systems.

# grub2-mkconfig -o /boot/grub2/grub.cfg  --update-bls-cmdline

19b. Update the MBR on both disks on a system with legacy BIOS.

# grub2-install /dev/sda
# grub2-install /dev/sdb

19c. For RHEL 9 and RHEL 10 UEFI-based systems, update /boot/efi/EFI/redhat/grub.cfg with the new /boot filesystem UUID.

# lsblk -o NAME,UUID,MOUNTPOINT |grep "/boot" 
│ └─md0                   F3E5-D66E                              /boot/efi
│ └─md1                   619ab6a2-40bc-4844-a81e-37d7ced0898e   /boot  <=====
│ └─md0                   F3E5-D66E                              /boot/efi
│ └─md1                   619ab6a2-40bc-4844-a81e-37d7ced0898e   /boot

# cat /boot/efi/EFI/redhat/grub.cfg 
search --no-floppy --fs-uuid --set=dev 619ab6a2-40bc-4844-a81e-37d7ced0898e      <====
set prefix=($dev)/grub2

export $prefix

20 . Rebuild the initramfs image with mdadmconf.
It is recommended to make a backup copy of the initramfs in case the new version has an unexpected problem:

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
# dracut -f --mdadmconf

21 . Reboot the machine to verify that everything is correctly using the new software RAID devices.
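After the reboot, confirm the mirrors are healthy. A sketch that checks for the [UU] member state in /proc/mdstat; the helper is parameterized so it can be tested against saved output, and on the live system you would call it with no argument:

```shell
# A two-disk RAID1 in good health shows "[UU]" in /proc/mdstat; an
# underscore in the brackets marks a missing or rebuilding member.
degraded_arrays() {
    grep -c '\[U*_U*\]' "${1:-/proc/mdstat}" || true
}

# Expect 0 once all three mirrors are fully synced:
# [ "$(degraded_arrays)" -eq 0 ] && echo "all mirrors clean"
```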


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.