How to convert linear root disk to RAID1 on RHEL 7 and higher on Power Systems post-install?
Environment
- Red Hat Enterprise Linux 7, 8, 9, and 10
- Power Systems (ppc64/ppc64le)
Issue
- How do I convert my root disk to RAID1 after installing Red Hat Enterprise Linux 7/8/9/10 on Power Systems?
Resolution
- This guide assumes you are using the default partition scheme, but the steps can be adapted to any configuration.
- This guide assumes that /dev/vda is your main disk and that it needs to be mirrored with /dev/vdb.
- Downtime is required for this operation, and attempting it in a live production environment will cause problems. It is recommended to back up critical data and verify the backup before making any changes to the underlying storage, LVM, or filesystems, to avoid unforeseen problems.
- Note: This procedure, along with any procedure that involves moving /boot or / across devices (except via pvmove), is unsupported. Ensure you take proper backups before attempting it.
- Gather the partition information from your main disk /dev/vda:

```
# parted /dev/vda u s p
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 31457280s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start     End        Size       Type     File system  Flags
 1      2048s     10239s     8192s      primary               boot, prep
 2      10240s    2107391s   2097152s   primary  xfs
 3      2107392s  31457279s  29349888s  primary               lvm
```
- Using the start and end sectors, reproduce the partitioning scheme from the previous command on the new, unused disk:

```
# parted /dev/vdb mklabel msdos
# parted /dev/vdb mkpart primary 2048s 10239s
# parted /dev/vdb mkpart primary 10240s 2107391s
# parted /dev/vdb mkpart primary 2107392s 31457279s
```
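If there are many partitions, the mkpart commands can be generated from the sector table instead of retyped. The following is a minimal sketch, not part of the original procedure: NEW_DISK and emit_mkpart_cmds are hypothetical names, the sector pairs are the example values from the previous step, and the script only prints the parted commands rather than executing them.

```
#!/bin/sh
# Sketch: print the 'parted mkpart' commands that reproduce a partition
# table on a new disk, given "start end" sector pairs copied from
# 'parted <old-disk> u s p'. Nothing here touches a real disk.
NEW_DISK=/dev/vdb   # hypothetical target device; adjust as needed

emit_mkpart_cmds() {
    # Read one "start end" pair per line from stdin and print the
    # corresponding parted command.
    while read -r start end; do
        printf 'parted %s mkpart primary %s %s\n' "$NEW_DISK" "$start" "$end"
    done
}

# Sector pairs taken from the example 'parted /dev/vda u s p' output.
emit_mkpart_cmds <<'EOF'
2048s 10239s
10240s 2107391s
2107392s 31457279s
EOF
```

Review the printed commands before running them against the new disk.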
- Copy the contents of the PowerPC Reference Platform (PReP) boot partition from /dev/vda1 to /dev/vdb1:

```
# dd if=/dev/vda1 of=/dev/vdb1
```
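Since dd gives no integrity feedback, it can be worth confirming that the copy is byte-identical with cmp before moving on. The following is an illustrative sketch only: it uses two temporary files as stand-ins for /dev/vda1 and /dev/vdb1 so it can be run safely anywhere; on the real system you would compare the two partition devices directly.

```
#!/bin/sh
# Sketch: verify a dd copy byte-for-byte with cmp. Temporary files
# stand in for the real PReP partitions /dev/vda1 and /dev/vdb1.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=512 count=16 2>/dev/null  # fake source "partition"
dd if="$src" of="$dst" 2>/dev/null                        # the copy step
if cmp -s "$src" "$dst"; then
    echo "copy verified: contents identical"
else
    echo "copy differs: re-run dd" >&2
fi
rm -f "$src" "$dst"
```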
- Set the prep and boot flags on the first partition of both disks, and set the raid flag on the second and third partitions of both disks:

```
# parted /dev/vda set 1 prep on
# parted /dev/vda set 1 boot on
# parted /dev/vdb set 1 prep on
# parted /dev/vdb set 1 boot on
# parted /dev/vda set 2 raid on
# parted /dev/vdb set 2 raid on
# parted /dev/vda set 3 raid on
# parted /dev/vdb set 3 raid on
```
- Create a degraded RAID1 device on the second partition of the new disk. (The first partition is the PReP partition, so it cannot be placed under RAID.) The second partition will be used for /boot:

```
# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/vdb2
```
- Create the same filesystem as on /dev/vda2 on the new degraded RAID array /dev/md0:

```
# mkfs.xfs /dev/md0
```
- Mount the new RAID array and copy over the files from /boot:

```
# mkdir /mnt/md0
# mount /dev/md0 /mnt/md0
# rsync -a /boot/ /mnt/md0/
# sync
# umount /mnt/md0
# rmdir /mnt/md0
```
- Unmount the current /boot and mount the new RAID volume there:

```
# umount /boot
# mount /dev/md0 /boot
```
- Add the old disk's partition to the new array to complete the mirror:

```
# mdadm /dev/md0 -a /dev/vda2
```
- Monitor the RAID status and wait for the recovery to complete:

```
# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Sep 25 21:13:18 2025
        Raid Level : raid1
        Array Size : 1048512 (1023.94 MiB 1073.68 MB)
     Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Sep 25 21:17:09 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : node3:0  (local to host node3)
              UUID : 2ea5e20b:4372f3f1:46a75165:9cafe86a
            Events : 35

    Number   Major   Minor   RaidDevice State
       2     252        2        0      active sync   /dev/vda2
       1     252       18        1      active sync   /dev/vdb2
```
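Instead of re-running mdadm -D by hand, the sync state can also be read from /proc/mdstat. The following is a minimal sketch under the assumption that /proc/mdstat uses the usual format seen on RHEL; raid_idle is a hypothetical helper name, written to accept an alternative file path so it can be exercised without a live array.

```
#!/bin/sh
# Sketch: report whether any md array is still resyncing or recovering,
# based on the contents of /proc/mdstat (or a file passed as $1).
raid_idle() {
    mdstat="${1:-/proc/mdstat}"
    # An in-progress rebuild shows a progress line such as:
    #   [=>...................]  recovery =  7.5% (78720/1048512)
    if grep -Eq 'resync|recovery' "$mdstat"; then
        return 1   # still syncing
    fi
    return 0       # no rebuild in progress
}

# Example polling loop (commented out; needs a live array):
# while ! raid_idle; do sleep 10; done
```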
- Find the UUID of /dev/md0 using blkid:

```
# blkid | grep md0
/dev/md0: UUID="4eda1a61-6f5c-4a65-bf1a-c582cc7b27b5" BLOCK_SIZE="512" TYPE="xfs"
```
- Update the /etc/fstab file with the new location for /boot:

```
# grep boot /etc/fstab
#UUID=d0ba5a87-bae6-4c5c-a4f3-c13b9dcf312f /boot xfs defaults 0 0
UUID=4eda1a61-6f5c-4a65-bf1a-c582cc7b27b5 /boot xfs defaults 0 0
```
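The new fstab entry can also be derived from the blkid output with a one-line sed transformation. A minimal sketch, not from the original article: fstab_line_for_boot is a hypothetical helper, and the input line is the example blkid output from the previous step.

```
#!/bin/sh
# Sketch: turn a blkid line for /dev/md0 into the matching /etc/fstab
# entry for /boot.
fstab_line_for_boot() {
    # Extract the UUID="..." value from stdin and print an fstab entry.
    sed -n 's/.*UUID="\([^"]*\)".*/UUID=\1 \/boot xfs defaults 0 0/p'
}

echo '/dev/md0: UUID="4eda1a61-6f5c-4a65-bf1a-c582cc7b27b5" BLOCK_SIZE="512" TYPE="xfs"' \
    | fstab_line_for_boot
```

This prints the UUID=... /boot line, which can be reviewed and then appended to /etc/fstab (with the old entry commented out).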
- Create a degraded RAID device on the third partition of the new disk. This will be used for your LVM physical volume:

```
# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/vdb3 --metadata=1.2
```
- Add this new array to your LVM stack by extending your existing volume group with it:

```
# vgextend rhel /dev/md1
  Physical volume "/dev/md1" successfully created
  Volume group "rhel" successfully extended
```

Note: mdadm metadata version 1.2 places the RAID superblock 4K from the beginning of the device and creates a reserved metadata area. The size of this area varies with the array size and can affect the size of the physical volume created on the device. If pvmove reports insufficient free extents in the next few steps, perform the following to work around the issue. Alternatively, remove the swap logical volume and recreate it a few megabytes smaller so that it fits within the new physical volume.
14.a Remove the md device from the VG and stop it:

```
# vgreduce rhel /dev/md1
# mdadm --stop /dev/md1
```

14.b Recreate the md device with metadata version 1.0, ignoring any warning messages:

```
# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/vdb3 --metadata=1.0
```

14.c Add the device back to the VG:

```
# vgextend rhel /dev/md1
```
- Move the physical extents from the old partition to the new array:

```
# pvmove --atomic /dev/vda3 /dev/md1
```
- Remove the old partition from the volume group and the LVM stack:

```
# vgreduce rhel /dev/vda3
# pvremove /dev/vda3
```
- [Only for RHEL 7] There are known issues with the lvmetad cache, so it is best to disable it. In /etc/lvm/lvm.conf, change:

```
use_lvmetad = 1
```

to:

```
use_lvmetad = 0
```

Then stop and disable the lvm2-lvmetad service:

```
# systemctl stop lvm2-lvmetad.service
# systemctl disable lvm2-lvmetad.service
```
- Add the old partition to the degraded array to complete the mirror:

```
# mdadm /dev/md1 -a /dev/vda3
```
- Monitor the RAID status and wait for the recovery to complete:

```
# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Sep 25 21:21:45 2025
        Raid Level : raid1
        Array Size : 14674816 (13.99 GiB 15.03 GB)
     Used Dev Size : 14674816 (13.99 GiB 15.03 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Sep 25 21:47:49 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : node3:1  (local to host node3)
              UUID : dbabe015:0d22b1a3:0dc8f2a8:b2323b60
            Events : 89

    Number   Major   Minor   RaidDevice State
       2     252        3        0      active sync   /dev/vda3
       1     252       19        1      active sync   /dev/vdb3
```
- [Only for RHEL 9 and RHEL 10] Update the RAID device in the LVM system.devices file:

```
# lvmdevices --deldev /dev/vda3
# lvmdevices --adddev /dev/md1
```
- Scan the mdadm metadata and write the RAID information to /etc/mdadm.conf:

```
# mdadm --examine --scan > /etc/mdadm.conf
```
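For reference, the resulting /etc/mdadm.conf should contain one ARRAY line per array, similar to the following. This is an illustrative example using the UUIDs from this article; the names, device paths, and metadata version (especially if the 1.0 workaround in step 14 was used) may differ on your system.

```
ARRAY /dev/md/0 metadata=1.2 UUID=2ea5e20b:4372f3f1:46a75165:9cafe86a name=node3:0
ARRAY /dev/md/1 metadata=1.2 UUID=dbabe015:0d22b1a3:0dc8f2a8:b2323b60 name=node3:1
```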
- Update /etc/default/grub with the MD device UUIDs:

```
# mdadm -D /dev/md* | grep UUID
              UUID : 2ea5e20b:4372f3f1:46a75165:9cafe86a
              UUID : dbabe015:0d22b1a3:0dc8f2a8:b2323b60
# grep GRUB_CMDLINE_LINUX /etc/default/grub
#GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet rd.md.uuid=2ea5e20b:4372f3f1:46a75165:9cafe86a rd.md.uuid=dbabe015:0d22b1a3:0dc8f2a8:b2323b60"
```
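The rd.md.uuid arguments can be generated from the mdadm output rather than retyped, which avoids transcription errors in the UUIDs. A minimal sketch: uuid_args is a hypothetical helper, fed here with the example UUID lines shown above.

```
#!/bin/sh
# Sketch: convert 'UUID : <value>' lines (as printed by 'mdadm -D')
# into rd.md.uuid=<value> kernel arguments for GRUB_CMDLINE_LINUX.
uuid_args() {
    awk '/UUID/ {print "rd.md.uuid=" $NF}'
}

# On a real system this would be: mdadm -D /dev/md* | grep UUID | uuid_args
printf '%s\n' \
    'UUID : 2ea5e20b:4372f3f1:46a75165:9cafe86a' \
    'UUID : dbabe015:0d22b1a3:0dc8f2a8:b2323b60' \
    | uuid_args
```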
- Update grub.cfg:

```
# grub2-mkconfig -o /boot/grub2/grub.cfg
```

On RHEL 9 and RHEL 10, use the following command instead to update the GRUB configuration:

```
# grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline
```
- Verify that both of your disks are listed in /boot/grub2/device.map; add them if needed:

```
# cat /boot/grub2/device.map
(hd0)      /dev/vda
(hd1)      /dev/vdb
```
- Reinstall the grub2-ppc64le and grub2-ppc64le-modules packages, then run grub2-install (ignore the warning it reports for the PReP partition):

```
# yum reinstall grub2-ppc64le grub2-ppc64le-modules -y
# grub2-install /dev/vda1
# grub2-install /dev/vdb1
```

Note: The ppc64le packages are used here because the test system was ppc64le; on a ppc64 system, use the grub2-ppc64 and grub2-ppc64-modules packages instead.
- Rebuild the initramfs image with the mdadm configuration. It is recommended to make a backup of the current initramfs in case the new image has an unexpected problem:

```
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
# dracut -f --mdadmconf
```
- Reboot the machine to make sure everything is correctly utilizing the new software RAID devices.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.