How do I convert my root disk to RAID1 after installation of Red Hat Enterprise Linux 7?
Environment
- Red Hat Enterprise Linux (RHEL) 7
- Logical Volume Manager (LVM)
- Software RAID (mdadm)
Issue
- Need to convert a non-RAID root disk to a RAID1 mirror after installation of Red Hat Enterprise Linux 7.
Resolution
- This guide assumes you are utilizing the default partition scheme, but the steps can be modified to match any type of configuration.
- This guide assumes that /dev/sda is your main disk and that it needs to be mirrored with /dev/sdb.
- Downtime will be required for this operation, and attempting it in a live production environment will cause problems. Back up all critical data and verify the backup before making any changes to the underlying storage, LVM, or filesystems, to avoid any unforeseen problems.
- For instructions on converting an EFI-based system, see How to convert single disk EFI boot to software RAID after installation.
- Note: This procedure, along with any procedure that involves moving /boot or / across devices (except via pvmove), is unsupported. Ensure that you take proper backups before attempting it.
1. Gather the partition information from your main disk /dev/sda.
# parted /dev/sda u s p
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 16777216s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 2048s 1026047s 1024000s primary xfs boot
2 1026048s 16777215s 15751168s primary lvm
2. Using the start and end sectors, reproduce the partitioning scheme from the previous command on the new, unused disk.
# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 2048s 1026047s
# parted /dev/sdb mkpart primary 1026048s 16777215s
3. Add the RAID flag to all partitions that will be mirrored.
# parted /dev/sda set 1 raid on
# parted /dev/sda set 2 raid on
# parted /dev/sdb set 1 raid on
# parted /dev/sdb set 2 raid on
4. Create a degraded RAID device on the first partition of the new disk. This will be used for your boot partition (/boot). NOTE: Use the --metadata=1.0 option to store /boot on this device; otherwise, the bootloader will not be able to read the metadata.
# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1 --metadata=1.0
5. Create the same filesystem on the new degraded RAID array /dev/md0 as exists on /dev/sda1. XFS is the default filesystem in Red Hat Enterprise Linux 7.
# mkfs.xfs /dev/md0
6. Mount the new RAID array and copy over the files from /boot.
# mkdir /mnt/md0
# mount /dev/md0 /mnt/md0
# rsync -a /boot/ /mnt/md0/
# sync
# umount /mnt/md0
# rmdir /mnt/md0
7. Unmount the current /boot, and mount the new RAID volume there.
# umount /boot
# mount /dev/md0 /boot
8. Add the old partition to the new array to complete the mirror.
# mdadm /dev/md0 -a /dev/sda1
9. Monitor the RAID status and wait for the recovery to complete.
# mdadm -D /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Thu Jun 23 15:50:45 2016
Raid Level : raid1
Array Size : 511936 (500.02 MiB 524.22 MB)
Used Dev Size : 511936 (500.02 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Jun 23 15:56:50 2016
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 20% complete
UUID : b77a60f6:0161d823:98d3e032:c206240a (local to host rhel7-node1.example.com)
Events : 0.24
Number Major Minor RaidDevice State
2 8 1 0 spare rebuilding /dev/sda1
1 8 17 1 active sync /dev/sdb1
10. Find the UUID of /dev/md0 using blkid.
# blkid | grep md0
/dev/md0: UUID="25634ab8-715f-40b8-a073-57d8e0f426ae" TYPE="xfs"
11. Update the /etc/fstab file with the new location for /boot.
# grep boot /etc/fstab
#UUID=2578f525-586d-48bf-92ea-b5597a166355 /boot xfs defaults 1 2
UUID=25634ab8-715f-40b8-a073-57d8e0f426ae /boot xfs defaults 1 2
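As a hedged sketch, the UUID swap can be rehearsed with sed on a sample file before editing the real /etc/fstab (the sample path is illustrative; the UUIDs are the example values from this article):

```shell
# Illustrative only: rehearse the fstab UUID swap on a sample file.
# OLD/NEW are the example UUIDs used in this article.
OLD=2578f525-586d-48bf-92ea-b5597a166355
NEW=25634ab8-715f-40b8-a073-57d8e0f426ae
printf 'UUID=%s /boot xfs defaults 1 2\n' "$OLD" > /tmp/fstab.sample
sed -i "s/$OLD/$NEW/" /tmp/fstab.sample
cat /tmp/fstab.sample
```

On the live system, substitute the actual old and new UUIDs and run the sed command against /etc/fstab (after making a backup copy).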
12. Create a degraded RAID device on the second partition of the new disk. This will be used for your LVM partition (/).
# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2 --metadata=1.2
13. Add this new array to your LVM stack by adding it to your existing volume group.
# vgextend rhel /dev/md1
Physical volume "/dev/md1" successfully created
Volume group "rhel" successfully extended
NOTE: mdadm metadata version 1.2 places the RAID superblock 4K from the beginning of the disk and creates a reserved metadata area. The size of this area can vary with the array size and can affect the PV allocation created on the disk. If pvmove reports insufficient extents in the following steps, work around the issue as follows:
13a. Remove the md device from the VG and stop it.
# vgreduce rhel /dev/md1
# mdadm --stop /dev/md1
13b. Recreate the md device with metadata version 1.0; ignore any warning messages.
# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2 --metadata=1.0
13c. Add the device back to the VG.
# vgextend rhel /dev/md1
14. Move the physical extents from the old partition to the new array.
# pvmove /dev/sda2 /dev/md1
15. Remove the old partition from the volume group and LVM stack.
# vgreduce rhel /dev/sda2
# pvremove /dev/sda2
16. There are several known issues with the lvmetad cache in RHEL 7, so it is best to disable it.
In /etc/lvm/lvm.conf, change:
use_lvmetad = 1
to
use_lvmetad = 0
Then stop and disable the lvm2-lvmetad service.
# systemctl stop lvm2-lvmetad.service
# systemctl disable lvm2-lvmetad.service
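As a hedged sketch, the use_lvmetad change can be rehearsed with sed on a throwaway sample file first, so the expression is verified before applying it to the real /etc/lvm/lvm.conf (the sample path below is illustrative only):

```shell
# Illustrative only: rehearse the use_lvmetad edit on a sample file
# before applying the same sed expression to /etc/lvm/lvm.conf.
printf '    use_lvmetad = 1\n' > /tmp/lvm.conf.sample
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /tmp/lvm.conf.sample
grep use_lvmetad /tmp/lvm.conf.sample
```

Once the substitution looks correct, run the same sed command against /etc/lvm/lvm.conf as root, keeping a backup copy of the original file.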
17. Add the old partition to the degraded array to complete the mirror.
# mdadm /dev/md1 -a /dev/sda2
18. Monitor the RAID status and wait for the recovery to complete.
# mdadm -D /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Thu Jun 23 15:59:42 2016
Raid Level : raid1
Array Size : 7875520 (7.51 GiB 8.06 GB)
Used Dev Size : 7875520 (7.51 GiB 8.06 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Jun 23 16:06:52 2016
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 1% complete
Name : rhel7-node1.example.com:1 (local to host rhel7-node1.example.com)
UUID : 50171baa:93475839:51ada4d3:b1c34908
Events : 60
Number Major Minor RaidDevice State
2 8 2 0 spare rebuilding /dev/sda2
1 8 18 1 active sync /dev/sdb2
19. Scan the mdadm metadata and write the RAID information to /etc/mdadm.conf.
# mdadm --examine --scan >/etc/mdadm.conf
20. Update /etc/default/grub with the MD device UUIDs.
# mdadm -D /dev/md* | grep UUID
UUID : 50171baa:93475839:51ada4d3:b1c34908
UUID : b77a60f6:0161d823:98d3e032:c206240a
# grep GRUB_CMDLINE_LINUX /etc/default/grub
#GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet"
GRUB_CMDLINE_LINUX="rd.md.uuid=50171baa:93475839:51ada4d3:b1c34908 rd.md.uuid=b77a60f6:0161d823:98d3e032:c206240a rd.lvm.lv=rhel/root crashkernel=auto rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet"
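The rd.md.uuid= arguments can also be generated from the mdadm output rather than typed by hand. The sketch below inlines a captured sample of the UUID lines so the transformation can be seen without a live array; on the real system, pipe `mdadm -D /dev/md* | grep UUID` into the same awk command instead:

```shell
# Hedged sketch: turn `mdadm -D` UUID lines into rd.md.uuid= arguments.
# $sample stands in for real output; $3 is the UUID field after "UUID :".
sample='           UUID : 50171baa:93475839:51ada4d3:b1c34908
           UUID : b77a60f6:0161d823:98d3e032:c206240a'
printf '%s\n' "$sample" | awk '/UUID/ {printf "rd.md.uuid=%s ", $3}'
echo
```

The resulting string is what gets prepended to GRUB_CMDLINE_LINUX in /etc/default/grub.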
21. Regenerate grub.cfg.
# grub2-mkconfig -o /boot/grub2/grub.cfg
22. Verify that both of your disks are listed in /boot/grub2/device.map. Add them if needed.
# cat /boot/grub2/device.map
(hd0) /dev/sda
(hd1) /dev/sdb
23. Reinstall GRUB on both disks.
Note: If the command below fails with an error such as "cannot find /dev/md0 in /dev/sd* device", recreate the boot array with --metadata=0.9.
# grub2-install /dev/sda
# grub2-install /dev/sdb
24. Rebuild the initramfs image with the mdadm configuration included.
It is recommended that you make a backup copy of the initramfs in case the new version has an unexpected problem:
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
# dracut -f --mdadmconf
25. Reboot the machine and verify that everything is correctly using the new software RAID devices.
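After the reboot, each array in /proc/mdstat should show [UU], meaning both mirror members are active. The sketch below inlines a sample of healthy output for illustration; on the live system, run the grep against /proc/mdstat itself:

```shell
# Hedged sketch: a healthy RAID1 member line shows [2/2] [UU].
# $sample stands in for real /proc/mdstat content.
sample='md0 : active raid1 sda1[2] sdb1[1]
      511936 blocks [2/2] [UU]'
printf '%s\n' "$sample" | grep -c '\[UU\]'
# On the real system:
#   grep -c '\[UU\]' /proc/mdstat    # expect one match per array
#   findmnt /boot                    # should show /dev/md0 mounted
```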
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.