How do I upgrade from RHEL 5 with a GFS or GFS2 file system to RHEL 6 or RHEL 7?

Solution Unverified - Updated

Environment

  • Red Hat Enterprise Linux Server 5 (with the Clustering and Cluster-Storage)
  • Red Hat Enterprise Linux Server 6 (with the High Availability and Resilient Storage Add Ons)
  • Red Hat Enterprise Linux Server 7 (with the High Availability and Resilient Storage Add Ons)
  • A Global File System (GFS) or Global File System 2 (GFS2) file system

Issue

  • We have a RHEL 5 based cluster with a GFS filesystem and we want to upgrade to RHEL 6/RHEL 7 and use GFS2. How do we do that?
  • Can we use data from a GFS filesystem on a RHEL 5 cluster on a new RHEL 6/RHEL 7 cluster?
  • Can a GFS2 filesystem from a RHEL 5 cluster be used on a RHEL 6/RHEL 7 cluster?

Resolution

Please note that upgrading in place from RHEL 5 to RHEL 7 is not supported; a fresh install of RHEL 7 will be required. Also note the following:

NOTE: There is no way to determine in advance how long a file system check with fsck.gfs or fsck.gfs2 will take to complete.
NOTE: Before installing RHEL 6/RHEL 7, it is recommended that a backup or SAN hardware snapshot (preferred method) be taken of the GFS file system. For more information on backups or hardware snapshots of a GFS or GFS2 file system, see the following article: What are some best practices when running a backup of a GFS2 file system in a RHEL Resilient Storage cluster?
NOTE: All nodes in the cluster must have the same operating system version installed. Mixed clusters are not supported.
NOTE: The RHEL anaconda installer can sometimes re-format SAN LUNs as part of the OS install process, thereby destroying any existing GFS/GFS2 file systems. You may want to either disconnect the HBA or use the array's configuration GUI to disable access to the LUNs during installation.

Existing GFS2 file system


The GFS2 file system is compatible with all releases of RHEL 5, RHEL 6, and RHEL 7. If you update your cluster nodes to RHEL 6 (from RHEL 5) or to RHEL 7 (from RHEL 5/6), you should be able to use the GFS2 file system without modification.
Procedure
  • Based on your target RHEL version, follow the procedure to perform an upgrade or a fresh install of the cluster nodes to RHEL 6 or RHEL 7.
  • Make sure the logical volume/block device containing the GFS2 file system is accessible from the newly installed nodes. You might need to install the clustering bits, configure them, and bring up the cluster in order to access the logical volume/device.
  • Update gfs2-utils to the latest version.
  • Ensure that the gfs2 file system is unmounted on all cluster nodes.
  • Run fsck.gfs2 on the file system from one node.
  • After it completes successfully, verify that the GFS2 file system can be mounted.
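The steps above can be sketched as shell commands. This is a dry-run sketch: the `run` helper only prints each command, because the real commands act on a shared block device. The device path `/dev/vg_cluster/lv_gfs2` and mount point `/mnt/gfs2` are placeholders, not names from this article; substitute your own.

```shell
# Dry-run sketch: 'run' prints each command rather than executing it.
run() { echo "+ $*"; }

# Placeholder device and mount point -- substitute your own.
dev=/dev/vg_cluster/lv_gfs2
mnt=/mnt/gfs2

run yum update gfs2-utils         # update the tools first
run umount "$mnt"                 # repeat on every cluster node
run fsck.gfs2 -y "$dev"           # file system check, from one node only
run mount -t gfs2 "$dev" "$mnt"   # verify the file system still mounts
```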

RHEL 5 with existing GFS file system


If you are upgrading from RHEL 5 with a GFS file system to RHEL 6 or RHEL 7, you will need to migrate to a GFS2 file system, because the GFS file system has been deprecated in RHEL 6 and above. **RHEL 6 and RHEL 7 cannot mount a GFS (or gfs1) filesystem.** The `fsck.gfs2`, `mkfs.gfs2`, and `gfs2_convert` tools that are part of `gfs2-utils` in RHEL 7 are capable of checking file systems, creating new file systems, and converting GFS file systems to GFS2, respectively.

There are three ways to migrate to GFS2 and perform the operating system upgrade:

  • Data copy of GFS file system to a new GFS2 file system (preferred method).
  • Data copy of GFS file system to a new GFS2 file system using a staging file system.
  • In-place conversion of the GFS file system to a GFS2 file system after installing RHEL 6/RHEL 7, using gfs2_convert.

Please carefully review the methods below to evaluate which one is best for your environment.


Data copy of GFS file system to a new GFS2 file system


This procedure copies the data from a RHEL 5 GFS filesystem to a newly created RHEL 5 GFS2 filesystem before installing RHEL 6 or RHEL 7.
Considerations:
  • This method requires a new logical volume/device large enough to hold all the current data in the GFS file system. A logical volume/device of the same size or larger than the GFS file system should work well.
  • Aging file systems develop fragmentation over time. Copying over data to a new file system eliminates this fragmentation, resulting in better performance.
  • Data copy can be slow, given that the data is physically copied from one device to another.
  • This procedure is performed entirely on RHEL 5 before installing RHEL 6/RHEL 7.

Procedure
  • Update gfs-utils and gfs2-utils to the latest version.
  • Unmount the GFS file system on all cluster nodes.
  • Run fsck.gfs on the file system.
  • Run mkfs.gfs2 on the target volume/device to create a new GFS2 file system.
  • Mount both the GFS (/mnt/old_gfs1) and GFS2 (/mnt/new_gfs2) file systems.
  • Copy the data using a copy utility like rsync: rsync -av /mnt/old_gfs1/* /mnt/new_gfs2/
  • After a successful copy, unmount both file systems.
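The data-copy procedure can be sketched as a dry-run: the `run` helper prints each command instead of executing it, since these commands would act on (and in the mkfs case destroy) real devices. The cluster name `mycluster`, journal count `-j 2`, and device paths are assumptions for illustration.

```shell
run() { echo "+ $*"; }   # dry-run helper: prints instead of executing

# Placeholder devices; the target must be at least as large as the GFS volume.
old=/dev/vg_cluster/lv_old_gfs
new=/dev/vg_cluster/lv_new_gfs2

run fsck.gfs -y "$old"                                # check the source GFS file system
run mkfs.gfs2 -p lock_dlm -t mycluster:new_gfs2 -j 2 "$new"
run mount -t gfs "$old" /mnt/old_gfs1
run mount -t gfs2 "$new" /mnt/new_gfs2
run rsync -av /mnt/old_gfs1/ /mnt/new_gfs2/           # trailing slash also copies dotfiles
run umount /mnt/old_gfs1 /mnt/new_gfs2
```

With mkfs.gfs2, `-t` takes `clustername:fsname` and `-j` sets the journal count, typically one journal per cluster node.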

Follow the procedure to perform a fresh RHEL 6 or RHEL 7 install on the cluster nodes and verify that the target GFS2 file system can be mounted. You might need to install clustering bits, configure them and bring up the cluster before being able to mount GFS2.


Data copy of GFS file system to a new GFS2 file system using a staging file system


This procedure copies the data from a RHEL 5 GFS filesystem to another RHEL 5 non-clustered staging file system (like ext3). After installing RHEL 6/RHEL 7 and creating a new GFS2 file system, the data is copied over from the non-clustered staging file system.
Considerations:
  • This method requires a new file system (ext3/4, or GFS2), preferably created on a separate logical volume/device large enough to hold all the current data in the GFS file system.
  • In addition to eliminating the performance-degrading fragmentation from the aging GFS file system, copying the data into the target GFS2 file system in RHEL 6/RHEL 7 has the advantage of using the newer GFS2's Orlov allocator that is not available in RHEL 5. The Orlov allocator provides better locality for related files and enhanced performance when resource groups are under contention.
  • Data is copied twice: once from GFS to the staging file system, and again from the staging file system to GFS2. Depending on how much data there is to copy, this method might be prohibitively slow.

Procedure
  • Update gfs-utils to the latest version.
  • Unmount the GFS file system on all cluster nodes.

On one of the RHEL 5 nodes, do the following:

  • Run fsck.gfs on the file system.
  • Create a new staging file system (ext3/4, or GFS2) on a separate logical volume/device that is large enough to hold all the current data in the GFS file system.
  • Mount both the GFS (/mnt/old_gfs1) and the staging (/mnt/staging) file systems.
  • Copy the data using a copy utility like rsync: rsync -av /mnt/old_gfs1/* /mnt/staging/
  • After a successful copy, unmount both file systems.
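The copy step itself can be tried safely with temporary directories standing in for the two mount points (`/mnt/old_gfs1` and `/mnt/staging`); everything here except the rsync invocation is scaffolding for the demonstration.

```shell
# Temporary directories stand in for the GFS and staging mount points.
old=$(mktemp -d)      # stands in for /mnt/old_gfs1
staging=$(mktemp -d)  # stands in for /mnt/staging
echo "payload" > "$old/file.txt"

# -a preserves permissions, ownership, and timestamps; a trailing slash on
# the source also copies hidden files, which "/mnt/old_gfs1/*" would miss.
rsync -av "$old"/ "$staging"/
```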

Follow the procedure to perform a fresh RHEL 6 or RHEL 7 install on all the cluster nodes.

  • Install the clustering bits, configure them, and bring up the cluster.

On one of the (now upgraded) nodes, do the following:

  • Run mkfs.gfs2 on the target logical volume/device to create a new GFS2 file system. Note that if the target device is the same device containing the GFS file system, the GFS file system and its data will be completely lost after this step.
  • Mount both the staging (/mnt/staging) and GFS2 (/mnt/new_gfs2) file systems.
  • Copy the data using a copy utility like rsync: rsync -av /mnt/staging/* /mnt/new_gfs2/
  • After a successful copy, unmount both file systems.
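On the upgraded cluster, this second copy phase can be sketched as a dry-run (`run` prints rather than executes; the device paths, `mycluster` name, and journal count are placeholders for illustration):

```shell
run() { echo "+ $*"; }   # dry-run helper: prints instead of executing

run mkfs.gfs2 -p lock_dlm -t mycluster:new_gfs2 -j 2 /dev/vg_cluster/lv_target
run mount /dev/vg_cluster/lv_staging /mnt/staging
run mount -t gfs2 /dev/vg_cluster/lv_target /mnt/new_gfs2
run rsync -av /mnt/staging/ /mnt/new_gfs2/   # second copy: staging -> GFS2
run umount /mnt/staging /mnt/new_gfs2
```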

In-place file system conversion of GFS filesystem to GFS2 filesystem


This solution will use the binaries `fsck.gfs2` and `gfs2_convert` from RHEL 6/RHEL 7 to do the file system check and file system conversion.
Considerations:
  • Before converting the GFS file system, it is strongly recommended that you back up the GFS file system, since the conversion process is irreversible and any errors encountered during the conversion can result in the abrupt termination of the program and consequently an unusable file system.
  • If the conversion from GFS to GFS2 is interrupted by a power failure or any other issue, restart the conversion tool. Do not attempt to execute the fsck.gfs2 command on the file system until the conversion is complete.
  • When converting full or nearly full file systems, it is possible that there will not be enough space available to fit all the GFS2 file system data structures. In such cases, the size of all the journals is reduced uniformly such that everything fits in the available space.
  • GFS2 file systems do not provide support for Context-Dependent Path Names (CDPNs). The gfs2_convert command identifies CDPNs and replaces them with empty directories with the same name. To achieve the same functionality as CDPNs in GFS2 file systems, you can use the bind option of the mount command. Refer to the section Conversion of Context-dependent Path Names for more information.

If you have the capability to back up the GFS file system, you might want to consider using one of the alternate "Data copy" methods above.

Procedure
  • Unmount the GFS file system on all cluster nodes.
  • Follow the procedure to perform a fresh RHEL 6 or RHEL 7 install on all the cluster nodes.
  • Update gfs2-utils to the latest version.
  • Run fsck.gfs2 on the GFS file system. (Newer versions of fsck.gfs2 are capable of checking older GFS file systems as well as GFS2.)
  • Upon successful completion of fsck.gfs2, run gfs2_convert on the file system.
  • After a successful conversion, we recommend running fsck.gfs2 on the file system again.
  • Install the clustering bits, configure them, and bring up the cluster.
  • Verify that the converted GFS2 file system can now be mounted.
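The conversion steps can be sketched as a dry-run; `run` only prints each command, and `/dev/vg_cluster/lv_gfs` is a placeholder device. Remember that gfs2_convert is irreversible, so the backup caveat above applies.

```shell
run() { echo "+ $*"; }   # dry-run helper: prints instead of executing

run fsck.gfs2 -y /dev/vg_cluster/lv_gfs     # newer fsck.gfs2 also checks GFS
run gfs2_convert -y /dev/vg_cluster/lv_gfs  # irreversible GFS -> GFS2 conversion
run fsck.gfs2 -y /dev/vg_cluster/lv_gfs     # re-check after conversion
run mount -t gfs2 /dev/vg_cluster/lv_gfs /mnt/new_gfs2
```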

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.