Procedure to migrate Red Hat Ceph Storage 1.3.3 from Ubuntu to Red Hat Enterprise Linux 7.3

Solution Verified

Environment

  • Red Hat Ceph Storage 1.3.3
  • Red Hat Enterprise Linux 7.3
  • Ubuntu 12.04 and 14.04

Issue

  • Procedure to migrate Red Hat Ceph Storage 1.3.3 from Ubuntu to Red Hat Enterprise Linux 7.3.

Resolution

Important Note: Do not use this article without first discussing it with Red Hat Support; please open a support case. This process is generally carried out by the Red Hat Consulting team, who approach it as a project and study the system before implementing any changes.

This article covers the migration of a Ceph cluster from Ubuntu to RHEL 7.3. The cluster may be running either upstream Ceph or downstream Red Hat Ceph Storage packages.


Migrating Monitor nodes

This procedure should be performed for each MON node in the Ceph cluster, but for only one MON node at a time. Ensure the current MON has returned to normal operation PRIOR to proceeding to the next node, to prevent data access issues.
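As a between-node sanity check, the number of monitors in quorum can be extracted from `ceph quorum_status --format json`. The snippet below is a hedged sketch that parses a sample JSON string; on a live node the JSON would come from the command itself, and the monitor names shown are illustrative.

```shell
# Hedged sketch: count monitors in quorum from `ceph quorum_status` JSON.
# The sample string below is illustrative output from a healthy 3-MON cluster;
# on a live node replace it with: sudo ceph quorum_status --format json
sample='{"quorum":[0,1,2],"quorum_names":["mon1","mon2","mon3"]}'
in_quorum=$(echo "$sample" \
    | sed -n 's/.*"quorum":\[\([^]]*\)\].*/\1/p' \
    | tr ',' '\n' | grep -c .)
echo "monitors in quorum: $in_quorum"
```

All monitors should be back in quorum before the next MON node is destroyed.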


  • From the Admin node, remove the monitor from the existing cluster:
   $ ceph-deploy mon destroy <hostname>

  • Verify the monitor is removed using:
   $ sudo ceph -s 
   $ sudo ceph mon dump

NOTE: Make sure the IP/Hostname of the node does not change after the RHEL 7.3 installation.

  • Reboot the node and install Red Hat Enterprise Linux 7.3.
  • Configure iptables to allow connections to the required ports.
  • Disable SELinux.
  • The new RHEL 7.3 server should be synced to the same time server as the rest of the cluster.
  • After installation, create the ceph user if the earlier installation was not done with the root user.
  • Copy ceph.conf and ceph.client.admin.keyring from any of the existing nodes into the /etc/ceph/ directory on the migrating node.
  • Install the ceph-mon package. If the original Ceph cluster was using upstream packages, configure the upstream repositories for RHEL; if it was using Red Hat Ceph Storage, subscribe to the Red Hat Ceph Storage repositories for ceph-mon. All other third-party repositories must be disabled to avoid conflicts.
   $ sudo yum install ceph-mon
  • From the Admin node:
   $ ceph-deploy mon add <hostname>
  • Verify the monitor is added using:
   $ sudo ceph -s 
   $ sudo ceph mon dump
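The iptables step above can be sketched as follows. This is a hedged example, not a complete ruleset: 6789/tcp is the default Ceph MON port, and the 192.168.0.0/24 subnet is a placeholder for your cluster's public network; adjust both to match your ceph.conf.

```shell
# Hedged sketch of the firewall step, assuming the default MON port (6789/tcp).
# Replace 192.168.0.0/24 with your cluster's public network.
sudo iptables -I INPUT -p tcp -s 192.168.0.0/24 --dport 6789 -j ACCEPT
# Persist the rule across reboots (requires the iptables-services package on RHEL 7):
sudo service iptables save
```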

For more information on MON installation, refer to the Red Hat Ceph Storage 1.3 Installation Guide.

  • Wait for the cluster state to become healthy and follow the same procedure to migrate the other MON nodes, one by one.
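The "wait for the cluster state to become healthy" step can be scripted as a small polling loop. This is a hedged sketch: the 10-second interval and 60-try limit are arbitrary illustrative choices, not values from the guide.

```shell
# Hedged sketch: poll `ceph health` until the cluster reports HEALTH_OK.
# The interval and retry limit are illustrative, not prescribed values.
wait_for_health() {
    local tries=0
    until sudo ceph health | grep -q HEALTH_OK; do
        tries=$((tries + 1))
        [ "$tries" -ge 60 ] && return 1   # give up after ~10 minutes
        sleep 10
    done
    return 0
}
```

Call `wait_for_health` after each MON is re-added, and only proceed to the next node when it returns success.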

Migrating OSD nodes


**This procedure should be performed for each OSD node in the Ceph cluster, but typically for only one OSD node at a time (at most one failure domain's worth of OSD nodes may be migrated in parallel; e.g., if per-rack replication is in use, one entire rack's OSD nodes can be migrated in parallel). Ensure the current OSD node's OSDs have returned to normal operation and the cluster's PGs are 100% active+clean PRIOR to proceeding to the next node, to prevent data access issues.**
  • First set the noout flag, to prevent OSDs from being marked out while the node is down:
    $ sudo ceph osd set noout
  • Then set the nobackfill, norecover, norebalance (available from Hammer onward), noscrub and nodeep-scrub flags to avoid unnecessary load on the cluster and to prevent data migration while this node is down for migration:
    $ sudo ceph osd set nobackfill
    $ sudo ceph osd set norecover
    $ sudo ceph osd set norebalance
    $ sudo ceph osd set noscrub
    $ sudo ceph osd set nodeep-scrub
  • Gracefully shut down all the OSD processes on the node:
   $ sudo stop ceph-osd-all
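Before rebooting into the installer, it is worth confirming that the stop command actually terminated every OSD daemon. A hedged sketch:

```shell
# Hedged sketch: verify no ceph-osd daemon survived `stop ceph-osd-all`
# before rebooting the node into the RHEL installer.
if pgrep -x ceph-osd > /dev/null; then
    echo "ceph-osd still running; stop it before reinstalling" >&2
else
    echo "all OSD processes stopped"
fi
```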

NOTE1: Leave the OSD disk drives untouched during the RHEL 7.3 installation.
NOTE2: Make sure the IP/Hostname of the node does not change after the RHEL 7.3 installation.

  • Reboot the node and install Red Hat Enterprise Linux 7.3.

  • Configure the iptables rules to allow connections to the required ports.

  • Disable SELinux.

  • The new RHEL 7.3 server should be synced to the same time server as the rest of the cluster.

  • After installation, create the ceph user if the earlier installation was not done with the root user.

  • Copy ceph.conf from any of the existing nodes into the /etc/ceph/ directory on the migrating node.

  • Install the ceph-osd package. If the original Ceph cluster was using upstream packages, configure the upstream repositories for RHEL; if it was using Red Hat Ceph Storage, subscribe to the Red Hat Ceph Storage repositories for the OSD. All other third-party repositories must be disabled to avoid conflicts.

   $ sudo yum install ceph-osd
  • Refresh the partition table:
   $ sudo partprobe
  • Wait a few moments, then check that the OSD partitions are mounted and the ceph-osd processes are running. Also verify the following outputs:
   $ sudo ceph -s
   $ sudo ceph osd tree
  • Once all the OSDs on this node are up and running, unset the noout flag:
   $ sudo ceph osd unset noout
  • Unset the nobackfill, norecover, norebalance (available from Hammer onward), noscrub and nodeep-scrub flags:
   $ sudo ceph osd unset nobackfill
   $ sudo ceph osd unset norecover
   $ sudo ceph osd unset norebalance
   $ sudo ceph osd unset noscrub
   $ sudo ceph osd unset nodeep-scrub
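The mounted-partition and process checks above can be combined into one quick summary. This is a hedged sketch: /var/lib/ceph/osd is the default OSD data mount location, so adjust the path if your cluster uses a custom layout.

```shell
# Hedged sketch: summarize mounted OSD data partitions and running daemons.
# /var/lib/ceph/osd/<cluster>-<id> is the default OSD mount location.
mounted=$(mount | grep -c '/var/lib/ceph/osd' || true)
running=$(pgrep -cx ceph-osd || true)
echo "mounted OSD partitions: $mounted, running ceph-osd daemons: $running"
```

Both counts should equal the number of OSDs hosted on the node before the flags are unset.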

For more information on OSD installation, refer to the Red Hat Ceph Storage 1.3 Installation Guide.

  • Wait for the cluster state to become healthy, then follow the same procedure to migrate the other OSD nodes, one by one.

If an upstream Ceph cluster was migrated to RHEL, use Migrating from upstream Ceph versions to RHCS 1.3.3 on Red Hat Enterprise Linux 7.3 to convert the hosts to Red Hat Ceph Storage 1.3.3 after completing this procedure.

Red Hat Ceph Storage 1.2.3 clients compatibility with Red Hat Ceph Storage 1.3.3 cluster

  • Compatibility should not be a problem unless you modify the CRUSH map, for example by adding a new node or changing the CRUSH hierarchy.
  • NOTE: Do not perform any CRUSH-related modification until the full cluster, and all clients, have been migrated to Red Hat Ceph Storage 1.3.3 and Red Hat Enterprise Linux 7.3.
  • Once the cluster and clients are migrated to Red Hat Ceph Storage 1.3.3, CRUSH modifications can be performed.
