Migrating from upstream Ceph versions to RHCS 1.3.3 on Red Hat Enterprise Linux 7.3

Solution Verified

Environment

  • Red Hat Ceph Storage 1.3.3
  • Upstream Ceph Firefly.

Issue

  • Migrating from upstream Ceph versions to RHCS 1.3.3 on Red Hat Enterprise Linux 7.3

Resolution

  • If a Ceph cluster running an upstream Ceph version on RHEL 7.3 must be upgraded to Red Hat Ceph Storage (RHCS) v1.3.3, use the following approach:

  • Step 1: Upgrade your Ceph cluster to upstream Firefly

     If the current Ceph cluster runs a version older than Firefly, such as Dumpling, follow the [upstream upgrade notes](http://docs.ceph.com/docs/master/install/upgrading-ceph/#dumpling-to-firefly) to upgrade the cluster to Firefly.
    
  • Step 2: Upgrade your Ceph cluster from upstream Firefly to RHCS v1.3.3
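Before starting Step 1, it helps to confirm which series each node actually runs; Firefly is the 0.80.x series and Dumpling is 0.67.x. A minimal sketch (only `ceph --version` itself is standard; the parsing is an assumption):

```shell
# Extract the "major.minor" series from `ceph --version` output, e.g.
# "ceph version 0.80.11 (8424...)" -> "0.80"
version_series() {
    echo "$1" | sed -n 's/^ceph version \([0-9][0-9]*\.[0-9][0-9]*\)\..*/\1/p'
}

# On a live node (assumes ceph is installed):
#   series=$(version_series "$(ceph --version)")
```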

Prerequisites

  • Disable all third-party repositories, including EPEL, before installing or updating to RHCS v1.3.3 to avoid package conflicts.

  • Red Hat® Ceph Storage for RHEL runs on RHEL 7.3 Server. For nodes that can connect to the internet, register each node with subscription-manager, attach the Ceph pools, and enable the rhel-7-server-rpms repository, as described below.

  • Register each node with the Subscription Management service. Run the following command and enter your Red Hat Customer Portal user name and password to register each node with Red Hat Subscription Manager:

    $sudo subscription-manager register

Note: Ceph relies on packages in the RHEL 7.3 Base content set. If you are installing your cluster from ISO images without an internet connection, ensure that your nodes can access the full RHEL 7.3 Base content set in some way to resolve dependencies during installation. One way is to use Red Hat Satellite in your environment; another is to mount a local RHEL 7.3 Server ISO and point your Ceph cluster nodes to it. Contact support for additional details.
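As one illustrative way to satisfy the Base content requirement offline, a mounted RHEL 7.3 Server ISO can be exposed as a local yum repository. The ISO path, mount point, and repo file name below are assumptions for this sketch:

```shell
# Hypothetical locations for this example only.
ISO=/root/rhel-server-7.3-x86_64-dvd.iso
MNT=/media/rhel73
REPO_FILE=/etc/yum.repos.d/rhel73-local.repo

# Mount the ISO read-only (run as root):
#   mkdir -p "$MNT"
#   mount -o loop,ro "$ISO" "$MNT"

write_local_repo() {
    # Write a minimal repo definition pointing at mount point $1 into
    # repo file $2. The gpgkey path is an assumption about the ISO layout.
    cat > "$2" <<EOF
[rhel73-local]
name=RHEL 7.3 Server (local ISO)
baseurl=file://$1
enabled=1
gpgcheck=1
gpgkey=file://$1/RPM-GPG-KEY-redhat-release
EOF
}

# Usage (as root): write_local_repo "$MNT" "$REPO_FILE"
```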

  • Red Hat Ceph Storage ships with two Stock Keeping Units (SKUs).

    i. Red Hat Ceph Storage for Management Nodes: The repositories for this SKU provide access to the installer, Calamari and Ceph monitors. You may use this SKU on up to six physical nodes.
    ii. Red Hat Ceph Storage: The repository for this SKU provides access to OSDs. You will need one SKU for each node containing Ceph OSDs.

Note: For ISO-based installations without access to the internet, you do not need to attach pools in the following steps. However, the Red Hat Enterprise Linux Server Base repository must be enabled.

  • For nodes that can connect to the internet, pull the latest subscription data from the server on each node.
   $sudo subscription-manager refresh
  • Then, list the available subscriptions.
   $sudo subscription-manager list --available

On Calamari node and Ceph monitor nodes, attach the pool ID for "Red Hat Ceph Storage for Management Nodes". On your OSD nodes, attach the pool ID for "Red Hat Ceph Storage".

   $sudo subscription-manager attach --pool=<pool-id>
  • Enable the Red Hat Enterprise Linux Server Base repository.
   $sudo subscription-manager repos --enable=rhel-7-server-rpms
  • Update to the latest RHEL 7.3 Server packages.
   $sudo yum update
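The `subscription-manager attach --pool` step above needs the pool ID for each SKU. A sketch that pulls it out of the `subscription-manager list --available` output (the field labels "Subscription Name:" and "Pool ID:" match typical output, but this parsing is an assumption):

```shell
# Extract the Pool ID for a given subscription name.
pool_id_for() {
    # $1: subscription name; stdin: `subscription-manager list --available`
    awk -v name="$1" '
        /^Subscription Name:/ { sub(/^Subscription Name:[ \t]*/, ""); current = $0 }
        /^Pool ID:/ && current == name { print $NF; exit }
    '
}

# Usage (on a registered node):
#   pool=$(subscription-manager list --available | pool_id_for "Red Hat Ceph Storage")
#   sudo subscription-manager attach --pool="$pool"
```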
  • To upgrade Ceph with a CDN-based installation, we recommend upgrading in the following sequence:

    i. Admin Node

    ii. Monitor Nodes (one at a time)

    iii. OSD Nodes (one at a time, preferably within a CRUSH hierarchy)

Calamari/Admin Node

  • To upgrade or install the Calamari/administration node, remove the ceph repository file under the /etc/yum.repos.d directory, enable the Ceph Storage v1.3.3 repositories, update the node, re-install ceph-deploy and re-initialize Calamari.
   $cd /etc/yum.repos.d/
   $sudo rm -rf ceph.repo
   $sudo subscription-manager repos --enable=rhel-7-server-rhceph-1.3-calamari-rpms --enable=rhel-7-server-rhceph-1.3-installer-rpms --enable=rhel-7-server-rhceph-1.3-tools-rpms
   $sudo yum update
   $sudo yum install ceph-deploy  calamari-server calamari-clients
   $mkdir ~/ceph-config
   $cd ~/ceph-config
   $sudo calamari-ctl initialize

Monitor Nodes

  • To upgrade a monitor node, log in to the node, remove the ceph repository file under the /etc/yum.repos.d directory, enable the Ceph Storage v1.3.3 repositories, update the node and stop the Ceph monitor daemon. Then, re-install the Ceph monitor daemon from the admin node. Finally, restart the monitor daemon.

i. On the monitor node, execute:

   $cd /etc/yum.repos.d/
   $sudo rm -rf ceph.repo
   $sudo subscription-manager repos --enable=rhel-7-server-rhceph-1.3-mon-rpms
   $sudo yum update

ii. From the admin node, execute:

    $ceph-deploy install --mon <ceph-node>[<ceph-node> ...]

iii. From the monitor node, restart the Ceph Monitor daemon:

    $sudo /etc/init.d/ceph [options] restart mon.[id]

iv. Upgrade each monitor one at a time, and allow each monitor to come up and in, rejoining the monitor quorum, before proceeding to upgrade the next monitor.
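Step iv's wait can be scripted. `ceph quorum_status` is a standard command; the grep over its JSON output and the retry loop below are assumptions of this sketch:

```shell
# Poll until a restarted monitor shows up in the quorum again.
wait_for_mon_quorum() {
    # $1: monitor id, $2: max attempts (default 30, checked 10s apart)
    mon_id=$1
    tries=${2:-30}
    while [ "$tries" -gt 0 ]; do
        # Crude check: the monitor's quoted name appears in the JSON
        # (e.g. in "quorum_names").
        if ceph quorum_status 2>/dev/null | grep -q "\"$mon_id\""; then
            echo "mon.$mon_id is in quorum"
            return 0
        fi
        sleep 10
        tries=$((tries - 1))
    done
    echo "mon.$mon_id did not rejoin quorum" >&2
    return 1
}

# Usage: wait_for_mon_quorum node1 && echo "safe to continue"
```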

OSD Nodes

  • First, set the noout flag to prevent OSDs from being marked out while their daemons are down for the upgrade.
   $ sudo ceph osd set noout
  • Then set the nobackfill, norecover, norebalance (available from Hammer onward), noscrub and nodeep-scrub flags to avoid unnecessary load on the cluster and to prevent data migration while OSDs are restarted:
$ ceph osd set nobackfill
$ ceph osd set norecover
$ ceph osd set norebalance
$ ceph osd set noscrub
$ ceph osd set nodeep-scrub
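The five flags above (and their later unset) can be driven from one loop instead of five commands. This is just a convenience sketch; `CEPH` is a hypothetical indirection so the command prefix (e.g. `sudo ceph`) can be adjusted:

```shell
# Set or unset the usual maintenance flags in one loop.
CEPH=${CEPH:-ceph}
MAINT_FLAGS="nobackfill norecover norebalance noscrub nodeep-scrub"

set_maint_flags() {
    for flag in $MAINT_FLAGS; do
        $CEPH osd set "$flag"
    done
}

unset_maint_flags() {
    for flag in $MAINT_FLAGS; do
        $CEPH osd unset "$flag"
    done
}
```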
  • To upgrade a Ceph OSD node, log in to the node, remove the ceph repository file under the /etc/yum.repos.d directory, enable the Ceph Storage v1.3.3 repositories, update the node and stop the OSD daemons. From the admin node, re-install the Ceph OSD daemon. Finally, restart the OSDs.

i. On the OSD node, execute:

    $cd /etc/yum.repos.d/
    $sudo rm -rf ceph.repo
    $sudo subscription-manager repos --enable=rhel-7-server-rhceph-1.3-osd-rpms
    $sudo yum update

ii. From the admin node, execute:

   $ceph-deploy install --osd <ceph-node>[<ceph-node> ...]

iii. From the OSD node, restart the Ceph OSD daemons:

    $sudo /etc/init.d/ceph restart osd

iv. Upgrade each OSD node one at a time (preferably within a CRUSH hierarchy), and allow the OSDs to come up and in, and the cluster to reach an active+clean state, before proceeding to upgrade the next OSD node.
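The wait for an active+clean cluster in step iv can also be scripted. `ceph pg stat` is a standard command; the expected output shape ("vNNN: 192 pgs: 192 active+clean; ...") and the retry loop are assumptions of this sketch:

```shell
# Succeed only when every placement group is active+clean.
all_pgs_active_clean() {
    ceph pg stat 2>/dev/null | awk '
        {
            # Find "<total> pgs: <count> <state>" and require that the
            # active+clean count equals the total PG count.
            for (i = 2; i < NF; i++)
                if ($i == "pgs:" && $(i - 1) == $(i + 1) &&
                    $(i + 2) ~ /^active\+clean[;,]?$/)
                    ok = 1
        }
        END { exit ok ? 0 : 1 }'
}

wait_for_active_clean() {
    tries=${1:-60}
    while [ "$tries" -gt 0 ]; do
        if all_pgs_active_clean; then
            echo "all PGs active+clean"
            return 0
        fi
        sleep 10
        tries=$((tries - 1))
    done
    echo "cluster did not reach active+clean" >&2
    return 1
}
```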

  • Once all the OSDs are up and running, unset the noout flag
   $sudo ceph osd unset noout
  • Unset the nobackfill, norecover, norebalance (available from Hammer onward), noscrub and nodeep-scrub flags.
$ ceph osd unset nobackfill
$ ceph osd unset norecover
$ ceph osd unset norebalance
$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub

Add Nodes to Calamari Server

  • From the Calamari/Admin node, execute the following:
   $ceph-deploy calamari connect --master <FQDN of Calamari node> [node1,node2.......]
   $sudo salt '*' state.highstate
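Before or after running highstate, it can help to confirm that every node's salt-minion answers the Calamari master. `salt '*' test.ping` is a standard Salt command; the `--out=txt` output format ("minion: True") and this parsing of it are assumptions of the sketch:

```shell
# Print the names of minions that did not answer True to test.ping.
unresponsive_minions() {
    # stdin: output of `sudo salt '*' test.ping --out=txt`
    awk -F': ' '$2 != "True" { print $1 }'
}

# Usage:
#   sudo salt '*' test.ping --out=txt | unresponsive_minions
```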

To upgrade Ceph with an ISO-based installation, you must upgrade in the following sequence:

i. Admin Node (must be done first to support upgrading other daemons)

ii. Monitor Nodes (one at a time)

iii. OSD Nodes (one at a time, preferably within a CRUSH hierarchy)

Calamari/Admin Node

  • For ISO-based upgrades, remove the ceph repository file under the /etc/yum.repos.d directory, download and mount the latest Ceph ISO, run ice_setup and re-initialize Calamari.

  • Remove the ceph repository file under /etc/yum.repos.d.
   $cd /etc/yum.repos.d
   $sudo rm -rf ceph.repo
  • Mount the RHCS v1.3.3 ISO.
   $sudo mount <path_to_iso>/rhceph-1.3.3-rhel-7-x86_64-rh.iso /mnt
  • Install the setup program.
    $sudo yum install /mnt/Installer/ice_setup-*.rpm
  • Change to your working directory. For example:
   $mkdir  ~/ceph-config
   $cd ~/ceph-config
  • Run ice_setup.
    $sudo ice_setup -d /mnt
  • The ice_setup program installs upgraded versions of ceph-deploy and the Calamari server, creates new local repositories, and writes a cephdeploy.conf file.

  • Finally, re-install the packages and re-initialize Calamari:

    $sudo yum install ceph-deploy  calamari-server calamari-clients
    $sudo calamari-ctl initialize

Monitor Nodes

  • To upgrade a monitor node, log in to the node, stop the monitor daemon and remove ceph.repo under /etc/yum.repos.d. Then, re-install the Ceph monitor daemon from the admin node. Finally, restart the monitor daemon.

i. On the monitor node, execute:

      $sudo rm /etc/yum.repos.d/ceph.repo

ii. From the admin node, execute:

    $ceph-deploy install --repo --release=ceph-mon <ceph-node>[<ceph-node> ...]
    $ceph-deploy install --mon <ceph-node>[<ceph-node> ...]

iii. From the monitor node, update to the latest packages and restart the Ceph Monitor daemon:

   $sudo yum update
   $sudo /etc/init.d/ceph [options] restart mon.[id]

iv. Upgrade each monitor one at a time, and allow each monitor to come up and in, rejoining the monitor quorum, before proceeding to upgrade the next monitor.

OSD Nodes

  • First, set the noout flag to prevent OSDs from being marked out while their daemons are down for the upgrade.
    $ sudo ceph osd set noout
  • Then set the nobackfill, norecover, norebalance (available from Hammer onward), noscrub and nodeep-scrub flags to avoid unnecessary load on the cluster and to prevent data migration while OSDs are restarted:
$ ceph osd set nobackfill
$ ceph osd set norecover
$ ceph osd set norebalance
$ ceph osd set noscrub
$ ceph osd set nodeep-scrub
  • To upgrade a Ceph OSD node, log in to the node, stop the Ceph OSD daemons and remove ceph.repo under /etc/yum.repos.d. Then, re-install the Ceph OSD daemon from the admin node. Finally, restart the OSD daemons.

i. On the OSD node, execute:

     $sudo rm /etc/yum.repos.d/ceph.repo

ii. From the admin node, execute:

    $ceph-deploy install --repo --release=ceph-osd <ceph-node>[<ceph-node> ...]
    $ceph-deploy install --osd <ceph-node>[<ceph-node> ...]

iii. From the OSD node, update to the latest packages and restart the Ceph OSD daemons:

   $sudo yum update
   $sudo /etc/init.d/ceph [options] restart

iv. Upgrade each OSD node one at a time (preferably within a CRUSH hierarchy), and allow the OSDs to come up and in, and the cluster to reach an active+clean state, before proceeding to upgrade the next OSD node.

  • Once all the OSDs are up and running, unset the noout flag.
   $sudo ceph osd unset noout
  • Unset the nobackfill, norecover, norebalance (available from Hammer onward), noscrub and nodeep-scrub flags.
$ ceph osd unset nobackfill
$ ceph osd unset norecover
$ ceph osd unset norebalance
$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub

Add Nodes to Calamari Server

  • From the Calamari/Admin node, execute the following:
   $ceph-deploy calamari connect --master <FQDN of Calamari node> [node1,node2.......]
   $sudo salt '*' state.highstate
