Updating Red Hat Ceph Storage deployed as a Container Image


This article describes how to upgrade to a newer minor or major version of the Red Hat Ceph Storage container image.
Upgrading Red Hat Ceph Storage Deployed as a Container Using Ansible

Use the Ansible rolling_update.yml playbook to upgrade a cluster.

  1. On the Ansible Administration node, enable the Red Hat Ceph Storage 4 Tools repository and the Ansible repository based on the Red Hat Enterprise Linux version.

   Red Hat Enterprise Linux 7:

   # subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms

   Red Hat Enterprise Linux 8:

   # subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms
  2. Update the ceph-ansible package:
   # dnf update ceph-ansible
  3. Navigate to the /usr/share/ceph-ansible/ directory:
   $ cd /usr/share/ceph-ansible/
  4. In the group_vars/all.yml file, change the ceph_docker_image_tag parameter to point to the newer Ceph container version, for example:
   ceph_docker_image_tag: latest
  5. If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface parameter to the group_vars/all.yml file:
   radosgw_interface: <interface>
  6. Copy rolling_update.yml from the infrastructure-playbooks directory to the current directory:
   # cp infrastructure-playbooks/rolling_update.yml .
  7. Run the playbook:
   $ ansible-playbook rolling_update.yml -e jewel_minor_update=true
  8. On the RBD mirroring daemon node, upgrade the rbd-mirror package manually:
   # yum upgrade rbd-mirror
   Then restart the daemon:
   # systemctl restart ceph-rbd-mirror@<client-id>
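
The edit in step 4 above can be scripted. A minimal sketch, assuming group_vars/all.yml already contains a ceph_docker_image_tag line; the function name set_docker_image_tag is hypothetical, not part of ceph-ansible:

```shell
# Hypothetical helper for step 4: point ceph_docker_image_tag at a new
# container version by editing an all.yml-style file in place with sed.
set_docker_image_tag() {
    local file="$1" tag="$2"
    sed -i "s/^ceph_docker_image_tag:.*/ceph_docker_image_tag: ${tag}/" "$file"
}

# Usage (path and tag are examples):
#   set_docker_image_tag /usr/share/ceph-ansible/group_vars/all.yml latest
```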

Upgrading Red Hat Ceph Storage Deployed as a Container Manually

Red Hat recommends upgrading the Ceph nodes in the following order:

  • Ceph Monitors
  • Ceph OSDs
  • Ceph Object Gateway
  • Ceph Metadata Server

To upgrade a Ceph node to a newer version:

  1. Stop each Ceph daemon service one by one:

    systemctl stop <daemon>@<ID>.service
    

    Replace <daemon> with ceph-mon, ceph-osd, ceph-rgw, or ceph-mds depending on what type of Ceph daemon you are upgrading.

    For Monitor instances, replace <ID> with the host name of the Monitor node, for example:

    # systemctl stop ceph-mon@dhcp47-115.service
    

    For OSD instances, replace <ID> with the device on which the OSD instance is running, for example:

    # systemctl stop ceph-osd@sdb.service
    

    For Ceph Object Gateway instances, replace <ID> with the host name of the Ceph Object Gateway node, for example:

    # systemctl stop ceph-rgw@dhcp47-115.service
    

    For Metadata Server instances, replace <ID> with the host name of the Metadata Server node, for example:

    # systemctl stop ceph-mds@dhcp47-115.service
    
  2. Pull the updated Red Hat Ceph Storage container image:

    docker pull registry.access.redhat.com/rhceph/<image_name>
    

    Specify the image name, for example:

    # docker pull registry.access.redhat.com/rhceph/rhceph-1.3-rhel7
    

    NOTE: You can also use the Container Catalog on the Red Hat Customer Portal. The Catalog lists all container images provided by Red Hat. In addition, it contains information on how to download the images on various platforms.

  3. If you are upgrading to a new container image:

    a) Edit the daemon's systemd service configuration file so that it references the new image. For OSDs, edit the /usr/share/ceph-osd-run.sh file. For all other daemons, edit the appropriate unit file in the /etc/systemd/system/multi-user.target.wants/ directory. Replace

    registry.access.redhat.com/rhceph/<old-image>
    

    with

    registry.access.redhat.com/rhceph/<new-image>
    

    b) Reload the systemd service:

    # systemctl daemon-reload
    
  4. Start each daemon again:

    systemctl start <daemon>@<ID>.service
    

    Replace <daemon> with ceph-mon, ceph-osd, ceph-rgw, or ceph-mds depending on what type of Ceph daemon you are upgrading.

    For Monitor instances, replace <ID> with the host name of the Monitor node, for example:

    # systemctl start ceph-mon@dhcp47-115.service
    

    For OSD instances, replace <ID> with the device on which the OSD instance is running, for example:

    # systemctl start ceph-osd@sdb.service
    

    For Ceph Object Gateway instances, replace <ID> with the host name of the Ceph Object Gateway node, for example:

    # systemctl start ceph-rgw@dhcp47-115.service
    

    For Metadata Server instances, replace <ID> with the host name of the Metadata Server node, for example:

    # systemctl start ceph-mds@dhcp47-115.service
    
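
The stop/start pattern above repeats for each daemon type. A minimal sketch, assuming bash; unit_for is a hypothetical helper, not a system command, that derives the systemd unit name from the daemon type and instance ID:

```shell
# Hypothetical helper: build the systemd unit name for a Ceph daemon.
# The first argument is mon, osd, rgw, or mds; the second is the host
# name (or, for OSDs, the device name), matching the substitutions
# described above.
unit_for() {
    echo "ceph-$1@$2.service"
}

# Sketch of upgrading one Monitor (host name and image are examples):
#   systemctl stop "$(unit_for mon dhcp47-115)"
#   docker pull registry.access.redhat.com/rhceph/rhceph-1.3-rhel7
#   # ...edit the unit file to reference the new image, then:
#   systemctl daemon-reload
#   systemctl start "$(unit_for mon dhcp47-115)"
```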

Before proceeding to upgrade another Ceph node, verify that the cluster health is OK. To do so, perform the following steps from the node with a Ceph Monitor container:

  1. List all running containers:

    # docker ps
    
  2. Verify that the cluster health is OK:

    docker exec <container-name> ceph -s
    

    Replace <container-name> with the name of the Ceph Monitor container found in the first step, for example:

    # docker exec ceph-mon0 ceph -s
    
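
Between nodes, this health check can be looped until the cluster settles. A minimal sketch, where health_ok is a hypothetical helper that inspects the ceph -s output and ceph-mon0 is an example container name:

```shell
# Hypothetical helper: return success if the given `ceph -s` output
# reports HEALTH_OK.
health_ok() {
    printf '%s' "$1" | grep -q 'HEALTH_OK'
}

# Usage sketch: wait before upgrading the next node.
#   until health_ok "$(docker exec ceph-mon0 ceph -s)"; do
#       sleep 10
#   done
```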

NOTE

After upgrading the cluster, ensure the latest version of the ceph-ansible package is installed:

# yum update ceph-ansible

This ensures that you are using the latest version of ceph-ansible when managing the cluster after the upgrade.

See the section Upgrading a Red Hat Ceph Storage cluster for more details.
