Updating Red Hat Ceph Storage deployed as a Container Image
This article describes how to upgrade to a newer minor or major version of the Red Hat Ceph Storage container image.
Upgrading Red Hat Ceph Storage Deployed as a Container Using Ansible
Use the Ansible rolling_update.yml playbook to upgrade a cluster.
- On the Ansible Administration node, enable the Red Hat Ceph Storage 4 Tools repository based on the Red Hat Enterprise Linux version.

Red Hat Enterprise Linux 7:

# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms

Red Hat Enterprise Linux 8:

# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms
- Update the ceph-ansible package:

# dnf update ceph-ansible
- Navigate to the /usr/share/ceph-ansible/ directory:

$ cd /usr/share/ceph-ansible/
- In the group_vars/all.yml file, change the ceph_docker_image_tag parameter to point to the newer Ceph container version, for example:

ceph_docker_image_tag: latest
- If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface parameter to the group_vars/all.yml file:

radosgw_interface: <interface>
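With both edits applied, the relevant part of group_vars/all.yml might look like the following fragment. The interface name eth0 is a hypothetical value used only for illustration; use the interface of your Ceph Object Gateway nodes:

```yaml
# group_vars/all.yml (fragment)
ceph_docker_image_tag: latest   # newer Ceph container version
radosgw_interface: eth0         # only needed when the cluster has Ceph Object Gateway nodes
```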
- Copy rolling_update.yml from the infrastructure-playbooks directory to the current directory:

# cp infrastructure-playbooks/rolling_update.yml .
- Run the playbook:
$ ansible-playbook rolling_update.yml -e jewel_minor_update=true
- On the RBD mirroring daemon node, upgrade rbd-mirror manually:

# yum upgrade rbd-mirror
Restart the daemon:
# systemctl restart ceph-rbd-mirror@<client-id>
Upgrading Red Hat Ceph Storage Deployed as a Container Manually
Red Hat recommends upgrading the Ceph nodes in the following order:
- Ceph Monitors
- Ceph OSDs
- Ceph Object Gateway
- Ceph Metadata Server
To upgrade a Ceph node to a newer version:
- Stop each Ceph daemon service one by one:

# systemctl stop <daemon>@<ID>.service

Replace <daemon> with ceph-mon, ceph-osd, ceph-rgw, or ceph-mds depending on what type of Ceph daemon you are upgrading.

For Monitor instances, replace <ID> with the host name of the Monitor node, for example:

# systemctl stop ceph-mon@dhcp47-115.service

For OSD instances, replace <ID> with the device on which the OSD instance is running, for example:

# systemctl stop ceph-osd@sdb.service

For Ceph Object Gateway instances, replace <ID> with the host name of the Ceph Object Gateway node, for example:

# systemctl stop ceph-rgw@dhcp47-115.service

For Metadata Server instances, replace <ID> with the host name of the Metadata Server node, for example:

# systemctl stop ceph-mds@dhcp47-115.service

- Pull the updated Red Hat Ceph Storage container image:

# docker pull registry.access.redhat.com/rhceph/<image_name>

Specify the image name, for example:

# docker pull registry.access.redhat.com/rhceph/rhceph-1.3-rhel7

NOTE: You can also use the Container Catalog on the Red Hat Customer Portal. The Catalog lists all container images provided by Red Hat and includes information on how to download the images on various platforms.

- If you are upgrading to a new container image:

a) Edit the systemd service configuration file for the OSD or Monitor daemon to configure the daemon to use the new image. For OSDs, edit the /usr/share/ceph-osd-run.sh file. For all other instances, edit the appropriate file in the /etc/systemd/system/multi-user.target.wants/ directory. Replace registry.access.redhat.com/rhceph/<old-image> with registry.access.redhat.com/rhceph/<new-image>.

b) Reload the systemd service:

# systemctl daemon-reload

- Start each daemon again:

# systemctl start <daemon>@<ID>.service

Replace <daemon> with ceph-mon, ceph-osd, ceph-rgw, or ceph-mds depending on what type of Ceph daemon you are upgrading.

For Monitor instances, replace <ID> with the host name of the Monitor node, for example:

# systemctl start ceph-mon@dhcp47-115.service

For OSD instances, replace <ID> with the device on which the OSD instance is running, for example:

# systemctl start ceph-osd@sdb.service

For Ceph Object Gateway instances, replace <ID> with the host name of the Ceph Object Gateway node, for example:

# systemctl start ceph-rgw@dhcp47-115.service

For Metadata Server instances, replace <ID> with the host name of the Metadata Server node, for example:

# systemctl start ceph-mds@dhcp47-115.service
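The per-node sequence above can be sketched as a short shell script. This is an echo-only illustration, not a definitive implementation: DAEMON, ID, and NEW_IMAGE are placeholder values, the run helper is introduced here for illustration, and the image re-pointing step must still be done by hand:

```shell
# Echo-only sketch of the manual per-node upgrade flow
# (stop, pull, re-point the unit file by hand, reload, start).
DAEMON=ceph-mon                    # or ceph-osd, ceph-rgw, ceph-mds
ID=dhcp47-115                      # host name (device name for OSDs)
NEW_IMAGE='registry.access.redhat.com/rhceph/<new-image>'

run() { printf '+ %s\n' "$*"; }    # prints each command; swap the body for "$@" to execute

run systemctl stop "${DAEMON}@${ID}.service"
run docker pull "$NEW_IMAGE"
# ...edit the systemd unit file (or /usr/share/ceph-osd-run.sh for OSDs) here...
run systemctl daemon-reload
run systemctl start "${DAEMON}@${ID}.service"
```

Printing the commands first makes it easy to review the exact unit names before running anything against a live node.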
Before proceeding to upgrade another Ceph node, verify that the cluster health is OK. To do so, perform the following steps from the node with a Ceph Monitor container:
- List all running containers:

# docker ps

- Verify that the cluster health is OK:

# docker exec <container-name> ceph -s

Replace <container-name> with the name of the Ceph Monitor container found in the first step, for example:

# docker exec ceph-mon0 ceph -s
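As a sketch, this check can gate whether you move on to the next node. The healthy function below only inspects text on stdin, so it can be tried without a cluster; in practice you would pipe in the live output of docker exec <container-name> ceph -s:

```shell
# Sketch: continue to the next node only when `ceph -s` reports HEALTH_OK.
healthy() { grep -q 'HEALTH_OK'; }

# Sample input stands in for real `ceph -s` output here.
if printf 'health: HEALTH_OK\n' | healthy; then
  echo "cluster healthy - continue with the next node"
else
  echo "cluster not healthy - stop and investigate" >&2
fi
```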
NOTE
After upgrading the cluster, ensure the latest version of the ceph-ansible package is installed:
# yum update ceph-ansible
This ensures that you are using the latest ceph-ansible when managing the cluster after the upgrade.
See the section Upgrading a Red Hat Ceph Storage cluster for more details.