Issued: 2018-11-27
Updated: 2018-11-27

RHBA-2018:3689 - Red Hat Ceph Storage 2.5 bug fix update


Synopsis

Red Hat Ceph Storage 2.5 bug fix update

Type/Severity

Bug Fix Advisory (Severity: None)

Topic

An update is now available for Red Hat Ceph Storage 2.5.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Bug Fixes:

  • Previously, upgrading from Red Hat Ceph Storage 1.3 to 2 could take a significant amount of time because the fixfiles utility was restoring SELinux context of all files in the /var/lib/ceph/osd/ directory sequentially. With this update, the ceph-disk utility is used to restore the SELinux context of the files in parallel per OSD, which makes the upgrading process significantly faster on systems with multiple OSDs. (BZ#1599359)
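
The per-OSD parallelism described in the fix above can be sketched in shell. This is an illustrative sketch only, not the ceph-disk code: a temporary directory stands in for /var/lib/ceph/osd, and echo stands in for the real restorecon call so the sketch runs without SELinux.

```shell
#!/bin/sh
# Illustrative sketch of a per-OSD parallel relabel: one relabel job is
# launched per OSD directory and all jobs are awaited together. The
# directory layout is a stand-in for /var/lib/ceph/osd, and echo is a
# stand-in for the real "restorecon -R" invocation.
base=$(mktemp -d)
mkdir -p "$base/ceph-0" "$base/ceph-1" "$base/ceph-2"

out=$(
    for osd in "$base"/ceph-*; do
        echo "restorecon -R $osd" &   # real fix relabels here, per OSD
    done
    wait                              # block until every OSD is done
)
printf '%s\n' "$out"
rm -rf "$base"
```

Running the relabel jobs concurrently rather than sequentially is what shortens the upgrade on hosts with many OSDs.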

  • Previously, bucket access control lists (ACLs) and other attributes were not correctly restored after the bucket link step in offline resharding. The underlying source code has been modified, and the bucket ACLs and other attributes are restored as expected in this situation. (BZ#1534992)

  • Restarting OSD daemons, for example for rolling updates, could result in an inconsistent internal state within librbd clients with the exclusive lock feature enabled. As a consequence, live migration of virtual machines (VMs) using RBD images could time out because the source VM would refuse to release its exclusive lock on the RBD image. This bug has been fixed, and the live migration proceeds as expected. (BZ#1566723)

  • Previously, the Monitor did not check whether a snapshot existed or had already been removed before attempting to remove it, and as a consequence the Monitor could terminate unexpectedly. This update improves Monitor error handling. As a result, the Monitor now returns the ENOENT error message instead of trying to remove a non-existent snapshot. (BZ#1630898)

  • The ceph-ansible utility previously required all placement groups (PGs) in a cluster to be in the active+clean state. Consequently, the noscrub flag had to be set before upgrading the cluster to prevent PGs from entering the active+clean+scrubbing state. With this update, ceph-ansible allows upgrading a cluster even while the cluster is scrubbing. (BZ#1637038)

  • Previously, the rolling_update.yml playbook failed to update clusters that were deployed with the mon_use_fqdn parameter set to true. The playbook attempted to create or restart a systemd service named "ceph-mon@$(hostname -s).service", but the service actually running was "ceph-mon@$(hostname -f).service". This update improves the rolling_update.yml playbook, and updating such clusters now works as expected. (BZ#1644852)
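
The unit-name mismatch in the last fix can be illustrated with a small shell sketch. The hostnames and the mon_use_fqdn handling here are made up for illustration, not the actual ceph-ansible logic; on a real node the names would come from "hostname -s" and "hostname -f".

```shell
#!/bin/sh
# Illustrative sketch: derive the ceph-mon systemd unit name while
# honoring mon_use_fqdn, as rolling_update.yml must. Hostnames are
# hypothetical placeholders.
mon_use_fqdn=true
host_short="mon1"
host_fqdn="mon1.example.com"

if [ "$mon_use_fqdn" = "true" ]; then
    unit="ceph-mon@${host_fqdn}.service"    # unit actually running
else
    unit="ceph-mon@${host_short}.service"   # what the old playbook assumed
fi
echo "$unit"
```

Restarting the short-name unit on an FQDN-deployed cluster fails because no such unit exists, which is why the playbook had to honor mon_use_fqdn when constructing the name.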

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
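
As a sketch, assuming an entitled RHEL 7 host where yum's security plugin features are available, this specific advisory can be applied by its ID. The sketch only assembles and prints the command line, since actually running it requires a subscribed system.

```shell
#!/bin/sh
# Sketch: build the yum invocation for this advisory. The advisory ID is
# taken from this page; applying it requires an entitled RHEL 7 host, so
# the command is printed rather than executed here.
advisory="RHBA-2018:3689"
cmd="yum update --advisory=$advisory"
echo "$cmd"
# On a subscribed host, run the printed command as root.
```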

Affected Products

Product                                            Version  Arch
Red Hat Enterprise Linux for Scientific Computing  7        x86_64
Red Hat Enterprise Linux Workstation               7        x86_64
Red Hat Enterprise Linux Server                    7        x86_64
Red Hat Enterprise Linux Desktop                   7        x86_64
Red Hat Ceph Storage OSD                           2        x86_64
Red Hat Ceph Storage MON                           2        x86_64

Updated Packages

  • ceph-common-10.2.10-43.el7cp.x86_64.rpm
  • ceph-mon-10.2.10-43.el7cp.x86_64.rpm
  • ceph-test-10.2.10-43.el7cp.x86_64.rpm
  • librados2-10.2.10-43.el7cp.x86_64.rpm
  • librbd1-10.2.10-43.el7cp.x86_64.rpm
  • librbd1-devel-10.2.10-43.el7cp.x86_64.rpm
  • ceph-fuse-10.2.10-43.el7cp.x86_64.rpm
  • libcephfs1-devel-10.2.10-43.el7cp.x86_64.rpm
  • librgw2-devel-10.2.10-43.el7cp.x86_64.rpm
  • ceph-debuginfo-10.2.10-43.el7cp.x86_64.rpm
  • librados2-devel-10.2.10-43.el7cp.x86_64.rpm
  • librgw2-10.2.10-43.el7cp.x86_64.rpm
  • python-cephfs-10.2.10-43.el7cp.x86_64.rpm
  • python-rbd-10.2.10-43.el7cp.x86_64.rpm
  • rbd-mirror-10.2.10-43.el7cp.x86_64.rpm
  • ceph-ansible-3.0.47-1.el7cp.src.rpm
  • ceph-radosgw-10.2.10-43.el7cp.x86_64.rpm
  • ceph-10.2.10-43.el7cp.src.rpm
  • ceph-base-10.2.10-43.el7cp.x86_64.rpm
  • ceph-osd-10.2.10-43.el7cp.x86_64.rpm
  • libcephfs1-10.2.10-43.el7cp.x86_64.rpm
  • python-rados-10.2.10-43.el7cp.x86_64.rpm
  • ceph-ansible-3.0.47-1.el7cp.noarch.rpm
  • ceph-selinux-10.2.10-43.el7cp.x86_64.rpm
  • ceph-mds-10.2.10-43.el7cp.x86_64.rpm

Fixes

  • BZ#1599359
  • BZ#1534992
  • BZ#1566723
  • BZ#1630898
  • BZ#1637038
  • BZ#1644852

CVEs

(none)

References

(none)

