Issued: 2018-04-26
Updated: 2018-04-26

RHBA-2018:1259 - Red Hat Ceph Storage 3.0 Bug Fix update


Synopsis

Red Hat Ceph Storage 3.0 Bug Fix update

Type/Severity

Bug Fix Advisory (Severity: None)

Topic

An update is now available for Red Hat Ceph Storage 3.0.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Bug Fixes:

  • Previously, an attempt to delete a large RBD image with the "object map" feature enabled could cause the OSD nodes to trigger the "suicide_timeout" and self-terminate. With this update, deleting large RBD images with "object map" no longer causes OSDs to crash. (BZ#1325322)

  • Previously, Metadata Server (MDS) daemons could fall behind on journal trimming under large metadata workloads in larger clusters. With this update, the MDS no longer falls behind on trimming under such workloads. (BZ#1507629)

  • Previously, in the Ceph File System, Ceph Metadata Server (MDS) daemons would sometimes crash when scrubbing encountered an inode that had been renamed. With this update, MDS daemons no longer crash, and scrubbing continues when encountering renamed inodes. (BZ#1518730)

  • Previously, only a single Ceph iSCSI Gateway node could be deployed, because the "ceph-ansible" utility generated SSL certificates for one gateway node only. With this update, the certificates are distributed across all gateways, and more than one iSCSI Gateway node can be deployed. (BZ#1540845)

  • Previously, the Ceph File System (CephFS) client metadata capability trimming would sometimes fail assertions due to logic errors, and the CephFS client would abort. With this update, the logic error has been corrected and the CephFS client completes trimming. (BZ#1541424)

  • Previously, when using "ceph-ansible" with the "copy_admin_key" option set to "true", the administrator's keyring would not copy to the other nodes in the Ceph Storage Cluster. With this update, the "copy_admin_key" option works as expected when set to "true". (BZ#1544720)
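As a minimal sketch of where that option lives: "copy_admin_key" is set in the ceph-ansible group variables. The file path below follows the conventional ceph-ansible layout and is illustrative, not taken from this advisory:

```shell
# Illustrative only: enable copying of the client.admin keyring
# by setting copy_admin_key in the ceph-ansible group vars.
mkdir -p group_vars
echo "copy_admin_key: true" >> group_vars/clients.yml
```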

  • Previously, an underlying issue caused the placement group (PG) log to grow without bound in certain situations, causing OSDs to crash or stall during startup. With this release, the "trim-pg-log" operation has been added to "ceph-objectstore-tool" to allow offline trimming of oversized PG logs, restoring the PG log to within the configured size limits. (BZ#1552094)
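A sketch of the offline trim described above, assuming the affected OSD is stopped first; the OSD ID, data path, and PG ID are placeholders, not values from this advisory:

```shell
# Stop the affected OSD before any offline ceph-objectstore-tool operation:
systemctl stop ceph-osd@0

# Trim the oversized PG log for the given placement group (IDs are examples):
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 2.7 --op trim-pg-log

# Restart the OSD once trimming completes:
systemctl start ceph-osd@0
```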

  • Previously, a placement group (PG) that received only unsuccessful writes (such as attempts to delete a nonexistent object) would grow its write operations log indefinitely. This caused the OSDs handling that PG to run out of memory and crash. With this fix, the PG log is trimmed as expected regardless of whether a write succeeds, and the OSDs no longer run out of memory. (BZ#1554544)

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
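The article above describes the standard errata procedure in full. As a minimal sketch on a RHEL 7 node with the Red Hat Ceph Storage repositories attached (commands are the usual yum workflow, not quoted from this advisory):

```shell
# Apply all pending errata, including this advisory's packages:
yum update

# Restart the updated Ceph daemons afterwards, one node at a time, e.g.:
systemctl restart ceph-osd.target
```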

Affected Products

Product                          Version  Arch
Red Hat Enterprise Linux Server  7        x86_64
Red Hat Ceph Storage OSD         3        x86_64
Red Hat Ceph Storage MON         3        x86_64

Updated Packages

  • ceph-base-12.2.4-6.el7cp.x86_64.rpm
  • nfs-ganesha-debuginfo-2.5.5-3.el7cp.x86_64.rpm
  • librgw-devel-12.2.4-6.el7cp.x86_64.rpm
  • rbd-mirror-12.2.4-6.el7cp.x86_64.rpm
  • ceph-debuginfo-12.2.4-6.el7cp.x86_64.rpm
  • ceph-mgr-12.2.4-6.el7cp.x86_64.rpm
  • nfs-ganesha-2.5.5-3.el7cp.x86_64.rpm
  • ceph-selinux-12.2.4-6.el7cp.x86_64.rpm
  • python-rados-12.2.4-6.el7cp.x86_64.rpm
  • ceph-mds-12.2.4-6.el7cp.x86_64.rpm
  • nfs-ganesha-rgw-2.5.5-3.el7cp.x86_64.rpm
  • librbd-devel-12.2.4-6.el7cp.x86_64.rpm
  • ceph-test-12.2.4-6.el7cp.x86_64.rpm
  • libntirpc-debuginfo-1.5.5-1.el7.x86_64.rpm
  • libntirpc-1.5.5-1.el7.src.rpm
  • librados2-12.2.4-6.el7cp.x86_64.rpm
  • librbd1-12.2.4-6.el7cp.x86_64.rpm
  • ceph-radosgw-12.2.4-6.el7cp.x86_64.rpm
  • nfs-ganesha-ceph-2.5.5-3.el7cp.x86_64.rpm
  • librados-devel-12.2.4-6.el7cp.x86_64.rpm
  • libcephfs2-12.2.4-6.el7cp.x86_64.rpm
  • ceph-ansible-3.0.31-1.el7cp.noarch.rpm
  • python-rgw-12.2.4-6.el7cp.x86_64.rpm
  • ceph-common-12.2.4-6.el7cp.x86_64.rpm
  • ceph-osd-12.2.4-6.el7cp.x86_64.rpm
  • libradosstriper1-12.2.4-6.el7cp.x86_64.rpm
  • nfs-ganesha-2.5.5-3.el7cp.src.rpm
  • ceph-ansible-3.0.31-1.el7cp.src.rpm
  • ceph-12.2.4-6.el7cp.src.rpm
  • librgw2-12.2.4-6.el7cp.x86_64.rpm
  • ceph-mon-12.2.4-6.el7cp.x86_64.rpm
  • python-rbd-12.2.4-6.el7cp.x86_64.rpm
  • libntirpc-1.5.5-1.el7.x86_64.rpm
  • ceph-fuse-12.2.4-6.el7cp.x86_64.rpm
  • libcephfs-devel-12.2.4-6.el7cp.x86_64.rpm
  • python-cephfs-12.2.4-6.el7cp.x86_64.rpm

Fixes

  • BZ#1325322
  • BZ#1507629
  • BZ#1518730
  • BZ#1540845
  • BZ#1541424
  • BZ#1544720
  • BZ#1552094
  • BZ#1554544

CVEs

(none)

References

(none)
