Issued: 2018-03-08
Updated: 2018-03-08

RHBA-2018:0474 - Red Hat Ceph Storage 3.0 bug fix update


Synopsis

Red Hat Ceph Storage 3.0 bug fix update

Type/Severity

Bug Fix Advisory (Severity: None)

Topic

An update is now available for Red Hat Ceph Storage 3.0.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Bug Fixes:

  • Previously, an attempt to use the --cluster option with the "ceph tell mds" commands failed with a RuntimeError exception. With this release, the --cluster option works as expected when a non-default cluster name is used. (BZ#1491170)
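For example, a command of the following form now works against a cluster deployed with a non-default name instead of raising a RuntimeError; the cluster name "mycluster" and the MDS identifier "0" below are placeholders, and the command requires a running cluster:

```shell
# Query an MDS daemon in a cluster named "mycluster" (placeholder
# name); before this fix, passing --cluster with "ceph tell mds"
# commands raised a RuntimeError.
ceph --cluster mycluster tell mds.0 version
```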

  • Previously, when attempting to install a Metadata Server (MDS) node in a cluster that runs on Ubuntu, the installation failed because the ceph-ansible utility could not install the ceph-common package. With this release, ceph-ansible installs ceph-common as expected. As a result, the installation no longer fails in this case. (BZ#1516457)

  • Previously, due to an off-by-one error in expiration processing in the Ceph Object Gateway, objects eligible for expiration could infrequently be passed over, and consequently were not removed. The underlying source code has been modified, and the objects are no longer passed over. (BZ#1530673)

  • When performing a Simple Storage Service (S3) POST upload, if the Content-Type field was missing from the policy part of the upload, the Ceph Object Gateway rejected the upload with a 403 error:

    Policy missing condition: Content-Type

With this update, the S3 POST policy does not require the Content-Type field. (BZ#1530775)
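The policy in question is the JSON policy document that accompanies a browser-based S3 POST upload. As an illustration (the bucket name, key prefix, and expiration date below are made up), a policy built without any Content-Type condition is now accepted rather than rejected with a 403:

```shell
# Build and encode a minimal S3 POST policy document that contains
# no Content-Type condition; with this update the Ceph Object
# Gateway accepts such a policy.
# (Bucket name, key prefix, and expiration are placeholders.)
policy='{"expiration":"2018-12-01T12:00:00.000Z","conditions":[{"bucket":"examplebucket"},["starts-with","$key","uploads/"]]}'
# The policy document is base64-encoded before being signed and
# sent as the "policy" field of the POST form.
encoded=$(printf '%s' "$policy" | base64 -w0)
echo "$encoded"
```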

  • Previously, under certain circumstances, deleted objects were incorrectly interpreted as incomplete delete transactions because of an incorrect time. As a consequence, the delete operations were reported successful in the Ceph Object Gateway logs, but the deleted objects were not correctly removed from bucket indexes. The incorrect time comparison has been fixed, and deleting objects works correctly. (BZ#1530784)

  • Previously, when changes were made to the same bucket through multiple Ceph Object Gateways under heavy load, a race condition could cause the Ceph Object Gateway to return a 500 error. With this release, the likelihood of hitting this race condition is reduced. (BZ#1530801)

  • Previously, a server-side copy mishandled object names starting with an underscore, which led to objects being created with two leading underscores. The Ceph Object Gateway code has been fixed to properly handle leading underscores. As a result, object names with leading underscores behave correctly. (BZ#1531279)

  • Previously, containerized Ceph deployments that used a fully qualified domain name (FQDN) in the "/etc/hostname" file failed when installing or upgrading Ceph by using the ceph-ansible playbook. With this release, installing or upgrading Ceph no longer fails when an FQDN is used. (BZ#1546834)

  • Previously, when running the ceph-ansible rolling update playbook, if the active Ceph Manager node was not the first node to be upgraded, a required restart script was not copied to it, which caused the rolling update to fail. With this release, the required script is copied to the Ceph Manager node as expected. (BZ#1548357)

  • Ceph Storage clusters that have large omap databases experience slow OSD startup during the upgrade from Red Hat Ceph Storage 3.0 to 3.0.1, because the OSDs scan and repair those databases. As a consequence, the rolling update can exceed the default timeout of 5 minutes. To work around this issue, before running the Ansible rolling_update.yml playbook, set the handler_health_osd_check_delay option to 180 in the group_vars/all.yml file. (BZ#1549293)
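The workaround above amounts to a one-line change in the ceph-ansible group variables; the group_vars/all.yml path below follows the standard ceph-ansible layout:

```yaml
# group_vars/all.yml
# Give slow-starting OSDs (for example, those scanning and repairing
# large omap databases) up to 180 seconds before the rolling update
# health check times out.
handler_health_osd_check_delay: 180
```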

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
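As a general sketch (the article above is authoritative for your environment), applying an advisory on a Red Hat Enterprise Linux 7 node typically means updating the installed packages with yum:

```shell
# Update all installed packages, including the Ceph packages from
# this advisory, on each affected node (run as root or via sudo).
yum update
```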

Affected Products

Product                          Version  Arch
Red Hat Enterprise Linux Server  7        x86_64
Red Hat Ceph Storage OSD         3        x86_64
Red Hat Ceph Storage MON         3        x86_64

Updated Packages

  • librgw-devel-12.2.1-45.el7cp.x86_64.rpm
  • libcephfs2-12.2.1-45.el7cp.x86_64.rpm
  • ceph-test-12.2.1-45.el7cp.x86_64.rpm
  • librbd-devel-12.2.1-45.el7cp.x86_64.rpm
  • ceph-base-12.2.1-45.el7cp.x86_64.rpm
  • ceph-osd-12.2.1-45.el7cp.x86_64.rpm
  • ceph-debuginfo-12.2.1-45.el7cp.x86_64.rpm
  • ceph-fuse-12.2.1-45.el7cp.x86_64.rpm
  • ceph-ansible-3.0.27-1.el7cp.src.rpm
  • librados-devel-12.2.1-45.el7cp.x86_64.rpm
  • cephmetrics-1.0-8.el7cp.src.rpm
  • python-rados-12.2.1-45.el7cp.x86_64.rpm
  • librados2-12.2.1-45.el7cp.x86_64.rpm
  • ceph-mds-12.2.1-45.el7cp.x86_64.rpm
  • ceph-mon-12.2.1-45.el7cp.x86_64.rpm
  • ceph-radosgw-12.2.1-45.el7cp.x86_64.rpm
  • cephmetrics-grafana-plugins-1.0-8.el7cp.x86_64.rpm
  • ceph-selinux-12.2.1-45.el7cp.x86_64.rpm
  • ceph-common-12.2.1-45.el7cp.x86_64.rpm
  • libradosstriper1-12.2.1-45.el7cp.x86_64.rpm
  • python-cephfs-12.2.1-45.el7cp.x86_64.rpm
  • python-rbd-12.2.1-45.el7cp.x86_64.rpm
  • ceph-ansible-3.0.27-1.el7cp.noarch.rpm
  • ceph-mgr-12.2.1-45.el7cp.x86_64.rpm
  • rbd-mirror-12.2.1-45.el7cp.x86_64.rpm
  • cephmetrics-1.0-8.el7cp.x86_64.rpm
  • libcephfs-devel-12.2.1-45.el7cp.x86_64.rpm
  • librbd1-12.2.1-45.el7cp.x86_64.rpm
  • ceph-12.2.1-45.el7cp.src.rpm
  • cephmetrics-collectors-1.0-8.el7cp.x86_64.rpm
  • cephmetrics-ansible-1.0-8.el7cp.x86_64.rpm
  • python-rgw-12.2.1-45.el7cp.x86_64.rpm
  • librgw2-12.2.1-45.el7cp.x86_64.rpm

Fixes

  • BZ#1491170
  • BZ#1516457
  • BZ#1530673
  • BZ#1530775
  • BZ#1530784
  • BZ#1530801
  • BZ#1531279
  • BZ#1546834
  • BZ#1548357
  • BZ#1549293

CVEs

(none)

References

(none)

