- Issued: 2018-08-09
- Updated: 2018-08-09
RHBA-2018:2375 - Red Hat Ceph Storage 3.0 Bug Fix and Enhancement Update
Synopsis
Red Hat Ceph Storage 3.0 Bug Fix and Enhancement Update
Type/Severity
Bug Fix Advisory (Severity: None)
Topic
An update for ceph is now available for Red Hat Ceph Storage 3.0 for Red Hat Enterprise Linux 7.
Description
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Bug Fix(es):
- Ceph installation no longer fails when using an NVMe disk. (BZ#1541016)
- In the Ceph Object Gateway, the buffer used to transfer incoming PUT data was incorrectly sized at the maximum chunk value of 4 MB. This led to a leak of unused buffer space whenever objects smaller than 4 MB were uploaded, so the gateway could leak significant space when processing large numbers of small PUT requests. With this update, the incorrect buffer sizing logic has been fixed. (BZ#1595937)
- OSD memory usage has been reduced to avoid unnecessary consumption, especially for Ceph Object Gateway workloads. (BZ#1599856)
- In Ceph Object Gateway multisite configurations, the sync error log could grow in size during transient sync failures, and no mechanism was provided to remove old entries. With this update, a "sync error trim" subcommand has been added to "radosgw-admin", so large sync error logs can now be trimmed. (BZ#1600701)
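A minimal sketch of the trimming workflow described above, assuming a node with "radosgw-admin" installed and admin access to the cluster (output depends entirely on your cluster's sync state, so none is shown):

```shell
# List recent multisite sync errors to see what has accumulated.
radosgw-admin sync error list

# Trim old entries from the sync error log (the subcommand added
# by this update). Additional options may be used to bound the trim.
radosgw-admin sync error trim
```

These commands require a running multisite Ceph Object Gateway deployment and cannot be exercised outside one.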
- In rare circumstances, after a PG had been repaired and the primary changed, the inconsistent state could falsely reappear, even without a scrub being performed. This bug fix cleans up stray scrub error counts to prevent this. (BZ#1601080)
- In cluster configurations with multiple active metadata servers, it was possible for an MDS to become stuck in "up:resolve" during recovery. With this update, MDSs no longer miss updates from the Monitors indicating that another MDS has failed, and MDSs continue recovery. (BZ#1601138)
- Missing inodes in the MDS cache, a normal result of subtree migration, were reported in a verbose debug log message. With this update, the log level of the message has been reduced, so the MDS debug logs contain fewer entries during normal operation. (BZ#1607596)
- An MDS now dumps the MDSMap it is processing from the Monitors at a low debug setting, to assist with evaluating issues in multiple active metadata server configurations. (BZ#1607606)
- The default values for "libcurl" slow request handling could lead to hangs during multisite sync. Now, non-default low-speed timeouts for "libcurl" may be specified to avoid multisite sync hangs. (BZ#1608977)
- When the MDS respawns because the Monitors have removed it from the MDSMap, which may occur when the MDS becomes stuck on a long-running operation, it now dumps the recent log events held in memory but not necessarily written to the debug log file. This helps evaluate what the MDS was doing that resulted in the respawn. (BZ#1607601)
- "EEXIST" condition handling has been corrected in "RGWPutObj::execute()". (BZ#1609005)
Enhancement(s):
- For the S3 and Swift protocols, an option to list buckets/containers in natural (unsorted) order has been added. Listing containers in sorted order is canonical in both protocols, but it is costly and not required by some client applications. The performance and workload cost of S3 and Swift bucket/container listings is reduced for sharded buckets/containers when the "allow_unordered" extension is used. (BZ#1595942)
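As an illustration of the unordered-listing extension, the sketch below lists a container via the Swift API with the "allow_unordered" query parameter. The endpoint "rgw.example.com", the container name "mycontainer", and the "$TOKEN" variable are placeholders, not values from this advisory:

```shell
# Swift: request an unordered container listing from RGW.
# allow_unordered=true asks RGW to skip the costly sorted listing.
curl -H "X-Auth-Token: $TOKEN" \
  "https://rgw.example.com/swift/v1/mycontainer?allow_unordered=true"
```

For the S3 API, the analogous RGW extension is the "allow-unordered" query parameter on a bucket GET; S3 requests additionally require request signing, so a signing-capable client should be used there.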
- An asynchronous mechanism for executing Ceph Object Gateway garbage collection using the "librados" APIs has been introduced. The original garbage collection mechanism serialized all processing and lagged behind applications under specific workloads. Garbage collection performance has been significantly improved and can be tuned to specific site requirements. (BZ#1595946)
- A "trim delay" option has been added to the "radosgw-admin sync error trim" command in Ceph Object Gateway multisite. Previously, the full trim operation could delete many OMAP keys at once, which could impact client workloads. With the new option, trimming can be requested with low impact on client workloads. (BZ#1600702)
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
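As a hedged sketch of applying the update on a Red Hat Enterprise Linux 7 node (following the general Ceph practice of updating Monitors before OSDs, then MDS/RGW nodes; consult the article above for the authoritative procedure):

```shell
# On each node, update the installed Ceph packages from the errata.
yum update

# Restart the updated daemons so they run the new binaries,
# e.g. on an OSD node:
systemctl restart ceph-osd.target
```

These commands require a subscribed RHEL 7 system with the Red Hat Ceph Storage repositories enabled.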
Affected Products
| Product | Version | Arch |
|---|---|---|
| Red Hat Enterprise Linux Server | 7 | x86_64 |
| Red Hat Ceph Storage OSD | 3 | x86_64 |
| Red Hat Ceph Storage MON | 3 | x86_64 |
Updated Packages
- ceph-common-12.2.4-42.el7cp.x86_64.rpm
- ceph-fuse-12.2.4-42.el7cp.x86_64.rpm
- ceph-mon-12.2.4-42.el7cp.x86_64.rpm
- cephmetrics-ansible-1.0.2-1.el7cp.x86_64.rpm
- ceph-osd-12.2.4-42.el7cp.x86_64.rpm
- python-rados-12.2.4-42.el7cp.x86_64.rpm
- ceph-selinux-12.2.4-42.el7cp.x86_64.rpm
- librados2-12.2.4-42.el7cp.x86_64.rpm
- librgw2-12.2.4-42.el7cp.x86_64.rpm
- librbd-devel-12.2.4-42.el7cp.x86_64.rpm
- ceph-test-12.2.4-42.el7cp.x86_64.rpm
- librbd1-12.2.4-42.el7cp.x86_64.rpm
- librgw-devel-12.2.4-42.el7cp.x86_64.rpm
- python-rgw-12.2.4-42.el7cp.x86_64.rpm
- ceph-mgr-12.2.4-42.el7cp.x86_64.rpm
- libcephfs-devel-12.2.4-42.el7cp.x86_64.rpm
- cephmetrics-grafana-plugins-1.0.2-1.el7cp.x86_64.rpm
- ceph-radosgw-12.2.4-42.el7cp.x86_64.rpm
- ceph-base-12.2.4-42.el7cp.x86_64.rpm
- python-cephfs-12.2.4-42.el7cp.x86_64.rpm
- libcephfs2-12.2.4-42.el7cp.x86_64.rpm
- cephmetrics-1.0.2-1.el7cp.x86_64.rpm
- python-rbd-12.2.4-42.el7cp.x86_64.rpm
- cephmetrics-1.0.2-1.el7cp.src.rpm
- ceph-debuginfo-12.2.4-42.el7cp.x86_64.rpm
- rbd-mirror-12.2.4-42.el7cp.x86_64.rpm
- ceph-mds-12.2.4-42.el7cp.x86_64.rpm
- libradosstriper1-12.2.4-42.el7cp.x86_64.rpm
- cephmetrics-collectors-1.0.2-1.el7cp.x86_64.rpm
- librados-devel-12.2.4-42.el7cp.x86_64.rpm
- ceph-12.2.4-42.el7cp.src.rpm
Fixes
- BZ#1541016
- BZ#1576551
- BZ#1590450
- BZ#1593031
- BZ#1593093
- BZ#1593100
- BZ#1593123
- BZ#1593311
- BZ#1593322
- BZ#1593329
- BZ#1593335
- BZ#1594278
- BZ#1594283
- BZ#1594307
- BZ#1594323
- BZ#1594457
- BZ#1594604
- BZ#1594616
- BZ#1594620
- BZ#1594674
- BZ#1594741
- BZ#1594868
- BZ#1595937
- BZ#1595942
- BZ#1595946
- BZ#1599856
- BZ#1600702
- BZ#1601138
- BZ#1607583
- BZ#1607596
- BZ#1607601
- BZ#1607606
- BZ#1609005
- BZ#1609006
- BZ#1611056
CVEs
(none)
References
(none)
Additional information
- The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.
- Offline Security Data is available for integration with other systems. See the Offline Security Data API to get started.