- Issued: 2023-02-14
- Updated: 2023-02-14
RHBA-2023:0764 - Red Hat OpenShift Data Foundation 4.11.5 Bug Fix Update
Synopsis
Red Hat OpenShift Data Foundation 4.11.5 Bug Fix Update
Type/Severity
Bug Fix Advisory (Severity: None)
Topic
Updated images that fix several bugs are now available for Red Hat OpenShift Data Foundation 4.11.5 on Red Hat Enterprise Linux 8 from the Red Hat Container Registry.
Description
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation provides highly scalable, production-grade persistent storage for stateful applications running in Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3-compatible API.
Bug fix(es):
- Previously, false alarms for storage capacity based on inode metrics were raised. This was because the alerting metrics reported a storage capacity status that could change dynamically, without any intervention, when more storage space was required. When a PVC uses CephFS as the storage backend, the inode metrics `kubelet_volume_stats_inodes_free`, `kubelet_volume_stats_inodes`, and `kubelet_volume_stats_inodes_used` are not correct, because CephFS is, by design, a filesystem that allocates inodes on demand. With this fix, the `kubelet_volume_stats_inodes_free`, `kubelet_volume_stats_inodes`, and `kubelet_volume_stats_inodes_used` metrics are not provided for CephFS-backed PVCs. As a result, false storage capacity alarms based on inode metrics are no longer raised. (BZ#2149676)
- Previously, persistent volumes (PVs) that used CephFS did not provide accurate statistics about consumed or free inodes, because the number of free inodes on a CephFS volume is not meaningful: new inodes are created when required. Metrics suggesting that the volume was running out of inodes were therefore inaccurate. With this fix, Ceph-CSI does not return inode metrics for CephFS, which prevents erroneous alerts about running low on, or out of, inodes. (BZ#2149677)
- Previously, listing operations could fail, depending on the number of objects in the bucket, due to an incorrect mapping of indexes in the Multicloud Object Gateway database (MCG DB). The incorrect mapping caused certain queries to take longer than necessary and, as a result, the corresponding actions to fail. With this fix, the indexes are updated so that the listing queries succeed. (BZ#2149226)
- Previously, when an OSD was restarted after a node restart, it was marked as `down` in Ceph instead of coming back online, because it looked like a stale OSD. This happened because, in some environments, the Ceph OSD was not running as PID 1, which resulted in a non-random `nonce` being used to start the OSD. With this fix, the environment variable `CEPH_USE_RANDOM_NONCE` is set on the OSD pods so that Ceph is always aware that OpenShift Data Foundation is running in a containerized environment and randomizes the `nonce`. As a result, OSDs start properly after a node restart. (BZ#2150410)
- Previously, the `rook-ceph-osd-prepare` job could get stuck in the `CrashLoopBackOff` (CLBO) state and never complete. This was caused by the deletion of an OSD deployment in an encrypted cluster backed by a CSI-provisioned PVC, which left the `rook-ceph-osd-prepare` job for that OSD stuck in `CrashLoopBackOff`. With this fix, the `rook-ceph-osd-prepare` job removes the stale encrypted device and opens it again, avoiding the CLBO state. As a result, the `rook-ceph-osd-prepare` job runs as expected and the OSD comes up. (BZ#2153675)
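The rationale behind the inode-metric fixes (BZ#2149676, BZ#2149677) can be sketched in a few lines: because CephFS allocates inodes on demand, a "free inodes" ratio is not a real capacity limit, so any alert built on it should skip CephFS-backed volumes. This is only an illustration, not ODF or kubelet source; `should_alert_on_inodes`, the provisioner-suffix match, and the 5% threshold are all hypothetical.

```python
# Illustrative sketch (not ODF/kubelet code): suppress inode-capacity
# alerts for CephFS-backed PVCs, since CephFS allocates inodes on demand
# and the kubelet_volume_stats_inodes* metrics are misleading there.

CEPHFS_PROVISIONER_SUFFIX = "cephfs.csi.ceph.com"  # hypothetical match rule


def should_alert_on_inodes(pvc: dict) -> bool:
    """Return True only when inode metrics are meaningful for this PVC."""
    provisioner = pvc.get("provisioner", "")
    if provisioner.endswith(CEPHFS_PROVISIONER_SUFFIX):
        # Dynamic inode allocation: a low free-inode count is not a limit.
        return False
    free = pvc["inodes_free"]
    total = pvc["inodes_total"]
    return total > 0 and free / total < 0.05  # example threshold: <5% free


rbd_pvc = {"provisioner": "rbd.csi.ceph.com",
           "inodes_free": 100, "inodes_total": 10000}
cephfs_pvc = {"provisioner": "openshift-storage.cephfs.csi.ceph.com",
              "inodes_free": 100, "inodes_total": 10000}
print(should_alert_on_inodes(rbd_pvc))     # True: static inode table nearly full
print(should_alert_on_inodes(cephfs_pvc))  # False: CephFS, metric suppressed
```

The actual fix works one level lower, by not exporting the metrics at all for CephFS volumes, which makes such filtering in alert rules unnecessary.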
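The listing failure fixed in BZ#2149226 is the classic symptom of a query whose index mapping does not match its access pattern. A minimal sketch, using SQLite as a stand-in for the MCG DB (the real schema and index names are not part of this advisory and the ones below are invented), shows how a bucket-listing query degrades to a full scan with a sort until a matching composite index exists:

```python
# Illustrative sketch (hypothetical schema, SQLite standing in for the
# MCG DB): a listing query ordered by key within a bucket needs a
# composite (bucket, key) index; without one, the planner scans the
# whole table and sorts, which gets slower as the bucket grows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (bucket TEXT, key TEXT, size INTEGER)")
conn.executemany("INSERT INTO objects VALUES (?, ?, ?)",
                 [("b1", f"obj-{i:06d}", i) for i in range(1000)])


def listing_plan() -> str:
    """Return SQLite's query plan for a typical bucket-listing query."""
    rows = conn.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT key FROM objects WHERE bucket = ? ORDER BY key LIMIT 100",
        ("b1",)).fetchall()
    return " ".join(row[-1] for row in rows)


print(listing_plan())  # before the index: a table scan, typically with a temp sort
conn.execute("CREATE INDEX idx_bucket_key ON objects (bucket, key)")
print(listing_plan())  # after: satisfied directly by idx_bucket_key
```

The fix in this release is analogous: the MCG DB indexes were updated so that listing queries are served by a matching index instead of timing out on large buckets.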
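The `CEPH_USE_RANDOM_NONCE` fix (BZ#2150410) is easiest to see with a toy model. This is not Ceph source; `pid_based_nonce` and `random_nonce` are illustrative names. The point is only that an identity component derived from the PID is degenerate when a containerized daemon always runs with the same PID, whereas a random one gives each restart a distinct identity:

```python
# Illustrative sketch, not Ceph source: a Ceph daemon's network identity
# includes a nonce alongside its IP:port. Deriving it from the PID works
# on bare metal (each restart gets a fresh PID), but in a container where
# the daemon always runs as PID 1, every restart reuses the same identity
# and the cluster mistakes the new OSD for the stale old instance.
import os
import secrets


def pid_based_nonce() -> int:
    """Nonce derived from the PID (degenerate when PID is always 1)."""
    return os.getpid()


def random_nonce() -> int:
    """Randomized 32-bit nonce, the behavior the fix opts into."""
    return secrets.randbits(32)


# Two "restarts" within the same PID collide:
print(pid_based_nonce() == pid_based_nonce())  # True: identity is reused

# Randomized nonces give each restart its own identity:
print(random_nonce(), random_nonce())
```

With the fix, the OSD pods always take the randomized path, so a restarted OSD registers as a fresh instance instead of being treated as stale and left `down`.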
All users of Red Hat OpenShift Data Foundation are advised to upgrade to these updated images which provide these bug fixes.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
Affected Products
| Product | Version | Arch |
|---|---|---|
| Red Hat OpenShift Data Foundation | 4 | x86_64 |
| Red Hat OpenShift Data Foundation for IBM Z and LinuxONE | 4 | s390x |
| Red Hat OpenShift Data Foundation for IBM Power, little endian | 4 | ppc64le |
Fixes
- BZ - 2135631
- BZ - 2142901
- BZ - 2149226
- BZ - 2149676
- BZ - 2149677
- BZ - 2151138
- BZ - 2151914
- BZ - 2153675
- BZ - 2168566
CVEs
- CVE-2021-46848
- CVE-2022-24785
- CVE-2022-35737
- CVE-2022-40303
- CVE-2022-40304
- CVE-2022-42010
- CVE-2022-42011
- CVE-2022-42012
- CVE-2022-43680
- CVE-2023-22809
References
(none)
Additional information
- The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.
- Offline Security Data is available for integration with other systems. See the Offline Security Data API to get started.