RHBA-2018:0083 - glusterfs bug fix update
- Issued: 2018-01-11
- Updated: 2018-01-11
Synopsis
glusterfs bug fix update
Type/Severity
Bug Fix Advisory (Severity: None)
Topic
Updated glusterfs packages that fix several bugs are now available for Red Hat Gluster Storage 3.3 Update 1 on Red Hat Enterprise Linux 7.
Description
Red Hat Gluster Storage is a software only scale-out storage solution that provides flexible and affordable unstructured data storage. It unifies data storage and infrastructure, increases performance, and improves availability and manageability to meet enterprise-level storage challenges.
This advisory addresses critical memory leak issues and fixes the following bugs:
- Executing `gluster volume set` operations on volumes from the gluster CLI caused a significant memory leak in the glusterd process. The memory leak issues have been fixed in this release. (BZ#1526363)
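As an illustration, the kind of repeated volume tuning operation that exercised the leaking code path looks like this (the volume name `testvol` and the option chosen are hypothetical):

```shell
# Illustrative only: repeated "volume set" operations of this form caused
# glusterd's memory footprint to grow before this fix.
gluster volume set testvol performance.cache-size 256MB

# Read the option back to confirm it was applied.
gluster volume get testvol performance.cache-size
```

In a tuning-heavy environment these commands may be issued many times per day, which is why even a small per-invocation leak became significant.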
- Previously, executing a series of gluster commands such as volume create/start/stop/delete increased the memory footprint of the glusterd process. In a scaled environment where multiple volumes are created, deleted, and recreated, this led to high glusterd memory consumption. With this update, the identified memory leaks in glusterd are fixed. (BZ#1526365, BZ#1526374)
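A reproduction of this pattern can be sketched as a volume lifecycle loop. This is a hypothetical sketch, not a procedure from the advisory: the server names, brick paths, and iteration count are all illustrative.

```shell
# Hypothetical sketch of the create/start/stop/delete cycle whose cumulative
# glusterd memory growth this update addresses. Assumes a two-node trusted
# storage pool with brick directories at the paths shown.
for i in $(seq 1 100); do
    gluster volume create vol$i replica 2 \
        server1:/bricks/b$i server2:/bricks/b$i force
    gluster volume start vol$i
    # --mode=script suppresses the interactive confirmation prompts.
    gluster --mode=script volume stop vol$i
    gluster --mode=script volume delete vol$i
done
```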
- Executing multiple volume commands concurrently on the same volume, from different peers of the trusted storage pool, could leave one of the glusterd processes in the pool stuck in the locked state. Any volume management operation on that volume then failed until the glusterd service was restarted on the node holding the transaction lock. This release introduces a default transaction lock timeout of 3 minutes: a glusterd process that reaches the locked state remains there for at most 3 minutes, after which subsequent transactions execute successfully. (BZ#1526372)
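The stuck-lock symptom typically surfaced as a failed management command. The following transcript is illustrative (volume name hypothetical, error text paraphrased rather than verbatim):

```shell
# Before this fix, once a glusterd held the transaction lock indefinitely,
# management commands on the volume failed until that glusterd was restarted.
gluster volume status testvol
# (fails with an error resembling:
#  "Another transaction is in progress for testvol.
#   Please try again after some time.")
```

With the 3-minute timeout introduced here, retrying the same command after the lock expires should succeed without restarting glusterd.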
- The portmap entry that glusterd maintains for each of its bricks is cleaned up on a graceful shutdown of the respective brick process. When a brick process was killed with SIGKILL or crashed, glusterd did not clean up the corresponding portmap entry, which could result in two portmap allocations for the same brick when the brick process was restarted. glusterd then reported a stale port to the client, causing the connection to fail. With this fix, the stale port entry is removed even when a brick process crashes or is killed with SIGKILL, and the client successfully connects to the brick after it is restarted. (BZ#1526371)
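The sequence below sketches how the stale-port condition could be observed and how a killed brick is brought back. It is a hedged illustration: the volume name is hypothetical and `<brick-pid>` is a placeholder for a PID taken from the status output.

```shell
# "volume status" lists each brick's TCP port and PID; before this fix the
# reported port could be stale after an ungraceful brick death.
gluster volume status testvol

# Simulate a brick crash (ungraceful termination via SIGKILL).
kill -9 <brick-pid>

# "start ... force" restarts bricks that are not running without affecting
# bricks that are already up.
gluster volume start testvol force

# After this fix, the port shown here matches the restarted brick process.
gluster volume status testvol
</imports>
```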
- When processes that use a large number of POSIX locks, such as Samba, were used in combination with the `gluster clear-locks` command, a memory leak caused high memory consumption in brick processes, sometimes triggering the OOM killer on them. The locks translator has been updated to fix this memory leak. (BZ#1526377)
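For reference, the command involved is `gluster volume clear-locks`; the invocation below is an illustrative example (volume name, file path, and byte range are hypothetical), following the CLI's documented shape:

```shell
# General form:
#   gluster volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}
#       {inode <range> | entry <basename> | posix <range>}
#
# Clear all POSIX locks held on /locked-file over byte range 0,0-0:
gluster volume clear-locks testvol /locked-file kind all posix 0,0-0
```

Before this update, heavy use of this command against volumes serving lock-intensive workloads (such as Samba shares) leaked memory in the brick-side locks translator.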
Users of glusterfs are advised to upgrade to these updated packages, which fix these bugs.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
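As a sketch of the standard procedure (the article above is authoritative; these commands assume a registered RHEL 7 system subscribed to the appropriate Red Hat Gluster Storage channels):

```shell
# Update all installed glusterfs packages to the fixed build.
yum update glusterfs\*

# Alternatively, apply only the packages from this advisory by its ID.
yum update --advisory=RHBA-2018:0083
```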
Affected Products
| Product | Version | Arch |
|---|---|---|
| Red Hat Virtualization | 4 | x86_64 |
| Red Hat Virtualization Host | 4 | x86_64 |
| Red Hat Gluster Storage Server for On-premise | 3 | x86_64 |
| Red Hat Enterprise Linux Server | 7 | x86_64 |
Updated Packages
- glusterfs-client-xlators-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-geo-replication-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-ganesha-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-libs-3.8.4-54.el7.x86_64.rpm
- glusterfs-rdma-3.8.4-54.el7.x86_64.rpm
- glusterfs-cli-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-server-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-libs-3.8.4-54.el7rhgs.x86_64.rpm
- python-gluster-3.8.4-54.el7rhgs.noarch.rpm
- glusterfs-api-devel-3.8.4-54.el7rhgs.x86_64.rpm
- python-gluster-3.8.4-54.el7.noarch.rpm
- glusterfs-events-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-api-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-debuginfo-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-api-devel-3.8.4-54.el7.x86_64.rpm
- glusterfs-devel-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-rdma-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-resource-agents-3.8.4-54.el7rhgs.noarch.rpm
- glusterfs-fuse-3.8.4-54.el7.x86_64.rpm
- glusterfs-3.8.4-54.el7rhgs.src.rpm
- glusterfs-api-3.8.4-54.el7.x86_64.rpm
- glusterfs-fuse-3.8.4-54.el7rhgs.x86_64.rpm
- glusterfs-client-xlators-3.8.4-54.el7.x86_64.rpm
- glusterfs-cli-3.8.4-54.el7.x86_64.rpm
- glusterfs-3.8.4-54.el7.src.rpm
- glusterfs-3.8.4-54.el7.x86_64.rpm
- glusterfs-debuginfo-3.8.4-54.el7.x86_64.rpm
- glusterfs-devel-3.8.4-54.el7.x86_64.rpm
Fixes
- BZ - 1526363
- BZ - 1526368
- BZ - 1526371
- BZ - 1526372
- BZ - 1526373
- BZ - 1527147
- BZ - 1527772
- BZ - 1530217
- BZ - 1530320
CVEs
(none)
References
(none)
Additional information
- The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.
- Offline Security Data is available for integration with other systems. See the Offline Security Data API to get started.