- Issued: 2014-11-13
- Updated: 2014-11-13
RHBA-2014:1853 - Red Hat Storage 2.1 enhancement and bug fix update #5
Synopsis
Red Hat Storage 2.1 enhancement and bug fix update #5
Type/Severity
Bug Fix Advisory
Topic
Updated glusterfs, gluster-nfs, glusterfs-fuse, glusterfs-geo-replication, and redhat-storage-server packages that fix multiple bugs are now available for use with Red Hat Storage Server 2.1.
Description
Red Hat Storage is software-only, scale-out storage that provides flexible and affordable unstructured data storage for an enterprise. GlusterFS, a key building block of Red Hat Storage, is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnections into one large, parallel network file system.
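As an illustration of this aggregation model, bricks exported by several servers in a trusted storage pool are combined into a single volume that clients mount as one file system. This is a minimal sketch only; the host names, brick paths, and volume name are hypothetical and not part of this advisory:

```
# On one storage server: add a peer and create a distributed volume
# from bricks on two servers, then start it.
gluster peer probe server2
gluster volume create myvol server1:/rhs/brick1 server2:/rhs/brick1
gluster volume start myvol

# On a client: mount the aggregated volume with the native FUSE client.
mount -t glusterfs server1:/myvol /mnt/myvol
```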
This advisory addresses the following bugs:
- Previously, running the setfacl command on files and directories on the NFS mount point caused a memory leak in the Gluster NFS server. When the command was run on a large number of files, the out-of-memory killer was eventually invoked, terminating the Gluster NFS server. With this fix, the memory leak no longer occurs and the command works as expected (see the command sketch after this list). (BZ#1125658)
- Previously, upon a Red Hat Storage server reboot, the Gluster Management Daemon checked the "rebalance command" value instead of the "rebalance operation status" value to determine whether to restart the rebalance process. With this fix, the Gluster Management Daemon checks the rebalance operation status and determines the appropriate action. As a result, a rebalance process that has already completed is not restarted after a server reboot (the status check is shown after this list). (BZ#1136310)
- Previously, if a file was linked by one client and removed by another, the subsequent lookup operation from the first client did not override the cache in the absence of the unlinked name on the bricks, leading it to conclude that the file name still existed. With this fix, the stale inode mapping is deleted when the lookup operation fails with the ENOENT error on the first client. (BZ#1122649)
- Previously, when a file was deleted from a master volume after being renamed, the rename system call was processed internally as an UNLINK operation on the slave volume. As a result, if a file was created with the same name on the master volume, it was not synchronized to the slave volume and the file on the slave retained the old GFID. With this release, the rename system call is handled appropriately and the rename operation is synchronized from the master volume to the slave volume as expected. (BZ#1060683)
- Previously, if there was an attempt to delete a non-existent file, GFID-based access returned an ESTALE error code instead of ENOENT. The ESTALE error was not handled properly in Geo-replication, causing the Geo-replication worker thread to fail upon such an attempt. With this release, the error codes are handled appropriately and the Geo-replication worker thread no longer fails with an ESTALE error. (BZ#1129392)
- Previously, due to a race condition between the lookup operations of multiple rebalance processes spawned on each of the servers, each rebalance process migrated a file only if the file was present on the server where it was spawned. The rebalance process copied a file from the source to the destination and created a linkto file in the destination directory. During a small window of time, the lookup operation of another rebalance process assumed the linkto file to be a dangling reference and unlinked it, even though the linkto file was being converted to a data file during that window, leading to data loss. With this fix, the race conditions are handled properly and data loss is no longer observed (a way to identify linkto files is shown after this list). (BZ#1115937)
- Previously, 100% CPU utilization and continuous memory allocation made the Gluster process unusable, caused a very high load on the Red Hat Storage server, and possibly rendered it unresponsive to other requests. This was due to the parsing of a Remote Procedure Call (RPC) packet containing a continuation RPC record, which caused an infinite loop in the receiving Gluster process. With this release, such RPC records are handled appropriately and do not lead to service disruptions. (BZ#1146466)
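The following command sketches illustrate some of the scenarios above. They are provided for context only; the volume name (myvol), server names, mount points, user name, and brick paths are hypothetical and not part of this advisory.

For the setfacl memory leak (BZ#1125658), ACLs are typically applied over a Gluster NFS mount, for example:

```
# Mount the volume over the Gluster NFS server (NFSv3).
mount -t nfs -o vers=3 server1:/myvol /mnt/nfs

# Apply an ACL to many files; prior to this fix, doing so could leak
# memory in the Gluster NFS server process.
for f in /mnt/nfs/data/*; do
    setfacl -m u:alice:rw "$f"
done
```

For the rebalance restart fix (BZ#1136310), the rebalance operation status that the Gluster Management Daemon now consults can be viewed with:

```
gluster volume rebalance myvol status
```

For the rebalance race fix (BZ#1115937), a DHT linkto file on a brick can be recognized by its zero size, sticky bit, and the trusted.glusterfs.dht.linkto extended attribute:

```
# Run directly on a storage server against the brick path (example path).
ls -l /rhs/brick1/dir/file
getfattr -n trusted.glusterfs.dht.linkto -e text /rhs/brick1/dir/file
```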
Users of Red Hat Storage are advised to upgrade to these updated packages, which fix these issues.
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
This update is available via the Red Hat Network. Details on how to use the Red Hat Network to apply this update are available at https://access.redhat.com/articles/11258
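For example, on a system registered to Red Hat Network, the updated packages listed in this advisory can be applied with yum. This is an illustrative invocation only; the exact package set depends on what is installed on the system:

```
yum update glusterfs glusterfs-server glusterfs-fuse glusterfs-api \
    glusterfs-libs glusterfs-rdma glusterfs-geo-replication \
    redhat-storage-server
```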
Affected Products
| Product | Version | Arch |
|---|---|---|
| Red Hat Storage for Public Cloud (via RHUI) | 2.1 | x86_64 |
| Red Hat Gluster Storage Server for On-premise | 2.1 | x86_64 |
Updated Packages
- glusterfs-server-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-rdma-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-fuse-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-devel-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-api-devel-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-geo-replication-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- redhat-storage-server-2.1.5.0-1.el6rhs.src.rpm
- redhat-storage-server-2.1.5.0-1.el6rhs.noarch.rpm
- glusterfs-api-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-libs-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-debuginfo-3.4.0.70rhs-1.el6rhs.x86_64.rpm
- glusterfs-3.4.0.70rhs-1.el6rhs.src.rpm
Fixes
- BZ - 1122649
- BZ - 1144413
- BZ - 1146889
- BZ - 1146895
- BZ - 1154019
- BZ - 1157705
- BZ - 1159279
- BZ - 1159280
CVEs
(none)
References
- https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Technical_Notes/index.html
Additional information
- The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.
- Offline Security Data is available for integration with other systems. See the Offline Security Data API to get started.