Does GFS or GFS2 lock all the directories in the file-path for a file when it is created, deleted, or modified?
Environment
- Red Hat Enterprise Linux Server 5 (with the High Availability and Resilient Storage Add Ons)
- Red Hat Enterprise Linux Server 6 (with the High Availability and Resilient Storage Add Ons)
- Red Hat Enterprise Linux Server 7 (with the High Availability and Resilient Storage Add Ons)
- Global File System (GFS)
- Global File System 2 (GFS2)
Issue
- Does GFS or GFS2 lock all the directories in the file-path for a file when it is created, deleted, or modified?
- If one process wants to write to a file on a GFS or GFS2 file-system, how many exclusive locks will be acquired?
Resolution
When a file is created or deleted, an exclusive lock (EX) is taken on the parent directory of that file. When a file is modified, a shared lock (SH) is used on the parent directory to look up the file; once the file is found, an EX lock is taken on the file itself (its inode) and the parent directory is not locked.
If you are doing lots of reads/writes to a particular directory from multiple nodes, then lock contention will come into play when:
- The cluster nodes want to write to the same file.
- The cluster nodes are creating or deleting files in the same directory.
- One cluster node is writing to a file while other cluster nodes are reading from it (all the readers will have to wait on the writer).
- One cluster node is creating or deleting files or directories in a directory whose contents other nodes are listing (with the ls command, for example) or scanning (with the find command, for example). Recursive operations on a directory can cause performance issues.
- Modifying the parent directory itself requires the parent directory to be locked, so no files in it can be created, deleted, or modified until the directory modification completes.
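Contention of this kind shows up as waiting holders in the per-filesystem glocks file under debugfs (see the tracepoints article linked below). The following is a minimal sketch for counting waiters; the sample dump lines are illustrative assumptions modeled on the glocks file format, so verify the field layout on your own kernel before depending on it:

```shell
# Hypothetical excerpt of a glock dump: "G:" lines describe glocks,
# indented "H:" lines describe holders; a holder whose flags field
# (f:) contains W is waiting for the lock to be granted.
glock_dump='G:  s:EX n:2/6034f f:yIq t:EX d:EX/0 a:0 v:0 r:3 m:200
 H: s:EX f:W e:0 p:3341 [touch] gfs2_create
G:  s:SH n:2/75320 f:I t:SH d:EX/0 a:0 v:0 r:3 m:200'

# Count holder lines that are still waiting (flags contain W).
waiters=$(printf '%s\n' "$glock_dump" | grep -c ' f:W')
echo "holders waiting: $waiters"
```

On a real system the dump would be read from `/sys/kernel/debug/gfs2/<fsname>/glocks` (path assumed, with debugfs mounted) rather than from a here-string.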
If all the nodes are writing to different files in the same directory (without creating or deleting any), there should not be any lock contention.
If one or more cluster nodes are examining all of a directory's contents, creating new files, and/or deleting files, those operations all compete for glocks on the directory, as well as on any files where their activity overlaps. Workloads in which one or more nodes write to a directory while many others read from it force the readers to frequently drop their cached copy of the directory and reread it, so that the exclusive glock can be granted to the writer. We recommend, in general, splitting workloads into their own directories so that cluster nodes are not competing for glocks.
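The recommendation above can be sketched as giving each node its own working subdirectory, so that file creation only takes the EX glock on that node's directory rather than on a shared parent. The mount point and directory layout here are hypothetical examples, not part of any GFS2 tooling:

```shell
# Hypothetical layout: on a real cluster GFS2_MOUNT would be the GFS2
# mount point (e.g. /mnt/gfs2); here it falls back to a temp dir so
# the sketch runs anywhere.
MOUNT=${GFS2_MOUNT:-$(mktemp -d)}
NODE=$(uname -n)

# Each node creates files only under its own subdirectory, so the EX
# lock taken on file creation is on this node's directory alone.
mkdir -p "$MOUNT/work/$NODE"
touch "$MOUNT/work/$NODE/output.log"
echo "writing under: $MOUNT/work/$NODE"
```

With this layout, nodes still share the filesystem but never contend for the same parent-directory glock during normal create/delete activity.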
For more information:
- What are some examples of GFS & GFS2 workloads that should be avoided?
- How can I use the GFS2 tracepoints and debugfs glocks?
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.