How to set quota for emptyDir volume usage on an OpenShift Node?
Environment
- Red Hat OpenShift Container Platform (RHOCP) 3
Issue
- How do I limit emptyDir volumes on an OpenShift node?
- Why is /var/lib/origin/openshift.local.volumes full?
- How to check the remaining local storage space under a quota?
Resolution
Note: For OpenShift 4 clusters, refer to How to limit ephemeral storage in OpenShift 4.
By default, OpenShift does not limit the space an emptyDir volume in a pod can use. There is preliminary support for local emptyDir volume quotas: set volumeConfig.localQuota.perFSGroup in the node-config.yaml file to a value representing the desired quota per FSGroup, per node (e.g. 1Gi, 512Mi, etc.). This currently requires that the volumeDirectory be on an XFS filesystem mounted with the gquota option, and that the matching security context constraint's fsGroup type be set to MustRunAs.
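For context, the quota applies per FSGroup: all emptyDir volumes used by pods running with the same fsGroup count against that group's quota on the node. Below is a hypothetical pod spec illustrating the kind of volume the quota covers; the pod name, image, and mount path are illustrative only (the fsGroup value matches one of the group IDs shown in the quota report later in this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-quota-test          # illustrative name
spec:
  securityContext:
    fsGroup: 1000020000              # assigned via the SCC's MustRunAs strategy
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7   # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data               # mount point used in the dd test below
  volumes:
  - name: scratch
    emptyDir: {}                     # storage subject to the per-FSGroup quota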
- Example node-config.yaml with volumeConfig (OCP 3.9 and below):
# cat /etc/origin/node/node-config.yaml
volumeConfig:
  localQuota:
    perFSGroup: "5G"
  volumeDirectory: /var/lib/origin/openshift.local.volumes
# systemctl restart atomic-openshift-node
OCP 3.10+: due to a bug, this file must be created manually in 3.10 and 3.11.
# cat > /etc/origin/node/volume-config.yaml << EOF
apiVersion: kubelet.config.openshift.io/v1
kind: VolumeConfig
localQuota:
  perFSGroup: 10
EOF
# systemctl restart atomic-openshift-node
- Example fstab entry for volumeDirectory
# cat /etc/fstab
/dev/xyz / xfs defaults 0 0
/dev/vdc /var/lib/origin/openshift.local.volumes/ xfs defaults,gquota 0 0
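Before restarting the node service, the fstab entry can be sanity-checked. A minimal sketch, assuming the fstab layout shown above (the check_gquota function name is ours; it only greps the file for the option, and does not verify the filesystem is actually mounted with it — note that dots in the directory argument are treated as regex wildcards, which is acceptable for a sanity check):

```shell
#!/bin/sh
# check_gquota FSTAB_FILE MOUNT_POINT
# Prints "ok" if the xfs entry for MOUNT_POINT carries the gquota or grpquota
# mount option, "missing" otherwise. Pure text check against the given file.
check_gquota() {
  fstab=$1
  dir=$2
  if grep -E "[[:space:]]${dir}/?[[:space:]]+xfs[[:space:]]+[^[:space:]]*(gquota|grpquota)" "$fstab" >/dev/null 2>&1; then
    echo ok
  else
    echo missing
  fi
}
```

Example usage: `check_gquota /etc/fstab /var/lib/origin/openshift.local.volumes`.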
- If gquota gives an error, grpquota can be used instead.
- To check whether the XFS group quota is enabled, run:
# xfs_quota -xc 'state' /var/lib/origin/openshift.local.volumes
User quota state on /var/lib/origin/openshift.local.volumes (/dev/vdc)
Accounting: OFF
Enforcement: OFF
Inode: #0 (0 blocks, 0 extents)
Group quota state on /var/lib/origin/openshift.local.volumes (/dev/vdc)
Accounting: ON
Enforcement: ON
Inode: #67 (3 blocks, 3 extents)
Project quota state on /var/lib/origin/openshift.local.volumes (/dev/vdc)
Accounting: OFF
Enforcement: OFF
Inode: #67 (3 blocks, 3 extents)
Blocks grace time: [7 days]
Inodes grace time: [7 days]
Realtime Blocks grace time: [7 days]
or
# xfs_quota -xc 'print' /var/lib/origin/openshift.local.volumes
Filesystem Pathname
/var/lib/origin/openshift.local.volumes /dev/vdc (gquota)
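The state check above can be scripted. A minimal sketch (function name is ours), assuming the `xfs_quota -xc 'state'` output format shown above, that reads the output on stdin and reports whether group quota accounting and enforcement are both ON:

```shell
#!/bin/sh
# group_quota_enabled: reads `xfs_quota -xc 'state'` output on stdin and
# prints "enabled" when the "Group quota state" section shows both
# "Accounting: ON" and "Enforcement: ON", otherwise "disabled".
group_quota_enabled() {
  awk '
    /^Group quota state/          { in_group = 1; next }
    /^(User|Project) quota state/ { in_group = 0 }
    in_group && /Accounting: ON/  { acct = 1 }
    in_group && /Enforcement: ON/ { enf = 1 }
    END { if (acct && enf) print "enabled"; else print "disabled" }
  '
}
```

Example usage: `xfs_quota -xc 'state' /var/lib/origin/openshift.local.volumes | group_quota_enabled`.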
- Report quota
# xfs_quota -xc 'report -h' /var/lib/origin/openshift.local.volumes
Group quota on /var/lib/origin/openshift.local.volumes (/dev/vdc)
Blocks
Group ID Used Soft Hard Warn/Grace
---------- ---------------------------------
root 16K 0 0 00 [------]
#1000020000 121M 512M 512M 00 [------]
#1000070000 23M 512M 512M 00 [------]
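Each `#<id>` row in the report corresponds to an FSGroup. A small sketch (function name is ours), assuming the report layout shown above, that filters `xfs_quota -xc 'report -h'` output down to the group IDs that actually have a hard limit set:

```shell
#!/bin/sh
# quota_groups: reads `xfs_quota -xc 'report -h'` output on stdin and prints
# each FSGroup row (lines starting with '#') that has a nonzero hard limit,
# formatted as: <group-id> used=<used> hard=<hard-limit>
quota_groups() {
  awk '$1 ~ /^#/ && $4 != "0" { print $1, "used=" $2, "hard=" $4 }'
}
```

Example usage: `xfs_quota -xc 'report -h' /var/lib/origin/openshift.local.volumes | quota_groups`.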
- Display the usage of the mounted volume:
# xfs_quota -xc 'free -h' /var/lib/origin/openshift.local.volumes
Filesystem Size Used Avail Use% Pathname
/dev/vdc 20.0G 33.5M 20.0G 0% /var/lib/origin/openshift.local.volumes
Diagnostic Steps
- To make sure the quota for emptyDir volumes is respected, check that the perFSGroup parameter in each cluster node's node-config.yaml is set to the same quota size.
volumeConfig:
  localQuota:
    perFSGroup: "512M"
  volumeDirectory: /var/lib/origin/openshift.local.volumes
- After the pod is running, a further test can be done:
# oc rsh pod-name
Now create a file bigger than the quota size in the mount point of the emptyDir volume (set up in the pod definition).
For example, the emptyDir volume is mounted at /data in the pod.
# cd /data
# dd if=/dev/zero of=testquota bs=1M count=1000
dd will create a file of 512M (the quota size) and then fail with a "Disk quota exceeded" error, confirming that the quota is being enforced.
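The manual dd test above can be wrapped in a small script. A sketch under the assumptions of this article (the test_quota function name and its defaults are ours; run it inside the pod against the emptyDir mount point):

```shell
#!/bin/sh
# test_quota MOUNT_POINT [COUNT_MB]
# Tries to write COUNT_MB megabytes (default 1000) into MOUNT_POINT and
# reports whether the write was cut short by a disk quota.
test_quota() {
  mnt=${1:-/data}
  count=${2:-1000}
  err=$(mktemp)
  if dd if=/dev/zero of="$mnt/testquota" bs=1M count="$count" 2>"$err"; then
    echo "quota NOT enforced (wrote ${count}M)"
  elif grep -qi 'quota' "$err"; then
    echo "quota enforced"
  else
    echo "dd failed for another reason:"; cat "$err"
  fi
  rm -f "$mnt/testquota" "$err"
}
```

Example usage inside the pod: `test_quota /data 1000`.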
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.