How to change the default PV limit in OpenShift Container Storage
Starting with OCS 3.11.4, the maximum supported number of PVs in a 3-node cluster increases from 1000 to 2000. The default limit remains 1000, but users can tune their cluster using this article to go beyond 1000 (up to 2000).
Follow the steps below to raise the limit to 2000.
Tuning the nodes
Kernel parameters need to be tuned on the nodes that host the glusterfs daemonset pods.
- For each glusterfs daemonset deployed, find the nodes that belong to it. In the example below, there are two glusterfs daemonsets: one for infra and one for app storage.
  [root@master1 ~]# oc get ds --all-namespaces | grep glusterfs
  app-storage     glusterfs-storage    3  3  3  3  3  glusterfs=storage-host   4d
  infra-storage   glusterfs-registry   3  3  3  3  3  glusterfs=registry-host  4d

  [root@master1 ~]# oc get nodes --show-labels | grep 'glusterfs=storage-host'
  compute1   Ready   compute   4d   v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=compute1,logging-infra-fluentd=true,node-role.kubernetes.io/compute=true
  compute2   Ready   compute   4d   v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=compute2,logging-infra-fluentd=true,node-role.kubernetes.io/compute=true
  compute3   Ready   compute   4d   v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=compute3,logging-infra-fluentd=true,node-role.kubernetes.io/compute=true

  [root@master1 ~]# oc get nodes --show-labels | grep 'glusterfs=registry-host'
  infra1   Ready   infra   4d   v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=registry-host,kubernetes.io/hostname=infra1,logging-infra-fluentd=true,node-role.kubernetes.io/infra=true
  infra2   Ready   infra   4d   v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=registry-host,kubernetes.io/hostname=infra2,logging-infra-fluentd=true,node-role.kubernetes.io/infra=true
  infra3   Ready   infra   4d   v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=registry-host,kubernetes.io/hostname=infra3,logging-infra-fluentd=true,node-role.kubernetes.io/infra=true
- On each node
  - Add the following lines to /etc/sysctl.d/99-OCS.conf
      net.ipv4.tcp_max_syn_backlog = 2048
      net.core.somaxconn = 2048
  - Run the command
      sysctl -p
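The two node-tuning steps above can be collected into a small helper. This is only a sketch: the `write_ocs_sysctl` function name and its optional directory argument are illustrative (the argument exists only so the snippet can be exercised without root); on a real node run it with no argument, as root.

```shell
#!/bin/sh
# Sketch of the sysctl tuning above. Run write_ocs_sysctl with no
# argument, as root, on each glusterfs node so that it writes
# /etc/sysctl.d/99-OCS.conf and applies the values.
write_ocs_sysctl() {
    dir="${1:-/etc/sysctl.d}"
    cat > "$dir/99-OCS.conf" <<'EOF'
net.ipv4.tcp_max_syn_backlog = 2048
net.core.somaxconn = 2048
EOF
    # Only root can actually apply the values; warn instead of failing
    # so the conf file is still written.
    if [ "$(id -u)" -eq 0 ]; then
        sysctl -p "$dir/99-OCS.conf" || echo "warning: could not apply sysctl values"
    fi
}
```

Because the settings live in /etc/sysctl.d, they survive a reboot; `sysctl -p` only applies them immediately.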
Add an init container to the glusterfs daemonsets
- For each glusterfs daemonset (glusterfs-storage and glusterfs-registry in the example above), run oc edit ds
- Insert the following under spec.template.spec
      initContainers:
      - name: delay-init
        image: busybox
        command: ['sh', '-c', 'sleep 300']
- Delete the pods one after the other, waiting in between for the previous pod to come back up and for heals to complete.
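The rolling restart above can be sketched as a loop. Everything cluster-specific here is an assumption to adapt: the `app-storage` namespace, the `glusterfs=storage-pod` pod label, and the heal check, which should be repeated until every volume reports zero entries left to heal.

```shell
# Sketch of a rolling restart of the glusterfs pods. The namespace
# (app-storage) and pod label (glusterfs=storage-pod) are assumptions;
# adjust them to match your daemonset.
ns=app-storage
for pod in $(oc get pods -n "$ns" -l glusterfs=storage-pod -o name); do
    oc delete -n "$ns" "$pod"
    # Wait for the replacement pod to become Ready before touching the next one.
    oc wait -n "$ns" --for=condition=Ready pod -l glusterfs=storage-pod --timeout=600s
    # Inspect heal status from one of the running pods; do not continue
    # until every volume reports no entries left to heal.
    peer=$(oc get pods -n "$ns" -l glusterfs=storage-pod -o name | head -1)
    oc exec -n "$ns" "${peer#pod/}" -- sh -c \
        'for v in $(gluster volume list); do gluster volume heal "$v" info; done'
done
```

Repeat the same loop for the registry daemonset's namespace and label.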
Changing the limit in Heketi
- Edit each of the heketi DeploymentConfigs (dc) to set 2000 as the limit for file volumes
- oc edit dc heketi-registry
- Add the environment variable HEKETI_GLUSTER_MAX_VOLUMES_PER_CLUSTER=2000 to the container spec.
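As an alternative to editing the dc by hand, the same variable can be set non-interactively with `oc set env`; the dc names and namespaces below assume the example cluster shown earlier and may differ in yours.

```shell
# Set the volume limit on both heketi DeploymentConfigs. The dc names and
# namespaces are taken from the example cluster above; verify yours with
# 'oc get dc --all-namespaces | grep heketi' first.
oc set env dc/heketi-registry -n infra-storage HEKETI_GLUSTER_MAX_VOLUMES_PER_CLUSTER=2000
oc set env dc/heketi-storage  -n app-storage   HEKETI_GLUSTER_MAX_VOLUMES_PER_CLUSTER=2000
```

Changing the environment triggers a new deployment of each heketi dc, so the new limit takes effect once the pods roll over.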
Example
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      glusterfs: heketi-registry-pod
    strategy:
      activeDeadlineSeconds: 21600
      recreateParams:
        timeoutSeconds: 600
      resources: {}
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          glusterfs: heketi-registry-pod
          heketi: registry-pod
        name: heketi-registry
      spec:
        containers:
        - env:
          - name: HEKETI_GLUSTER_MAX_VOLUMES_PER_CLUSTER
            value: "2000"