How to provision local volumes only for infra nodes using the local-storage-operator
Environment
- OpenShift Container Platform
- 4.x (>=4.2)
Issue
- How can I create local volumes using the local-storage-operator on only a certain group of machines, without having to hardcode the hostnames or use node tolerations as described in the official documentation?
Resolution
- As a prerequisite, you should have some nodes already marked with the infra (or another) label. You can achieve this by creating new machineconfigpools as documented here, or simply by cloning the worker machinesets and adding the proper node-role.kubernetes.io/infra label, for example:
$ oc get machinesets -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
pamoedom-dfp87-infra-eu-west-3a 1 1 1 1 3h31m
pamoedom-dfp87-worker-eu-west-3a 1 1 1 1 3d23h
pamoedom-dfp87-worker-eu-west-3b 1 1 1 1 3d23h
pamoedom-dfp87-worker-eu-west-3c 1 1 1 1 3d23h
$ oc get machinesets pamoedom-dfp87-infra-eu-west-3a -o yaml -n openshift-machine-api
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: "2020-04-14T10:45:35Z"
  generation: 5
  labels:
    machine.openshift.io/cluster-api-cluster: pamoedom-dfp87
  name: pamoedom-dfp87-infra-eu-west-3a
  namespace: openshift-machine-api
  resourceVersion: "1560270"
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/pamoedom-dfp87-infra-eu-west-3a
  uid: 4491e126-3583-49e5-8006-18420b33ae59
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: pamoedom-dfp87
      machine.openshift.io/cluster-api-machineset: pamoedom-dfp87-infra-eu-west-3a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: pamoedom-dfp87
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: pamoedom-dfp87-infra-eu-west-3a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          ami:
            id: ami-0658bcfda04098635
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
          - ebs:
              iops: 0
              volumeSize: 200
              volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: pamoedom-dfp87-worker-profile
          instanceType: m5.xlarge
          kind: AWSMachineProviderConfig
          metadata:
            creationTimestamp: null
          placement:
            availabilityZone: eu-west-3a
            region: eu-west-3
          publicIp: null
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - pamoedom-dfp87-worker-sg
          subnet:
            filters:
            - name: tag:Name
              values:
              - pamoedom-dfp87-private-eu-west-3a
          tags:
          - name: kubernetes.io/cluster/pamoedom-dfp87
            value: owned
          userDataSecret:
            name: worker-user-data
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
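Alternatively, if the node already exists and you prefer not to manage it through a machineset, the same label can be applied directly with oc. A minimal sketch (the node name below is an example from this cluster; labels applied this way are not reconciled by the machine-api and may need to be re-applied if the node is replaced):

```shell
# Add the infra role label to an existing node
oc label node ip-10-0-128-240.eu-west-3.compute.internal node-role.kubernetes.io/infra=""

# Verify which nodes now carry the label
oc get nodes -l node-role.kubernetes.io/infra=
```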
- Once the infra machines are ready and have the proper local disk attached, confirm the device name, for example:
$ oc get nodes | grep infra
ip-10-0-128-240.eu-west-3.compute.internal Ready infra,worker 18m v1.16.2
$ oc debug node/ip-10-0-128-240.eu-west-3.compute.internal -- bash -c "fdisk -l"
Starting pod/ip-10-0-128-240eu-west-3computeinternal-debug ...
To use host binaries, run `chroot /host`
[...]
Disk /dev/nvme1n1: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
- Install the local-storage-operator as documented here and create the proper LocalVolume with the following spec to match only the infra nodes:
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: node-role.kubernetes.io/infra
        operator: Exists
  storageClassDevices:
  - devicePaths:
    - /dev/nvme1n1
    fsType: ext4
    storageClassName: local-sc
    volumeMode: Filesystem
NOTE: Remember to change the devicePaths list and the volumeMode if necessary.
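As a sketch of the volumeMode variation, a raw-block version of the same LocalVolume would drop fsType and set volumeMode to Block (the device path and class name here are illustrative, not taken from this cluster):

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block-disks
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: node-role.kubernetes.io/infra
        operator: Exists
  storageClassDevices:
  - devicePaths:
    - /dev/nvme1n1          # adjust to your actual device
    storageClassName: local-block-sc
    volumeMode: Block       # raw block device; no fsType needed
```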
- Confirm that the local-disks-local-diskmaker and local-disks-local-provisioner pods are placed only on the expected nodes:
$ oc get pods -o wide -n local-storage
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
local-disks-local-diskmaker-7zx56 1/1 Running 0 17m 10.131.2.5 ip-10-0-128-240.eu-west-3.compute.internal <none> <none>
local-disks-local-provisioner-5hjf4 1/1 Running 0 17m 10.131.2.4 ip-10-0-128-240.eu-west-3.compute.internal <none> <none>
local-storage-operator-dcc4cbfb8-fk4nc 1/1 Running 0 29m 10.128.2.21 ip-10-0-167-164.eu-west-3.compute.internal <none> <none>
- (Optional) Confirm the creation of the PersistentVolume and the StorageClass:
$ oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-87daeedc 2Gi RWO Delete Available local-sc 18m
$ oc get sc
NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 3d23h
local-sc kubernetes.io/no-provisioner 163m
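To consume one of these local PVs, a workload only needs a PersistentVolumeClaim referencing the local-sc StorageClass. A minimal sketch (the claim name is illustrative; the 2Gi request matches the PV size shown above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-sc
  resources:
    requests:
      storage: 2Gi
```

Because local-sc uses the kubernetes.io/no-provisioner provisioner, the claim can only bind to a pre-created matching PV; it will not trigger dynamic provisioning.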
Root Cause
- The local-storage-operator documentation only contemplates hardcoding the node hostnames or using tolerations in order to provision the local volumes; however, it can be necessary to select the nodes by a label instead.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.