Gluster block PVs are not updated with new IPs after gluster node replacement.
Environment
- Red Hat OpenShift Container Platform 3.11
Issue
- After replacing the gluster nodes for CNS, the existing gluster block-based PVs are not updated with the replacement nodes' IPs.
Resolution
- There's a known bug in how a gluster block-based PV is updated with the new IPs after replacing the gluster nodes; a fix is in progress but not yet available.
- However, the workaround below can be tried and should work in most cases.
- Before performing the workaround steps, ensure that the node replacement and any further maintenance have been carried out properly.
- The latest errata provides commands that play a vital role in this workaround, so make sure all packages are updated.
Workaround
After a successful node replacement, perform the steps below if not already done.
- Patch the gluster block volume endpoints in the heketi pod
$ heketi-cli volume endpoint patch <VOLUMEID>
- Patch cluster EndPoints
$ oc patch ep heketi-db-storage-endpoints -p '{"subsets": [{"addresses":[{"ip":"XX.XX.XX.XX"}],"ports":[{"port":1}]},{"addresses":[{"ip":"XX.XX.XX.XX"}],"ports":[{"port":1}]},{"addresses":[{"ip":"XX.XX.XX.XX"}],"ports":[{"port":1}]}]}'
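Writing the patch payload above by hand is error-prone when there are several nodes. A minimal sketch of generating it from the list of new node IPs (the `build_ep_patch` helper name is hypothetical, not part of heketi or oc):

```shell
# Build the endpoints patch JSON from a list of new gluster node IPs.
# Produces the same structure as the manual patch above: one subset per IP.
build_ep_patch() {
  local subsets="" sep="" ip
  for ip in "$@"; do
    subsets="${subsets}${sep}{\"addresses\":[{\"ip\":\"$ip\"}],\"ports\":[{\"port\":1}]}"
    sep=","
  done
  printf '{"subsets": [%s]}' "$subsets"
}

# Usage (substitute your real new node IPs):
# oc patch ep heketi-db-storage-endpoints -p "$(build_ep_patch 10.0.0.11 10.0.0.12 10.0.0.13)"
```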
- Verify that the endpoint now shows the new IPs.
$ heketi-cli blockvolume info <block-volume-id>
Once the above checks are done, perform the following:
- Scale down the pod that is using the gluster block PV
$ oc scale dc/<dcname> --replicas=0
- Patch the PV to set its reclaim policy to Retain, so that the underlying volume is not deleted along with the PV
$ oc patch pv <PV-Name> -p '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
- Take a backup of both the PV and the respective PVC YAMLs.
$ oc get pv <pvname> -oyaml &> /tmp/<pvname>.yaml
$ oc get pvc <pvcname> -oyaml &> /tmp/<pvcname>.yaml
- Delete both the PV and the associated PVC.
$ oc delete pv <pvname>
$ oc delete pvc <pvcname>
- Edit the backed-up PVC YAML
$ vi /tmp/<pvcname>.yaml //Remove "uid", "creationTimestamp", "resourceVersion" from the backed-up file manually
OR
$ sed -i '/uid/d;/creationTimestamp/d;/resourceVersion/d' /tmp/<pvcname>.yaml
- Edit the backed-up PV YAML, replacing the old IPs with the new ones, in addition to the step below
$ vi /tmp/<pvname>.yaml //Remove "uid", "creationTimestamp", "resourceVersion" from the backed-up file manually
OR
$ sed -i '/uid/d;/creationTimestamp/d;/resourceVersion/d' /tmp/<pvname>.yaml
and manually remove the claimRef section from the PV:
---snip---
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvcblock
    namespace: glusttest
    resourceVersion: "1200861"
    uid: 806fb087-7d9c-11ea-9df7-001a4a000126
---snip---
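The manual edits above can also be scripted. A minimal sketch, assuming two-space YAML indentation as in the snippet above (the function names are illustrative, not standard tools):

```shell
# Drop "uid", "creationTimestamp" and "resourceVersion" lines wherever they occur.
strip_metadata() {
  sed -i '/uid:/d;/creationTimestamp:/d;/resourceVersion:/d' "$1"
}

# Remove the claimRef block: the "claimRef:" line plus every more-deeply
# indented line that immediately follows it.
strip_claimref() {
  awk '
    /^  claimRef:/ { skip=1; next }   # start of the claimRef block
    skip && /^    / { next }          # deeper-indented lines belong to it
    { skip=0; print }                 # a shallower line ends the block
  ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Usage:
# strip_metadata /tmp/<pvname>.yaml && strip_claimref /tmp/<pvname>.yaml
```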
- Create the PV and PVC from the modified backup files above. Make sure the PV is created first so that the intended PVC binds to it.
$ oc create -f <pvname>.yaml
$ oc create -f <pvcname>.yaml
- Scale up the pod
$ oc scale dc/<dcname> --replicas=1
Diagnostic Steps
- After performing a gluster node replacement, all the existing gluster block PVs will still have the old nodes' IP addresses.
- Compare them by following the steps below:
$ oc get pod -n <gluster-project> -owide //Check for IP address of the gluster pods
$ oc describe pv <gluster-block-pvname> //Check for portal IP addresses in any existing gluster block PV
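To spot stale references quickly, the describe output can be checked against the old node IPs; a minimal sketch (the `check_old_ips` helper is illustrative, not a standard tool):

```shell
# Print any of the given old node IPs that still appear in a saved
# `oc describe pv` output file.
#   $1          = file containing the describe output
#   $2, $3, ... = old gluster node IPs to look for
check_old_ips() {
  local file="$1"; shift
  local ip
  for ip in "$@"; do
    grep -qF "$ip" "$file" && echo "stale IP still referenced: $ip"
  done
  return 0
}

# Usage:
# oc describe pv <gluster-block-pvname> > /tmp/pv-describe.txt
# check_old_ips /tmp/pv-describe.txt 10.0.0.1 10.0.0.2 10.0.0.3
```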
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.