ODF-4.14 | OpenShift Data Foundation provisioned NFS/PersistentVolume sharing between Namespaces - Developer Preview
Important: A Developer Preview feature is subject to Developer Preview support limitations. Developer Preview features are not intended to be run in production environments. Clusters deployed with Developer Preview features are considered development clusters and are not supported through the Red Hat Customer Portal case management system. Developer Preview features are meant for customers who are willing to evaluate new products or releases in an early stage of product development. If you need assistance with Developer Preview features, reach out to the ocs-devpreview@redhat.com mailing list and a member of the Red Hat development team will assist you as quickly as possible based on availability and work schedules. For more details about the support scope, refer to the KCS article on Developer Preview support limitations.
Goal
When OpenShift Data Foundation is used to dynamically create an NFS-export, the PersistentVolumeClaim is used to access the NFS-export in a Pod. It is not immediately possible to use the same NFS-export for a different application in another OpenShift Namespace. This article describes the steps that are required to create a second PersistentVolume that can be Bound to a second PersistentVolumeClaim in another OpenShift Namespace.
Limitations and Restrictions
OpenShift is not aware that a dynamically provisioned NFS-export is used for more than one PersistentVolume(Claim). When the PersistentVolumeClaim is deleted in the Namespace where it was created, by default the NFS-export is removed as well. If the NFS-export is shared between Namespaces, other applications using the now-deleted NFS-export will no longer work as intended.
By following the steps in this article, the reclaim policy of the original PersistentVolume is changed to Retain, so that deleting the PersistentVolumeClaim does not trigger a deletion of the NFS-export. The administrator who configured the NFS-export for sharing between multiple Namespaces is responsible for keeping track of its usage.
Sharing a dynamically provisioned NFS-export across multiple Namespaces
With OpenShift Data Foundation 4.13, the support for dynamic provisioning of NFS-exports is enabled by default. Earlier versions require manual steps to enable the NFS components. Detailed instructions to manually enable the NFS components can be found in chapter 13 of the Managing and allocating storage resources guide.
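Before creating a PersistentVolumeClaim, it can be useful to confirm that the NFS components are enabled. A quick sanity check (a standard oc invocation, shown here as a suggestion rather than a required step) is to verify that the NFS StorageClass used throughout this article exists:

```shell
# If this StorageClass is missing, the NFS components
# of OpenShift Data Foundation are not enabled yet.
oc get storageclass ocs-storagecluster-ceph-nfs
```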
Create a dynamically provisioned NFS-export
- Create a PersistentVolumeClaim (app-data) using the StorageClass (ocs-storagecluster-ceph-nfs) that is provided by OpenShift Data Foundation:
$ oc -n my-first-app create -f- << EOY
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-nfs
EOY
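Once the provisioner has created the NFS-export, the PersistentVolumeClaim reports a Bound status. One way to wait for this (a standard oc capability, not specific to this article; the 60s timeout is an arbitrary choice) is:

```shell
# Block until the PVC reaches the Bound phase, or fail after 60s
oc -n my-first-app wait pvc/app-data \
  --for=jsonpath='{.status.phase}'=Bound --timeout=60s
```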
Protect the PersistentVolume from accidental deletion
- Identify the PersistentVolume (pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e) by inspecting the PersistentVolumeClaim (app-data):
$ oc -n my-first-app get pvc/app-data
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
app-data Bound pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e 1Gi RWX ocs-storagecluster-ceph-nfs 93s
- Modify the PersistentVolume (pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e) with the oc patch command to set the persistentVolumeReclaimPolicy to Retain:
$ oc patch -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' pv/pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e
Now that the persistentVolumeReclaimPolicy has been set to Retain, the NFS-export backing the PersistentVolume (pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e) will no longer be deleted when the PersistentVolumeClaim (app-data) is deleted.
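To confirm that the patch was applied, the reclaim policy can be read back directly (a plain oc get with a JSONPath expression):

```shell
# Should print: Retain
oc get pv/pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e \
  -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'
```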
Copy the PersistentVolume
A PersistentVolume can only be Bound to a single PersistentVolumeClaim. The existing NFS-export that is providing the PersistentVolume (pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e) can be used by more than one client, or multiple PersistentVolumes. In order to provide access to the NFS-export in a different Namespace, there needs to be a PersistentVolume (cluster scoped) for a PersistentVolumeClaim (Namespace scoped) to Bind against.
- Save the state of the PersistentVolume (pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e) in a file (app-data-pv.yaml), so that it can be used to create a new PersistentVolume that refers to the existing NFS-export:
$ oc get --show-managed-fields=false -oyaml pv/pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e > app-data-pv.yaml
- Edit the app-data-pv.yaml file: change the name in the metadata and update the details under claimRef. Then remove the remaining parts of the metadata and claimRef sections, and the whole status section, so that the following remains:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data-for-2nd-app
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: shared-app-data
    namespace: my-2nd-app
  csi:
    controllerExpandSecretRef:
      name: rook-csi-cephfs-provisioner
      namespace: openshift-storage
    driver: openshift-storage.nfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-node
      namespace: openshift-storage
    volumeAttributes:
      backingSnapshot: "false"
      clusterID: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
      nfsCluster: ocs-storagecluster-cephnfs
      server: ocs-storagecluster-cephnfs-service
      share: /0001-0011-openshift-storage-0000000000000001-345d3692-d7e7-47f6-9ce3-a32178bdafc1
      storage.kubernetes.io/csiProvisionerIdentity: 1687433484920-8081-openshift-storage.nfs.csi.ceph.com
      subvolumeName: nfs-export-345d3692-d7e7-47f6-9ce3-a32178bdafc1
      subvolumePath: /volumes/csi/nfs-export-345d3692-d7e7-47f6-9ce3-a32178bdafc1/6398a8a8-d976-4484-977a-4bc2e7db7486
      volumeNamePrefix: nfs-export-
    volumeHandle: 0001-0011-openshift-storage-0000000000000001-345d3692-d7e7-47f6-9ce3-a32178bdafc1
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ocs-storagecluster-ceph-nfs
  volumeMode: Filesystem
The example above contains the Namespace (my-2nd-app) and the name of the PersistentVolumeClaim (shared-app-data) where this new PersistentVolume is expected to Bind against.
- After editing, create the new PersistentVolume (app-data-for-2nd-app):
$ oc create -f app-data-pv.yaml
persistentvolume/app-data-for-2nd-app created
$ oc get pv/app-data-for-2nd-app
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
app-data-for-2nd-app 1Gi RWX Retain Available my-2nd-app/shared-app-data ocs-storagecluster-ceph-nfs 11s
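Both PersistentVolumes now refer to the same NFS-export. As an optional check (using the standard custom-columns output of oc get), the share attribute of both volumes can be compared; the SHARE column should show the same path for both:

```shell
# Compare the NFS share path of the original and the copied PV
oc get pv pvc-b83f3b81-93d4-461c-8e50-fb45dc902c9e app-data-for-2nd-app \
  -o custom-columns=NAME:.metadata.name,SHARE:.spec.csi.volumeAttributes.share
```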
Create a PersistentVolumeClaim in another Namespace
The new PersistentVolumeClaim (shared-app-data) uses static provisioning to Bind with the just created PersistentVolume (app-data-for-2nd-app). Save the following as my-2nd-app-pvc.yaml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-app-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: app-data-for-2nd-app
Creating the PersistentVolumeClaim (shared-app-data) in the Namespace (my-2nd-app) should result in a Bound state, against the PersistentVolume (app-data-for-2nd-app):
$ oc -n my-2nd-app create -f my-2nd-app-pvc.yaml
persistentvolumeclaim/shared-app-data created
$ oc get pvc/shared-app-data
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
shared-app-data Bound app-data-for-2nd-app 1Gi RWX gp3-csi 10s
Note that the StorageClass of the PVC is listed as gp3-csi in the output above. This is the cluster's default StorageClass, which was filled in because the PersistentVolumeClaim did not set storageClassName. It has no functional effect here, because the claim binds statically through volumeName.
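To verify that the NFS-export is actually usable from the second Namespace, a Pod can mount the new PersistentVolumeClaim. The following is a minimal example in the same heredoc style as above; the Pod name and container image are illustrative choices, not part of the original procedure:

```shell
$ oc -n my-2nd-app create -f- << EOY
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-test
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-app-data
EOY
```

Once the Pod is Running, files written under /mnt/data are visible to workloads in my-first-app that mount the original PersistentVolumeClaim (app-data), since both claims are backed by the same NFS-export.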