Migrate a CephFS PVC to a CephRBD (CephBlockPool) PVC - OpenShift Data Foundation (ODF)


Environment

Red Hat OpenShift Container Storage (RHOCS) 4.x
Red Hat OpenShift Data Foundation (RHODF) 4.x

Issue

In some instances, it may be necessary to migrate PVCs from the cephfs storage class to ceph-rbd. There are several scenarios where this applies, but the one highlighted in this solution is that OCS / ODF database workloads (RDBMSs, NoSQL databases, PostgreSQL, MongoDB, etc.) must not use CephFS PVs/PVCs.

This issue is not specific to Ceph. Database workloads should be placed on block storage, as noted in the OpenShift Container Platform documentation as well:

"Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage," not a shared filesystem.

OCP v4.18 - Other specific application storage recommendations

Resolution

WARNING: It is recommended to take a form of backup of the PVC prior to executing this (OADP, snapshot, etc.).

Note: A MariaDB application is used for example purposes. Substitute the mariadb deployment name with the application name that matches your environment. Additionally, practicing this process in a sandbox/test cluster before executing it in production may prove useful.

  1. Gather information about the current CephFS volume mount:
$ oc set volume deployment -n <namespace> mariadb
 mariadb
  pvc/mariadb-storage (allocated 50GiB) as mariadb-storage
    mounted at /var/lib/mysql <------------------------------------- VOLUME MOUNT/DATA LOCATION
  2. Scale down the application associated with the CephFS PVC:
$ oc scale deployment -n <namespace> mariadb --replicas=0
  3. Create a migration deployment that will be used to attach and copy data between the old and new volumes:
$ oc create deployment -n <namespace> migrate --image=registry.access.redhat.com/rhel7/rhel-tools -- tail -f /dev/null
  4. Mount the old volume (the volume to be migrated) to the new migrate deployment:
$ oc set volume deployment/migrate -n <namespace> --add -t pvc --name=mariadb-old --claim-name=mariadb-storage --mount-path=/mariadb_old
  5. After the migrate pod returns to Running from step 4, create a new ceph-rbd PVC and attach it to the migrate deployment:
$ oc set volume deployment/migrate -n <namespace> --add -t pvc --claim-class=ocs-storagecluster-ceph-rbd --name=mariadb-rbd-storage --claim-name=mariadb-rbd-storage --mount-path=/mariadb_new --claim-mode=ReadWriteOnce --claim-size=50Gi
  6. After the migrate pod returns to Running from step 5, rsh into the migrate pod:
$ oc rsh -n <namespace> migrate-<pod-name>
  7. rsync the data from the old volume to the new one (the trailing slash on the source copies the directory contents, including hidden files, which a /* glob would miss):
$ rsync -avxHAX --progress /mariadb_old/ /mariadb_new/
  8. When the data has transferred, exit the pod and scale down the migrate deployment:
$ exit
$ oc scale deployment -n <namespace> migrate --replicas=0
  9. Allow the migrate pod to terminate and release the volumes, then edit the mariadb deployment and update the claimName to the new PVC name:
$ oc edit deployment -n <namespace> mariadb
<omitted-for-space>
      volumes:
      - name: mariadb-storage
        persistentVolumeClaim:
          claimName: mariadb-storage <--- Modify this to the new PVC name e.g. mariadb-rbd-storage
  10. Scale up the mariadb deployment:
$ oc scale deployment -n <namespace> mariadb --replicas=1

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.