RBD and CephFS Erasure Coding in Internal Mode - Developer preview OpenShift Data Foundation 4.20

Important: A developer preview feature is subject to developer preview support limitations. Developer preview features are not intended to be run in production environments. Clusters deployed with developer preview features are considered development clusters and are not supported through the Red Hat Customer Portal case management system. Developer preview features are meant for customers who are willing to evaluate new products or releases of products at an early stage of product development. If you need assistance with developer preview features, reach out to the ocs-devpreview@redhat.com mailing list and a member of the Red Hat development team will assist you as quickly as possible based on availability and work schedules. To know more about the support scope, refer to the KCS article on developer preview support scope.

Erasure coding deployment for RBD in OpenShift Data Foundation using command-line interface

Procedure

  1. Decide the values for dataChunks (k) and codingChunks (m).
    The supported values are:
    i) k=8 m=3
    ii) k=8 m=4
    iii) k=4 m=2
    For more information, see Red Hat Ceph Storage: Supported configurations

The minimum number of worker nodes required is the sum of dataChunks and codingChunks (k+m).

  • Erasure coding is most often used with the host failure domain, in which case at least k+m worker nodes are required. If deploying in a zone or rack failure domain, at least k+m zones or racks, respectively, are required instead of k+m worker nodes. In that case, wherever this document refers to failureDomain: host, replace it with failureDomain: zone or failureDomain: rack, respectively.
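As a quick sanity check before choosing a profile, the minimum node count and the raw-to-usable capacity overhead can be computed directly from k and m; a minimal shell sketch for the 4+2 profile used in the examples below:

```shell
# Worked example for the k=4, m=2 profile: an EC pool needs k+m failure
# domains (here, hosts), and consumes (k+m)/k raw capacity per usable unit.
k=4; m=2
echo "minimum worker nodes: $((k + m))"
awk -v k="$k" -v m="$m" 'BEGIN { printf "raw-to-usable overhead: %.2fx\n", (k + m) / k }'
```

For comparison, a replica-3 pool has a 3.00x overhead, which is why erasure coding is attractive for capacity-oriented workloads.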
  2. Create a CephBlockPool CR for the data pool with the following spec:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ec-data-pool
  namespace: openshift-storage
spec:
  failureDomain: host
  deviceClass: ssd
  # Make sure you have enough nodes to support the erasure code chunks.
  # At least k+m nodes with OSDs are required
  erasureCoded:
    dataChunks: 4    # replace with your k value
    codingChunks: 2  # replace with your m value
  3. Create a CephBlockPool CR for the metadata pool with the following spec (note that the metadata pool requires replication rather than EC):
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicated-metadata-pool
  namespace: openshift-storage
spec:
  failureDomain: host
  deviceClass: ssd
  replicated:
    size: 3
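Assuming the two CRs above are saved as ec-data-pool.yaml and replicated-metadata-pool.yaml (hypothetical filenames), they can be applied and checked as follows; the pools should eventually report a Ready phase:

```shell
# Apply both pool CRs and check that Rook reconciles them to Ready.
oc apply -f ec-data-pool.yaml
oc apply -f replicated-metadata-pool.yaml
oc get cephblockpool -n openshift-storage
```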
  4. Create a storage class for provisioning volumes from the new EC pool:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: odf-block-ec
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  
  dataPool: ec-data-pool         # Replace with the name of the data pool
  pool: replicated-metadata-pool # Replace with the name of the metadata pool

  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-publish-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-publish-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
  5. Create a PVC requesting a volume from the new storage class.

    After writing some data to the volume, confirm the data is stored in the new data pool by using the ceph df command in the toolbox.
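As an illustration, a minimal PVC against the odf-block-ec storage class might look like the following (the claim name and size are hypothetical examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ec-block-pvc   # hypothetical example name
  namespace: openshift-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: odf-block-ec
```

After writing data to the bound volume, `ceph df` run from the toolbox pod should show usage growing in ec-data-pool rather than in the replicated metadata pool.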

Enabling erasure coding for the shared filesystem (CephFS)

  1. Decide the values for dataChunks (k) and codingChunks (m).
    The supported values are:
    i) k=8 m=3
    ii) k=8 m=4
    iii) k=4 m=2
    The minimum number of worker nodes required is the sum of dataChunks and codingChunks (k+m).
    For more information, see Red Hat Ceph Storage: Supported configurations

  2. Update the StorageCluster CR to add the data pool to additionalDataPools under cephFilesystems:

$ oc edit storagecluster -n openshift-storage


spec:
  managedResources:
    cephFilesystems:
      additionalDataPools:
        - name: erasurecoded
          failureDomain: host
          deviceClass: ssd
          erasureCoded:
            dataChunks: 4   # replace with your k value
            codingChunks: 2 # replace with your m value
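Once the StorageCluster CR is saved, the operator creates the additional data pool and attaches it to the filesystem. Assuming the Rook toolbox deployment is running in the openshift-storage namespace, this can be confirmed with:

```shell
# List the filesystem's data pools; the new EC pool should appear alongside
# the default replicated data pool.
oc rsh -n openshift-storage deploy/rook-ceph-tools -- ceph fs ls
oc rsh -n openshift-storage deploy/rook-ceph-tools -- ceph osd pool ls detail
```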
  3. Create a storage class for provisioning volumes from the new EC pool in the filesystem:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: odf-cephfs-ec
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
  clusterID: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
  pool: ocs-storagecluster-cephfilesystem-erasurecoded


  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-publish-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-publish-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
  4. Create a PVC requesting a volume from the new storage class.

    After writing some data to the volume, confirm the data is stored in the new data pool by using the ceph df command in the toolbox.
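As an illustration, a minimal PVC against the odf-cephfs-ec storage class might look like the following (the claim name and size are hypothetical examples); CephFS volumes typically use ReadWriteMany access:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ec-cephfs-pvc   # hypothetical example name
  namespace: openshift-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: odf-cephfs-ec
```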
