Support for RGW Erasure Coding in Internal Mode - Developer preview OpenShift Data Foundation 4.16

This article describes how to deploy an erasure-coded RGW in OpenShift Data Foundation using the CLI.

Procedure

  1. Decide the values for dataChunks (k) and codingChunks (m).
    The supported values are:
    i) k=8, m=3
    ii) k=8, m=4
    iii) k=4, m=2
    The minimum number of worker nodes required is the sum of dataChunks and codingChunks (k+m).
    For more information, see Red Hat Ceph Storage: Supported configurations.
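
The node requirement above can be checked with a quick shell sketch (illustrative only; the profiles are the supported k/m pairs listed above):

```shell
# Minimum worker nodes per supported EC profile: nodes >= k + m,
# because each chunk must land on a distinct host (failureDomain: host).
for profile in "8 3" "8 4" "4 2"; do
  set -- $profile
  k=$1; m=$2
  echo "k=$k m=$m -> minimum worker nodes: $((k + m))"
done
```

For k=4, m=2 this prints a minimum of 6 worker nodes; the k=8 profiles require 11 or 12.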

  2. Create a new CephObjectStore with a dataPool that uses an erasure-coded spec, by using the following object-ec.yaml:

# object-ec.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
 name: ocs-storagecluster-cephobjectstore-ec
 namespace: openshift-storage
spec:
 metadataPool:
   failureDomain: host
   replicated:
     size: 3
     requireSafeReplicaSize: true
 dataPool:
   failureDomain: host
   erasureCoded:
     dataChunks: 4 # update the value of k determined in step 1 
     codingChunks: 2  # update the value of m determined in step 1
 preservePoolsOnDelete: true
 gateway:
   port: 80
   instances: 1

Note: The metadata pool supports only replicated pools. Apply the manifest with oc apply -f object-ec.yaml.

  3. Create the new RGW storageclass by using the following rgw-storageclass.yaml, and apply it with oc apply -f rgw-storageclass.yaml:
rgw-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-retain-bucket
provisioner: openshift-storage.ceph.rook.io/bucket
reclaimPolicy: Retain
parameters:
  objectStoreName: ocs-storagecluster-cephobjectstore-ec
  objectStoreNamespace: openshift-storage
  4. Validate the cluster with the newly created resources.
    i) Get the CephObjectStore.

         ```
         $ oc get cephobjectstore -n openshift-storage
         NAME                                                  PHASE
         ocs-storagecluster-cephobjectstore-ec                 Ready
         ```
    

    ii) Check the erasure-coded RGW pools.

         ```
         $ ceph osd pool ls
         ocs-storagecluster-cephobjectstore-ec.rgw.log
         ocs-storagecluster-cephobjectstore-ec.rgw.buckets.non-ec
         ocs-storagecluster-cephobjectstore-ec.rgw.otp
         ocs-storagecluster-cephobjectstore-ec.rgw.control
         ocs-storagecluster-cephobjectstore-ec.rgw.buckets.index
         ocs-storagecluster-cephobjectstore-ec.rgw.meta
         ocs-storagecluster-cephobjectstore-ec.rgw.buckets.data
         ```
    

    Detailed view of ocs-storagecluster-cephobjectstore-ec.rgw.buckets.data:

         ```
         pool 19 'ocs-storagecluster-cephobjectstore-ec.rgw.buckets.data' erasure profile ocs-storagecluster-cephobjectstore-ec.rgw.buckets.data_ecprofile size 6 min_size 5 crush_rule 36 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 172 lfor 0/0/165 flags hashpspool,ec_overwrites stripe_width 16384 compression_mode none application rook-ceph-rgw
         ```
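
The size, min_size, and stripe_width values in that detail view follow directly from the chosen profile. A small sketch of the arithmetic (assuming Ceph's default 4096-byte stripe unit and the default EC min_size of k+1):

```shell
k=4; m=2
echo "size         = $((k + m))"     # 6: every object is split into k data + m coding chunks
echo "min_size     = $((k + 1))"     # 5: Ceph's default min_size for EC pools is k+1
echo "stripe_width = $((4096 * k))"  # 16384: stripe unit (4096 B) times k data chunks
```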
    

    iii) Check the EC RGW crush_rule.

         ```
         $ ceph osd crush rule dump ocs-storagecluster-cephobjectstore-ec.rgw.buckets.data
         {
                 "rule_id": 36,
                 "rule_name": "ocs-storagecluster-cephobjectstore-ec.rgw.buckets.data",
                 "type": 3,
                 "steps": [
                      {
                         "op": "set_chooseleaf_tries",
                         "num": 5
                    },
                    {
                         "op": "set_choose_tries",
                         "num": 100
                    },
                     {
                         "op": "take",
                         "item": -1,
                         "item_name": "default"
                      },
                     {
                         "op": "chooseleaf_indep",
                         "num": 0,
                         "type": "host"
                     },
                     {
                         "op": "emit"
                     }
                 ]
             }
         ```
    

    iv) Get the new storageclass.

         ```
         $ oc get sc -n openshift-storage
         rook-ceph-retain-bucket       openshift-storage.ceph.rook.io/bucket   Retain          Immediate              false                  101m
         ```
    
  5. Consume the storage.

    • Create the ObjectBucketClaim by using the following rgw-obc.yaml, and apply it with oc apply -f rgw-obc.yaml:
# rgw-obc.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
 name: ceph-retain-bucket
spec:
 generateBucketName: ceph-bkt
 storageClassName: rook-ceph-retain-bucket
$ oc get objectbucketclaim -n openshift-storage
NAME                 STORAGE-CLASS             PHASE   AGE
ceph-retain-bucket   rook-ceph-retain-bucket   Bound   79m

This also creates the ObjectBucket.

$ kubectl get objectbucket -n openshift-storage
NAME                                       STORAGE-CLASS             CLAIM-NAMESPACE   CLAIM-NAME   RECLAIM-POLICY   PHASE   AGE
obc-openshift-storage-ceph-retain-bucket   rook-ceph-retain-bucket                                  Retain           Bound   52m

After this is completed, you can use this bucket with your application pod.

Verifying by adding data


You can use the guidelines provided in [Consume the Object Storage](https://rook.io/docs/rook/latest-release/Storage-Configuration/Object-Storage-RGW/object-storage/#consume-the-object-storage).

In this example, k=4 and m=2 for dataPool configuration.
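
The raw capacity an EC pool consumes is the data size scaled by (k+m)/k. A quick estimate for the 5 GB file used below (a sketch; actual usage also includes metadata overhead):

```shell
k=4; m=2; data_gb=5
# Raw usage = data * (k + m) / k; for k=4, m=2 that is a 1.5x overhead.
awk -v k="$k" -v m="$m" -v d="$data_gb" \
  'BEGIN { printf "expected raw usage: %.1f GB\n", d * (k + m) / k }'
```

This predicts about 7.5 GB of raw usage, versus 15 GB for the same file under 3-way replication.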

  1. Store a 5 GB file in the bucket and notice that only about 8 GiB of raw capacity is needed to store it, compared with 15 GiB under 3-way replication.
    Before storing the 5 GB file:
sh-4.4$ s5cmd --endpoint-url http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore-ec.openshift-storage.svc:80 du s3://ceph-bkt-38dff5f2-a26c-46fc-9141-da5c94cd3d6f
33 bytes in 3 objects: s3://ceph-bkt-38dff5f2-a26c-46fc-9141-da5c94cd3d6f
sh-4.4$ ceph osd df tree
ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME                               
 -1         0.58612         -  600 GiB   10 GiB   10 GiB   0 B  424 MiB  590 GiB  1.75  1.00    -          root default                            
 -8         0.19537         -  200 GiB  3.5 GiB  3.4 GiB   0 B  140 MiB  197 GiB  1.75  1.00    -              rack rack0                          
 -7         0.09769         -  100 GiB  2.1 GiB  2.0 GiB   0 B   97 MiB   98 GiB  2.13  1.22    -                  host ocs-deviceset-1-data-0m2v4c
  1    ssd  0.09769   1.00000  100 GiB  2.1 GiB  2.0 GiB   0 B   97 MiB   98 GiB  2.13  1.22  133      up              osd.1                       
-19         0.09769         -  100 GiB  1.4 GiB  1.3 GiB   0 B   43 MiB   99 GiB  1.36  0.78    -                  host ocs-deviceset-1-data-1m2xbc
  5    ssd  0.09769   1.00000  100 GiB  1.4 GiB  1.3 GiB   0 B   43 MiB   99 GiB  1.36  0.78  145      up              osd.5                       
 -4         0.19537         -  200 GiB  3.5 GiB  3.4 GiB   0 B  146 MiB  196 GiB  1.75  1.00    -              rack rack1                          
-15         0.09769         -  100 GiB  2.1 GiB  2.0 GiB   0 B   54 MiB   98 GiB  2.06  1.18    -                  host ocs-deviceset-0-data-1658wk
  4    ssd  0.09769   1.00000  100 GiB  2.1 GiB  2.0 GiB   0 B   54 MiB   98 GiB  2.06  1.18  142      up              osd.4                       
 -3         0.09769         -  100 GiB  1.4 GiB  1.4 GiB   0 B   93 MiB   99 GiB  1.44  0.82    -                  host ocs-deviceset-2-data-0v6x24
  0    ssd  0.09769   1.00000  100 GiB  1.4 GiB  1.4 GiB   0 B   93 MiB   99 GiB  1.44  0.82  142      up              osd.0                       
-12         0.19537         -  200 GiB  3.5 GiB  3.4 GiB   0 B  137 MiB  197 GiB  1.75  1.00    -              rack rack2                          
-11         0.09769         -  100 GiB  1.3 GiB  1.3 GiB   0 B   78 MiB   99 GiB  1.33  0.76    -                  host ocs-deviceset-0-data-0tfv2m
  2    ssd  0.09769   1.00000  100 GiB  1.3 GiB  1.3 GiB   0 B   78 MiB   99 GiB  1.33  0.76  144      up              osd.2                       
-17         0.09769         -  100 GiB  2.2 GiB  2.1 GiB   0 B   59 MiB   98 GiB  2.17  1.24    -                  host ocs-deviceset-2-data-1cwfjp
  3    ssd  0.09769   1.00000  100 GiB  2.2 GiB  2.1 GiB   0 B   59 MiB   98 GiB  2.17  1.24  137      up              osd.3                       
                        TOTAL  600 GiB   10 GiB   10 GiB   0 B  424 MiB  590 GiB  1.75                                                             
MIN/MAX VAR: 0.76/1.24  STDDEV: 0.37

After storing the 5 GB file:

sh-4.4$ s5cmd --endpoint-url http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore-ec.openshift-storage.svc:80 du s3://ceph-bkt-38dff5f2-a26c-46fc-9141-da5c94cd3d6f
5368709153 bytes in 4 objects: s3://ceph-bkt-38dff5f2-a26c-46fc-9141-da5c94cd3d6f
sh-4.4$ ceph osd df tree
ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME                               
 -1         0.58612         -  600 GiB   18 GiB   18 GiB   0 B  460 MiB  582 GiB  3.03  1.00    -          root default                            
 -8         0.19537         -  200 GiB  6.0 GiB  5.9 GiB   0 B  147 MiB  194 GiB  3.02  1.00    -              rack rack0                          
 -7         0.09769         -  100 GiB  3.4 GiB  3.3 GiB   0 B  101 MiB   97 GiB  3.41  1.13    -                  host ocs-deviceset-1-data-0m2v4c
  1    ssd  0.09769   1.00000  100 GiB  3.4 GiB  3.3 GiB   0 B  101 MiB   97 GiB  3.41  1.13  133      up              osd.1                       
-19         0.09769         -  100 GiB  2.6 GiB  2.6 GiB   0 B   46 MiB   97 GiB  2.63  0.87    -                  host ocs-deviceset-1-data-1m2xbc
  5    ssd  0.09769   1.00000  100 GiB  2.6 GiB  2.6 GiB   0 B   46 MiB   97 GiB  2.63  0.87  145      up              osd.5                       
 -4         0.19537         -  200 GiB  6.1 GiB  5.9 GiB   0 B  154 MiB  194 GiB  3.03  1.00    -              rack rack1                          
-15         0.09769         -  100 GiB  3.3 GiB  3.3 GiB   0 B   58 MiB   97 GiB  3.34  1.10    -                  host ocs-deviceset-0-data-1658wk
  4    ssd  0.09769   1.00000  100 GiB  3.3 GiB  3.3 GiB   0 B   58 MiB   97 GiB  3.34  1.10  142      up              osd.4                       
 -3         0.09769         -  100 GiB  2.7 GiB  2.6 GiB   0 B   96 MiB   97 GiB  2.71  0.90    -                  host ocs-deviceset-2-data-0v6x24
  0    ssd  0.09769   1.00000  100 GiB  2.7 GiB  2.6 GiB   0 B   96 MiB   97 GiB  2.71  0.90  142      up              osd.0                       
-12         0.19537         -  200 GiB  6.1 GiB  5.9 GiB   0 B  160 MiB  194 GiB  3.03  1.00    -              rack rack2                          
-11         0.09769         -  100 GiB  2.6 GiB  2.5 GiB   0 B   97 MiB   97 GiB  2.62  0.87    -                  host ocs-deviceset-0-data-0tfv2m
  2    ssd  0.09769   1.00000  100 GiB  2.6 GiB  2.5 GiB   0 B   97 MiB   97 GiB  2.62  0.87  144      up              osd.2                       
-17         0.09769         -  100 GiB  3.4 GiB  3.4 GiB   0 B   63 MiB   97 GiB  3.44  1.14    -                  host ocs-deviceset-2-data-1cwfjp
  3    ssd  0.09769   1.00000  100 GiB  3.4 GiB  3.4 GiB   0 B   63 MiB   97 GiB  3.44  1.14  137      up              osd.3                       
                        TOTAL  600 GiB   18 GiB   18 GiB   0 B  460 MiB  582 GiB  3.03                                                             
MIN/MAX VAR: 0.87/1.14  STDDEV: 0.37
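
Comparing the two ceph osd df tree totals above confirms the estimate: RAW USE grew from 10 GiB to 18 GiB, close to the theoretical 7.5 GiB plus metadata overhead. As a trivial check:

```shell
before=10; after=18  # total RAW USE (GiB) before and after storing the 5 GB file
echo "raw usage delta: $((after - before)) GiB"  # 8 GiB, vs ~7.5 GiB predicted by 5 * (4+2)/4
```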

Deleting the CephObjectStore

For more information, see Deleting a CephObjectStore in the Rook documentation.

If the object bucket contains data, you need to remove the data first.

  1. Delete the objects in the bucket.
s5cmd --endpoint-url  $url rm $objectbucket/$object
  2. Delete the bucket.
s5cmd --endpoint-url  $url rb $objectbucket
  3. Delete the ObjectBucketClaim.
$ kubectl delete objectbucketclaim ceph-retain-bucket -n openshift-storage
objectbucketclaim.objectbucket.io "ceph-retain-bucket" deleted

This also removes the Kubernetes ObjectBucket.
  4. Remove the CephObjectStore.

$ oc delete cephobjectstore ocs-storagecluster-cephobjectstore-ec -n openshift-storage
cephobjectstore.ceph.rook.io "ocs-storagecluster-cephobjectstore-ec" deleted
  5. Delete the RGW EC storageclass you created, if you intend to remove it.