Spreading workload on specific storage nodes in OpenShift Container Storage - Developer Preview

Starting with OpenShift Container Storage 4.8, you can partition different I/O workloads onto specific physical storage nodes. For example, you can segregate nodes with different physical properties, such as different vendors or storage device types. This method, referred to as a “recipe”, combines node labels with the deviceClass attribute. Because there is no UI support, you segregate the workloads from the command-line interface by editing the custom resource definitions (CRDs).

Note: Developer preview releases are intended for customers who want to evaluate new products, or new releases of products, at an early stage of development. Spreading workload on specific storage nodes is not supported in production environments with OCS 4.8.

This is a live document to be used in various environments and configurations. If you find any mistakes or missing instructions, or need any assistance, please open an issue on GitHub or write to us at ocs-devpreview@redhat.com.

Partitioning the workload


This procedure shows how to partition a cluster of six storage nodes into two groups of three nodes. In this procedure, the storage nodes are named `worker1` through `worker6`, and the subsets of nodes and their physical storage devices are labeled `set1` and `set2`.
  1. Split physical nodes into two groups using labels.

    a. Label the first subset of nodes as set1.

     $ oc label nodes worker1 cluster.ocs.openshift.io/openshift-storage-device-class=set1
     $ oc label nodes worker2 cluster.ocs.openshift.io/openshift-storage-device-class=set1
     $ oc label nodes worker3 cluster.ocs.openshift.io/openshift-storage-device-class=set1
    

    b. Label the second subset of nodes as set2.

     $ oc label nodes worker4 cluster.ocs.openshift.io/openshift-storage-device-class=set2
     $ oc label nodes worker5 cluster.ocs.openshift.io/openshift-storage-device-class=set2
     $ oc label nodes worker6 cluster.ocs.openshift.io/openshift-storage-device-class=set2
    
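     You can confirm the grouping with a label selector before proceeding, for example:

      $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage-device-class=set1
      $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage-device-class=set2

     Each command should list the three nodes labeled in the corresponding step.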
  2. In the StorageCluster CRD, define a separate storageDeviceSet for each subset of nodes using the new deviceClass attribute, and bind each set to its nodes using the placement.nodeAffinity attribute.

     storageDeviceSets:
     - name: set1
       [...]
       deviceClass: "set1"
       [...]
       placement:
         nodeAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
             nodeSelectorTerms:
             - matchExpressions:
               - key: "cluster.ocs.openshift.io/openshift-storage-device-class"
                 operator: In
                 values:
                 - "set1"
     - name: set2
       [...]
       deviceClass: "set2"
       [...]
       placement:
         nodeAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
             nodeSelectorTerms:
             - matchExpressions:
               - key: "cluster.ocs.openshift.io/openshift-storage-device-class"
                 operator: In
                 values:
                 - "set2"
    
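     If the Rook-Ceph toolbox is deployed, you can verify that the OSDs came up with the expected device classes. For example, assuming the toolbox deployment is named `rook-ceph-tools` (adjust to your cluster):

      $ oc rsh -n openshift-storage deploy/rook-ceph-tools ceph osd tree

     The CLASS column should show `set1` or `set2` for each OSD, matching the node it runs on.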
  3. Define pools over a specific deviceClass, and create a StorageClass that uses these pools.

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: set1-pool
      namespace: openshift-storage
    spec:
      deviceClass: set1
      parameters:
      [...]
    
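A StorageClass can then reference the pool through the RBD CSI provisioner. The following is a sketch only; the secret names shown are the defaults created by the operator and may differ in your cluster:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: set1-storageclass
    provisioner: openshift-storage.rbd.csi.ceph.com
    parameters:
      clusterID: openshift-storage
      pool: set1-pool
      imageFormat: "2"
      imageFeatures: layering
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
      csi.storage.k8s.io/fstype: ext4
    reclaimPolicy: Delete

PVCs created from this StorageClass are then backed only by OSDs on the `set1` nodes; a second StorageClass pointing at `set2-pool` would do the same for the other subset.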