Install Red Hat OpenShift Data Foundation (previously known as OpenShift Container Storage) 4.x in internal-attached mode using the command-line interface.

Overview

Red Hat OpenShift Data Foundation can be deployed in three modes: internal, internal-attached, and external. In the internal modes (internal and internal-attached), Red Hat OpenShift Data Foundation is deployed entirely within the Red Hat OpenShift Container Platform cluster and has all the benefits of operator-based deployment and management. Internal mode deploys Red Hat OpenShift Data Foundation using dynamic device provisioning, whereas internal-attached mode deploys it using the Local Storage Operator and local storage devices. In external mode, Red Hat OpenShift Data Foundation exposes Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes.

This article documents how to deploy OpenShift Data Foundation in internal-attached mode. If you are looking for steps to deploy OpenShift Data Foundation in internal mode or in external mode, refer to the corresponding article.

OpenShift Container Platform has been verified to work in conjunction with local storage devices and OpenShift Data Foundation on AWS EC2, VMware, Azure, and bare-metal hosts.

These instructions require that you have the ability to install both OpenShift Data Foundation and the Local Storage Operator.
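Before installing anything, it can help to confirm that you are logged in with sufficient privileges and to note the cluster version, since the operator channels chosen in the steps below must match it. A minimal sketch, assuming the oc client is already logged in:

```shell
# Confirm the logged-in user and the cluster version; the Local Storage
# Operator and ODF channels used below must correspond to this OCP version
oc whoami
oc version

# Verify you can create namespaces (needed for the operator installs)
oc auth can-i create namespaces
```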

Step 1: Installing the Local Storage Operator

  • Create the openshift-local-storage namespace.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-local-storage
    spec: {}
    EOF
    
  • Create the OperatorGroup for the Local Storage Operator.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: local-operator-group
      namespace: openshift-local-storage
    spec:
      targetNamespaces:
      - openshift-local-storage
    EOF
    
  • Subscribe to the local-storage-operator.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: local-storage-operator
      namespace: openshift-local-storage
    spec:
      channel: "4.9"  # <-- Channel should be used corresponding to the OCP version being used.
      installPlanApproval: Automatic
      name: local-storage-operator
      source: redhat-operators  # <-- Modify the name of the redhat-operators catalogsource if not default
      sourceNamespace: openshift-marketplace
    EOF
    

    Important: Use the channel that matches the OpenShift Container Platform version. For example, with OpenShift Container Platform v4.9, the Local Storage Operator should be installed from channel 4.9.
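Once the subscription is applied, an optional sanity check confirms that the Local Storage Operator installed cleanly before moving on:

```shell
# The CSV should eventually report the Succeeded phase
oc get csv -n openshift-local-storage

# The operator pod should be Running
oc get pods -n openshift-local-storage
```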

Step 2: Preparing Nodes

You will need to add the OpenShift Data Foundation label to each OpenShift Container Platform node that has storage devices. The OpenShift Data Foundation operator looks for this label to determine which nodes can be scheduling targets for OpenShift Data Foundation components. Later, Local Storage Operator custom resources will be configured to create PVs from the storage devices on nodes with this label. You must have a minimum of three labeled nodes, each with the same number of devices or disks and similar performance capability. Only SSDs or NVMe devices can be used for OpenShift Data Foundation.

  • To label the nodes use the following command:

       # oc label node <NodeName> cluster.ocs.openshift.io/openshift-storage=''
    

    CAUTION: If installing OpenShift Data Foundation on AWS, make sure to label OpenShift Container Platform nodes in 3 different AWS availability zones. For most other infrastructures (VMware, bare-metal, etc.), rack labels are added to create 3 different zones (rack0, rack1, rack2) when the StorageCluster is created.
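When several nodes need the label, it can be applied in a loop and then verified. A minimal sketch; the node names below are placeholders for your own:

```shell
# Label each storage node (replace the node names with your own)
for node in worker-0 worker-1 worker-2; do
  oc label node "${node}" cluster.ocs.openshift.io/openshift-storage='' --overwrite
done

# At least three nodes should carry the label
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
```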

Only for OpenShift Container Storage v4.4 or v4.5

Manual Method to create Persistent Volumes

  • You need to know the device names on the nodes labeled for OpenShift Data Foundation. You can access the nodes using oc debug node and issuing the lsblk command after chroot.

          $ oc debug node/<node_name>
    
          # chroot /host
          # lsblk
          NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
          nvme0n1                      259:0    0   120G  0 disk
          |-nvme0n1p1                  259:1    0   384M  0 part /boot
          |-nvme0n1p2                  259:2    0   127M  0 part /boot/efi
          |-nvme0n1p3                  259:3    0     1M  0 part
          `-nvme0n1p4                  259:4    0 119.5G  0 part
            `-coreos-luks-root-nocrypt 253:0    0 119.5G  0 dm   /sysroot
          nvme1n1                      259:5    0  1000G  0 disk
          nvme2n1                      259:6    0  1000G  0 disk
    
  • Once you know which local devices are available (in this example, the two 1000 GiB disks nvme1n1 and nvme2n1), you can find the by-id for each device, a unique name derived from the hardware serial number.

       # ls -l /dev/disk/by-id/
       total 0
       lrwxrwxrwx. 1 root root 10 Mar 17 16:24 dm-name-coreos-luks-root-nocrypt -> ../../dm-0
       lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC -> ../../nvme0n1
       lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-Amazon_EC2_NVMe_Instance_Storage_AWS60382E5D7441494EC -> ../../nvme1n1
       lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-nvme.1d0f-4157533130333832453544373434313439344543-416d617a6f6e20454332204e564d6520496e7374616e63652053746f72616765-00000001 -> ../../nvme0n1
       lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-nvme.1d0f-4157533630333832453544373434313439344543-416d617a6f6e20454332204e564d6520496e7374616e63652053746f72616765-00000001 -> ../../nvme1n1
    
  • A separate article provides a utility for gathering /dev/disk/by-id for all OpenShift Container Platform nodes with the OpenShift Data Foundation label (cluster.ocs.openshift.io/openshift-storage).
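As a rough sketch of what such a utility does, the following loop prints the by-id symlinks on every node carrying the OpenShift Data Foundation label (assumptions: oc debug is permitted on the nodes and every labeled node is reachable):

```shell
# List /dev/disk/by-id on each labeled node
for node in $(oc get nodes -l cluster.ocs.openshift.io/openshift-storage= \
    -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== ${node} ==="
  oc debug node/"${node}" -- chroot /host ls -l /dev/disk/by-id/
done
```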

  • Create the LocalVolume resource using the by-id for each OpenShift Container Platform node with the OpenShift Data Foundation label. In this case there is one device per node and the by-id is added manually under devicePaths: in the localvolume.yaml file.

       apiVersion: local.storage.openshift.io/v1
       kind: LocalVolume
       metadata:
         name: local-block
         namespace: openshift-local-storage
       spec:
         nodeSelector:
           nodeSelectorTerms:
           - matchExpressions:
               - key: cluster.ocs.openshift.io/openshift-storage
                 operator: In
                 values:
                 - ""
         storageClassDevices:
           - storageClassName: localblock
             volumeMode: Block
             devicePaths:
               - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC   # <-- modify this line
               - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS1F45C01D7E84FE3E9   # <-- modify this line
               - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS136BC945B4ECB9AE4   # <-- modify this line
    
   # oc create -f localvolume.yaml
    
  • After the LocalVolume resource is created, ensure that Available PVs are created for each device listed with a by-id in the localvolume.yaml file. It can take a few minutes for all disks to appear as PVs while the Local Storage Operator prepares them.
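To confirm, list the PVs and look for STATUS Available with the localblock storage class:

```shell
# One Available PV should exist per device listed in localvolume.yaml
oc get pv | grep localblock
```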

  • Jump to Step 3: Installing OpenShift Data Foundation.

Only for OpenShift Data Foundation and OpenShift Container Storage v4.6 and above

Auto Discovering Devices and creating Persistent Volumes

This is the method available starting with OpenShift Container Storage v4.6 and Local Storage Operator v4.6.

  • Local Storage Operator v4.6 and above supports discovery of devices on OpenShift Container Platform nodes with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage="". Create the LocalVolumeDiscovery resource shown below after the OpenShift Container Platform nodes are labeled.

    cat <<EOF | oc apply -f -
    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeDiscovery
    metadata:
      name: auto-discover-devices
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
            - key: cluster.ocs.openshift.io/openshift-storage
              operator: In
              values:
                - ""
    EOF
    
  • After this resource is created, a localvolumediscovery resource appears, along with one localvolumediscoveryresults resource for each OpenShift Container Platform node labeled with the OpenShift Data Foundation label. Each localvolumediscoveryresults contains details for every disk on the node, including the by-id, size, and type.

  • You can check the localvolumediscovery resource and the localvolumediscoveryresults by running the commands below:

       $ oc get localvolumediscoveries -n openshift-local-storage
       
       Output
    
       NAME                    AGE
       auto-discover-devices   5m15s
    
       $ oc get localvolumediscoveryresults -n openshift-local-storage
       
       Output
    
       NAME                           AGE
       discovery-result-compute-0     19m
       discovery-result-compute-1     19m
       discovery-result-compute-2     19m
    
Create LocalVolumeSet

The disks used must be SSDs or NVMe devices and must be raw block devices, because the operator creates distinct partitions on the provided raw block devices for the OSD metadata and OSD data.

  • Use the localvolumeset.yaml file to create the LocalVolumeSet. Adjust the commented parameters to match your environment; parameters that are not required can be deleted.

    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeSet
    metadata:
      name: local-block
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: In
                values:
                  - ""
      storageClassName: localblock
      volumeMode: Block
      fstype: ext4
      maxDeviceCount: 1  # <-- Maximum number of devices per node to be used
      deviceInclusionSpec:
        deviceTypes:
        - disk
        - part   # <-- Remove this if not using partitions
        deviceMechanicalProperties:
        - NonRotational
        #minSize: 0Ti   # <-- Uncomment and modify to limit the minimum size of disk used
        #maxSize: 0Ti   # <-- Uncomment and modify to limit the maximum size of disk used
    
       # oc create -f localvolumeset.yaml
    
  • After the LocalVolumeSet resource is created, check that Available PVs are created for each disk on OpenShift Container Platform nodes with the OpenShift Data Foundation label. It can take a few minutes for all disks to appear as PVs while the Local Storage Operator prepares them.

    Check for diskmaker-manager pods

       # oc get pods -n openshift-local-storage | grep "diskmaker-manager"
    
       Output
    
       diskmaker-manager-8l2bq                   2/2     Running   0          3m42s
       diskmaker-manager-bsklr                   2/2     Running   0          3m42s
       diskmaker-manager-fzbnx                   2/2     Running   0          3m42s
    

     Check for PVs created

       $ oc get pv -n openshift-local-storage
    
       Output
    
       NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
       local-pv-1f003b14   2328Gi     RWO            Delete           Available           localblock              11s
       local-pv-4d7de45    2328Gi     RWO            Delete           Available           localblock              11s
       local-pv-77dbe0a6   2328Gi     RWO            Delete           Available           localblock              10s
    

Step 3: Installing OpenShift Data Foundation

Install Operator

  • Create the openshift-storage namespace.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
      name: openshift-storage
    spec: {}
    EOF
    
  • Create the openshift-storage-operatorgroup for Operator.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage
    EOF
    
  • Subscribe to the ocs-operator (for version 4.8 or below).

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ocs-operator
      namespace: openshift-storage
    spec:
      channel: "stable-4.8"  # <-- Channel should be modified depending on the OCS version to be installed. Please ensure to maintain compatibility with OCP version
      installPlanApproval: Automatic
      name: ocs-operator
      source: redhat-operators  # <-- Modify the name of the redhat-operators catalogsource if not default
      sourceNamespace: openshift-marketplace
    EOF
    
  • Subscribe to the odf-operator (for version 4.9 or above).

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: odf-operator
      namespace: openshift-storage
    spec:
      channel: "stable-4.9" # <-- Channel should be modified depending on the OCS version to be installed. Please ensure to maintain compatibility with OCP version
      installPlanApproval: Automatic
      name: odf-operator
      source: redhat-operators  # <-- Modify the name of the redhat-operators catalogsource if not default
      sourceNamespace: openshift-marketplace
    EOF
    

Create the Cluster (this is an example of an ODF bare-metal deployment based on LSO with flexible scaling enabled)

  • Below is a sample storagecluster.yaml:

    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      manageNodes: false
      flexibleScaling: true
      resources:
        mds:
          limits:
            cpu: "3"
            memory: "8Gi"
          requests:
            cpu: "3"
            memory: "8Gi"
      monDataDirHostPath: /var/lib/rook
      storageDeviceSets:
      - count: 3  # <-- Modify count to desired value. Count here means the no. of osd across the cluster.
        dataPVCTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: "100Gi"  # <-- This should be changed as per storage size. Minimum 100 GiB and Maximum 4 TiB
            storageClassName: localblock
            volumeMode: Block
        name: ocs-deviceset
        placement: {}
        portable: false
        replica: 1      # <-- Here the count will always be 1 as flexible scaling is enabled.
        resources:
          limits:
            cpu: "2"
            memory: "5Gi"
          requests:
            cpu: "2"
            memory: "5Gi"
    

    ODF v4.12 and later support single-stack IPv6. If you plan to use IPv6 in your deployment, add the following to storagecluster.yaml:

      spec:
        network:
          ipFamily: "IPv6"
    
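After creating the StorageCluster with oc create -f storagecluster.yaml, its progress can be watched until the phase reaches Ready. A short sketch, assuming the default resource names:

```shell
# Watch the StorageCluster phase; it moves through Progressing to Ready
oc get storagecluster -n openshift-storage -w

# The underlying CephCluster should eventually report HEALTH_OK
oc get cephcluster -n openshift-storage
```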

Optional:
Starting with ODF 4.20, you can add the following spec to the StorageCluster CR to ensure a clean redeployment. This configuration wipes any existing Ceph BlueStore metadata from OSD disks before they are reused, preventing conflicts from previous deployments.

    spec:
      managedResources:
        cephCluster:
          cleanupPolicy:
            wipeDevicesFromOtherClusters: true

  • Create the StorageCluster:

       # oc create -f storagecluster.yaml

Step 4: Verifying the Installation

  • Verify that all the pods are up and running.

       # oc get pods -n openshift-storage
    

    All the pods in the openshift-storage namespace must be in either the Running or Completed state.
    Cluster creation can take around 5 minutes. Keep monitoring until the pods reach the expected state, an error appears, or progress remains stuck even after a longer wait.

  • List the CSVs to verify that the operators are in the Succeeded phase.

    $ oc get csv -n openshift-storage
    NAME                  DISPLAY                       VERSION   REPLACES   PHASE
    ocs-operator.v4.8.0   OpenShift Container Storage   4.8.0                Succeeded
    
    $ oc get csv -n openshift-storage
    NAME                  DISPLAY                       VERSION   REPLACES   PHASE
    mcg-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
    ocs-operator.v4.9.0   OpenShift Container Storage   4.9.0                Succeeded
    odf-operator.v4.9.0   OpenShift Data Foundation     4.9.0                Succeeded
    

Step 5: Creating test CephRBD and CephFS PVCs

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
  • Validate that the new PVCs are created.

       # oc get pvc | grep rbd-pvc
       # oc get pvc | grep cephfs-pvc
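As an optional smoke test, the RBD PVC can be mounted in a throwaway pod. This pod is a hypothetical example, not part of the original procedure, and the image can be swapped for any image available in your cluster:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
  - name: writer
    image: registry.access.redhat.com/ubi8/ubi-minimal  # <-- any available image works
    command: ["sh", "-c", "echo ok > /mnt/test/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rbd-pvc
EOF

# The PVC binds once the pod is scheduled
oc get pvc rbd-pvc
oc delete pod rbd-test-pod   # clean up afterwards
```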