Install Red Hat OpenShift Data Foundation (formerly OpenShift Container Storage) 4.x in External Mode using the command-line interface.

Overview

Red Hat OpenShift Data Foundation (ODF) can be deployed in three modes: internal, internal-attached, and external. In the two internal modes, Red Hat OpenShift Data Foundation runs entirely within the Red Hat OpenShift Container Platform cluster and has all the benefits of operator-based deployment and management. Internal mode deploys Red Hat OpenShift Data Foundation using dynamic device provisioning, whereas internal-attached mode deploys it using the Local Storage Operator and local storage devices. In external mode, Red Hat OpenShift Data Foundation exposes Red Hat Ceph Storage services running outside the OpenShift Container Platform cluster as storage classes.

This article describes how to deploy OpenShift Data Foundation in external mode.

Preparing Nodes

You will need to add the OpenShift Data Foundation label to each OpenShift Container Platform node that should run OpenShift Data Foundation components. The OpenShift Data Foundation operator looks for this label to know which nodes can be scheduling targets for OpenShift Data Foundation components.

  • To label the nodes, use the following command:

    # oc label node <NodeName> cluster.ocs.openshift.io/openshift-storage=''
    

Installing OpenShift Data Foundation

Use the following instructions to install the generally available (GA) version of OpenShift Data Foundation.

Install Operator

  • Create the openshift-storage namespace.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
      name: openshift-storage
    spec: {}
    EOF
    
  • Create the openshift-storage-operatorgroup for OpenShift Data Foundation Operator.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage
    EOF
    
  • Subscribe to the ocs-operator (for version 4.8 or lower):

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ocs-operator
      namespace: openshift-storage
    spec:
      channel: "stable-4.7" # <-- Modify the channel to match the OCS version to be installed; ensure compatibility with the OCP version
      installPlanApproval: Automatic
      name: ocs-operator
      source: redhat-operators  # <-- Modify the name of the redhat-operators catalogsource if not default
      sourceNamespace: openshift-marketplace
    EOF
    
  • Subscribe to the odf-operator (for version 4.9 or above):

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: odf-operator
      namespace: openshift-storage
    spec:
      channel: "stable-4.9" # <-- Modify the channel to match the ODF version to be installed; ensure compatibility with the OCP version
      installPlanApproval: Automatic
      name: odf-operator
      source: redhat-operators  # <-- Modify the name of the redhat-operators catalogsource if not default
      sourceNamespace: openshift-marketplace
    EOF
    

Create Storage Cluster

  • Get the ceph-external-cluster-details-exporter.py script:

    # oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
    
  • Log in to one of the Ceph Monitor daemon nodes and run the script downloaded above to generate the JSON containing the external Ceph cluster details:

    # python ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
    

For more options, refer to the Red Hat documentation section "Creating an OpenShift Data Foundation Cluster service for external mode".
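The Secret created in the next step needs this JSON in base64 encoding. A minimal sketch of producing and sanity-checking the encoded value, assuming the exporter output was saved to output.json (the filename and the placeholder JSON content below are illustrative, not actual exporter output):

```shell
# Placeholder standing in for the real exporter output (illustrative only):
printf '%s' '[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap"}]' > output.json

# Encode for the Secret's external_cluster_details field; -w 0 disables
# line wrapping so the value fits on a single YAML line (GNU coreutils).
base64 -w 0 output.json > output.b64

# Sanity check: the encoded value must decode back to valid JSON.
base64 --decode output.b64 | python3 -m json.tool
```

On a monitor node, point the first step at the real exporter output instead of the placeholder file.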

  • Create the secret rook-ceph-external-cluster-details containing the external cluster JSON output in base64 encoding

    cat << EOF | oc create -f -
    kind: Secret 
    apiVersion: v1
    metadata:
      name: rook-ceph-external-cluster-details
      namespace: openshift-storage
    data:
      external_cluster_details: W3sia2luZCI6ICJDb.....    # Replace with the external cluster JSON output in base64 encoding
    type: Opaque
    EOF
    
  • Create the Storage Cluster CR for the External Mode:

    cat << EOF | oc create -f -
    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      name: ocs-external-storagecluster
      namespace: openshift-storage
    spec:
      externalStorage:
        enable: true
      labelSelector: {}
    EOF
    
  • Enable the Red Hat OpenShift Data Foundation console plugin (ODF 4.9 and later). Note that this patch sets the entire plugins list; if other console plugins are already enabled, add odf-console to the existing list instead:

    $ oc patch console.operator cluster -n openshift-storage --type json -p '[{"op": "add", "path": "/spec/plugins", "value": ["odf-console"]}]'
    

Verifying the Installation

  • Verify that all the pods are up and running:

    # oc get pods -n openshift-storage
    

    All the pods in the openshift-storage namespace must be in either the Running or Completed state.
    Cluster creation can take around 5 minutes to complete. Keep monitoring until the pods reach the expected state; investigate if an error appears or if progress remains stuck for an extended period.
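Instead of inspecting the list by eye, the pod phases can be checked programmatically from the JSON output. A sketch, assuming python3 is available (a canned two-pod document stands in here for live `oc get pods -n openshift-storage -o json` output; note that pods displayed as Completed report the phase Succeeded in the API):

```shell
# On a live cluster, replace the echo with:
#   oc get pods -n openshift-storage -o json | python3 -c <the same script>
echo '{"items": [{"status": {"phase": "Running"}}, {"status": {"phase": "Succeeded"}}]}' |
python3 -c '
import json, sys
# Collect the phase of every pod and require Running or Succeeded for all.
phases = [p["status"]["phase"] for p in json.load(sys.stdin)["items"]]
ok = all(ph in ("Running", "Succeeded") for ph in phases)
print("all pods healthy" if ok else "unhealthy phases: %s" % phases)
sys.exit(0 if ok else 1)
'
```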

  • List the CSVs to verify that the operators are in the Succeeded phase. For version 4.8 or lower, only ocs-operator is listed; for version 4.9 or above, mcg-operator, ocs-operator, and odf-operator are listed:

       $ oc get csv -n openshift-storage
       NAME                  DISPLAY                       VERSION   REPLACES   PHASE
       ocs-operator.v4.7.3   OpenShift Container Storage   4.7.3                Succeeded
    
       $ oc get csv -n openshift-storage
       NAME                  DISPLAY                       VERSION   REPLACES   PHASE
       mcg-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
       ocs-operator.v4.9.0   OpenShift Container Storage   4.9.0                Succeeded
       odf-operator.v4.9.0   OpenShift Data Foundation     4.9.0                Succeeded
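This check can also be scripted: with --no-headers, the PHASE value is the last whitespace-separated field on each line, so a small awk filter can gate automation on it. A sketch (the printf lines stand in for live `oc get csv -n openshift-storage --no-headers` output):

```shell
# Exit 0 only when every ClusterServiceVersion row reports phase Succeeded.
# On a live cluster, pipe `oc get csv -n openshift-storage --no-headers`
# into the same awk filter instead of the canned printf output.
printf '%s\n' \
  'mcg-operator.v4.9.0   NooBaa Operator               4.9.0   Succeeded' \
  'ocs-operator.v4.9.0   OpenShift Container Storage   4.9.0   Succeeded' \
  'odf-operator.v4.9.0   OpenShift Data Foundation     4.9.0   Succeeded' |
awk '$NF != "Succeeded" {bad = 1; print "not ready:", $1} END {exit bad}' &&
echo "all operators Succeeded"
```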
    

[OPTIONAL] Storage Class for OpenShift Virtualization

  • Create the following StorageClass for OpenShift Virtualization workloads:

    cat <<EOF | oc apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ocs-external-storagecluster-ceph-rbd-virtualization
      annotations:
        description: Provides RWO Filesystem volumes, and RWO and RWX Block volumes. Optimized for VM workloads
        reclaimspace.csiaddons.openshift.io/schedule: '@weekly'
        storageclass.kubevirt.io/is-default-virt-class: "true"
    parameters:
      clusterID: openshift-storage
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
      csi.storage.k8s.io/fstype: ext4
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
      imageFormat: "2"
      pool: rbdpool # <-- Replace with the name of the external RBD pool (for example, the value passed as --rbd-data-pool-name above)
      mounter: rbd
      mapOptions: krbd:rxbounce
    provisioner: openshift-storage.rbd.csi.ceph.com
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    EOF
    

Creating test CephRBD PVC and CephFS PVC

  • Example for RBD PVC:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: ocs-external-storagecluster-ceph-rbd
    EOF
    
  • Example for CephFS PVC:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ocs-external-storagecluster-cephfs
    EOF
    
  • Validate that the new PVCs were created and reached the Bound status:

    # oc get pvc | grep rbd-pvc
    # oc get pvc | grep cephfs-pvc
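To exercise a PVC beyond creation, it can be mounted in a throwaway pod and deleted afterwards. A minimal sketch (the pod name and image are illustrative, not from the original article); apply it with `oc apply -f -` via a heredoc, as in the earlier steps:

```yaml
# Throwaway pod that mounts the test RBD PVC at /mnt/test.
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /mnt/test
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rbd-pvc
```

Once the pod reaches the Running state, the volume has been provisioned, attached, and mounted successfully; delete the pod and the test PVCs when done.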
    