Install Red Hat OpenShift Data Foundation (previously known as OpenShift Container Storage) 4.x in internal mode using the command line interface
Overview
Red Hat OpenShift Data Foundation (ODF) can be deployed in three modes: internal, internal-attached, and external. In the internal modes (internal and internal-attached), Red Hat OpenShift Data Foundation is deployed entirely within the Red Hat OpenShift Container Platform cluster and has all the benefits of operator-based deployment and management. The internal mode deploys Red Hat OpenShift Data Foundation using dynamic device provisioning, whereas the internal-attached mode deploys it using the Local Storage Operator and local storage devices. In external mode, Red Hat OpenShift Data Foundation exposes Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes.
This article documents how to deploy OpenShift Data Foundation in internal mode. For steps to deploy OpenShift Data Foundation in internal-attached mode, refer to Article. For steps to deploy OpenShift Data Foundation in external mode, refer to Article.
Preparing Nodes
You will need to add the OpenShift Data Foundation label to each OpenShift Container Platform node that has storage devices. The OpenShift Data Foundation operator looks for this label to know which nodes can be scheduling targets for OpenShift Data Foundation components. You must have a minimum of three labeled nodes, each with the same number of devices or disks and similar performance capability. Only SSD or NVMe devices can be used for OpenShift Data Foundation.
To label the nodes use the following command:
# oc label node <NodeName> cluster.ocs.openshift.io/openshift-storage=''
CAUTION: If you are installing OpenShift Data Foundation on AWS, make sure to label OpenShift Container Platform nodes in 3 different AWS availability zones. For most other infrastructures (VMware, bare metal, etc.), rack labels are added to create 3 different zones (rack0, rack1, rack2) when the StorageCluster is created.
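Before proceeding, you can confirm that at least three nodes carry the label. A minimal check, assuming the label applied above and the standard topology.kubernetes.io/zone zone label (the zone label name may differ on your platform):

```shell
# List the nodes labeled for OpenShift Data Foundation; at least three should appear
oc get nodes -l cluster.ocs.openshift.io/openshift-storage= \
  -o custom-columns='NAME:.metadata.name,ZONE:.metadata.labels.topology\.kubernetes\.io/zone'
```

On AWS, the ZONE column should show 3 distinct availability zones.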
Installing OpenShift Data Foundation
Use the following instructions to install the generally available (GA) OpenShift Data Foundation version.
Install Operator
- Create the openshift-storage namespace.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-storage
spec: {}
EOF
- Create the openshift-storage-operatorgroup for the Operator.
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
EOF
- Subscribe to the ocs-operator for version 4.8 or lower.
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ocs-operator
  namespace: openshift-storage
spec:
  channel: "stable-4.8" # <-- Change the channel to match the OCS version to be installed; ensure compatibility with the OCP version
  installPlanApproval: Automatic
  name: ocs-operator
  source: redhat-operators # <-- Modify the name of the redhat-operators catalogsource if not default
  sourceNamespace: openshift-marketplace
EOF
- Subscribe to the odf-operator for version 4.9 or above.
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: "stable-4.9" # <-- Change the channel to match the ODF version to be installed; ensure compatibility with the OCP version
  installPlanApproval: Automatic
  name: odf-operator
  source: redhat-operators # <-- Modify the name of the redhat-operators catalogsource if not default
  sourceNamespace: openshift-marketplace
EOF
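After the subscription is created, you can sanity-check the operator installation before moving on. A quick check using standard OLM resources (the exact CSV name depends on the version installed):

```shell
# Confirm the subscription was accepted and an install plan was generated
oc get subscription -n openshift-storage

# The operator's ClusterServiceVersion should eventually reach the Succeeded phase
oc get csv -n openshift-storage
```

If the CSV stays in Installing or Failed, check the install plan and operator pod logs in the openshift-storage namespace.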
Create Storage Cluster
- Create the StorageCluster CR. For each set of 3 OSDs, increment the count. Below is a sample storagecluster.yaml:
---
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: gp2 # <-- This should be changed as per the platform
      volumeMode: Filesystem
  storageDeviceSets:
    - count: 1
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Ti # <-- Supported sizes are 0.5, 2, 4 Ti
          storageClassName: gp2 # <-- This should be changed as per the platform
          volumeMode: Block
      name: ocs-deviceset
      placement: {}
      portable: true
      replica: 3
      resources: {}
ODF v4.12 and higher versions support Single Stack IPv6. If you plan to use IPv6 in your deployment, add the following to the storagecluster.yaml:
spec:
  network:
    ipFamily: "IPv6"
In ODF v4.13 and higher versions, if you want to disable the Multi-Cloud Object Gateway (MCG) external service, add the following to the storagecluster.yaml:
spec:
  multiCloudGateway:
    disableLoadBalancerService: true # <-- Set to true to disable the MCG external service
# oc create -f storagecluster.yaml
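Once the CR is created, you can watch the StorageCluster resource until it reports Ready (the resource name below assumes the ocs-storagecluster sample above):

```shell
# Watch the StorageCluster; the PHASE column should move from Progressing to Ready
oc get storagecluster ocs-storagecluster -n openshift-storage -w
```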
Verifying the Installation
- Verify that all the pods are up and running.
# oc get pods -n openshift-storage
All the pods in the openshift-storage namespace must be in either the Running or Completed state.
Cluster creation can take around 5 minutes to complete. Keep monitoring until the pods reach the expected state, an error appears, or progress remains stuck even after a longer wait.
- List the CSVs to verify that the operators are in the Succeeded phase.

For version 4.8 or lower (ocs-operator):
$ oc get csv -n openshift-storage
NAME                  DISPLAY                       VERSION   REPLACES   PHASE
ocs-operator.v4.6.0   OpenShift Container Storage   4.6.0                Succeeded

For version 4.9 or above (odf-operator):
$ oc get csv -n openshift-storage
NAME                  DISPLAY                       VERSION   REPLACES   PHASE
mcg-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0   OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0   OpenShift Data Foundation     4.9.0                Succeeded
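A successful deployment also creates the default ODF storage classes used in the next section. You can confirm they exist before creating test PVCs (the class-name prefix assumes the default ocs-storagecluster name):

```shell
# List the storage classes created by OpenShift Data Foundation
oc get storageclass | grep ocs-storagecluster
```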
Creating test CephRBD PVC and CephFS PVC
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
- Validate that the new PVCs are created and reach the Bound status.
# oc get pvc | grep rbd-pvc
# oc get pvc | grep cephfs-pvc
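To go one step further, you can mount one of the test PVCs into a pod to confirm that the volume actually provisions and attaches. This is an optional sketch; the pod name and image are illustrative assumptions, not part of the original procedure:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod # <-- Hypothetical name for this check
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal # <-- Any small image works
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: test-vol
  volumes:
    - name: test-vol
      persistentVolumeClaim:
        claimName: rbd-pvc
EOF
```

Once the pod is Running, the rbd-pvc volume is mounted at /data; delete the pod and the test PVCs when finished.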