Configuring the Rook-Ceph Toolbox in OpenShift Data Foundation 4.x


Important: As noted in our Troubleshooting guide, Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless directed by Red Hat support or Red Hat documentation), because running the wrong commands can cause data loss. In that case, the Red Hat support team can only provide commercially reasonable effort and may not be able to restore all of the data.

The rook-ceph toolbox is a pod with common tools used for debugging, testing, and troubleshooting a Ceph cluster.

ODF v4.15 and above

To enable the toolbox pod, patch or edit the StorageCluster CR as follows:

oc patch storageclusters.ocs.openshift.io ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
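To confirm that the setting took effect, you can read the field back from the CR (a quick check, assuming the default `ocs-storagecluster` name):

```shell
# Print the current value of spec.enableCephTools; expect "true" after the patch
oc get storageclusters.ocs.openshift.io ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.enableCephTools}{"\n"}'
```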

ODF v4.14 and below

Option 1: Using the patch command:

oc patch OCSInitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'

Option 2: Editing the OCSInitialization ocsinit directly

oc edit OCSInitialization ocsinit

Update the spec section to include the following, making sure the indentation is correct:

spec:
  enableCephTools: true

Note: Toggling the value from true to false will terminate any running toolbox pod immediately.
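For example, to disable the toolbox again on ODF v4.14 and below, the same patch command can be reused with the value set to false:

```shell
# Setting the flag back to false terminates the toolbox pod
oc patch OCSInitialization ocsinit -n openshift-storage --type json \
  --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": false }]'
```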

Verify that the toolbox pod is up and running:

oc -n openshift-storage get pod -l "app=rook-ceph-tools"
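In scripts, it can be convenient to block until the pod is ready rather than polling `oc get`. A sketch using `oc wait` (adjust the timeout as needed):

```shell
# Wait up to two minutes for the toolbox pod to report Ready
oc -n openshift-storage wait --for=condition=Ready pod \
  -l app=rook-ceph-tools --timeout=120s
```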

Once the rook-ceph toolbox pod is running, you can rsh to it directly by running:

oc -n openshift-storage rsh $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
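If you only need a single command rather than an interactive shell, `oc exec` against the toolbox deployment is an alternative (assuming the usual `rook-ceph-tools` deployment name):

```shell
# Run a one-off Ceph command without opening an interactive session
oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph status
```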

You can now run Ceph commands, for example:

     sh-4.4$ ceph -s
       cluster:
         id:     e4f12xxx-Redacted-Cluster-ID-yyy8fdcdfe6f
         health: HEALTH_OK
 
       services:
         mon: 3 daemons, quorum a,b,c (age 69m)
         mgr: a(active, since 68m)
         mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-b=up:active} 1 up:standby-replay
         osd: 3 osds: 3 up (since 67m), 3 in (since 67m)
 
       data:
         pools:   3 pools, 24 pgs
         objects: 96 objects, 99 MiB
         usage:   3.1 GiB used, 3.0 TiB / 3.0 TiB avail
         pgs:     24 active+clean
 
       io:
         client:   853 B/s rd, 4.3 KiB/s wr, 1 op/s rd, 0 op/s wr
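Other read-only Ceph commands that are commonly used for inspection from the toolbox include, for example:

```shell
ceph health detail   # expand on any non-OK health status
ceph osd tree        # OSD placement and up/down state
ceph df              # cluster and per-pool capacity usage
```

Keep in mind the support statement above: stick to read-only inspection commands unless directed otherwise by Red Hat support.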