Capturing Data via the CLI When the OpenShift Data Foundation Must-Gather Fails
Environment
- Red Hat OpenShift Container Platform (RHOCP) 4
- Red Hat OpenShift Data Foundation (RHODF) 4
- Red Hat OpenShift Container Storage (RHOCS) 4
Issue
In some instances, we have seen issues similar to the following:
- The ODF must-gather fails (usually in disconnected environments with an incorrect image).
- The ODF must-gather succeeds, but fails to capture data and defaults to inspect.local.
- The cluster is not healthy enough for a must-gather, and we are attempting to capture whatever we can.
Resolution
- Create the directories for the tarball:
$ mkdir -p odf-logs/pod-logs odf-logs/ceph-logs odf-logs/odf-outputs
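The same layout can also be created with a single brace expansion, if preferred (a minor shell convenience; assumes bash or zsh):
$ mkdir -p odf-logs/{pod-logs,ceph-logs,odf-outputs}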
- Navigate into the pod-logs directory and run the following commands:
$ cd odf-logs/pod-logs
$ for mypod in $(oc -n openshift-storage get pods --no-headers | awk '{ print $1 }'); do oc -n openshift-storage logs $mypod --all-containers > $mypod.log; done
$ oc adm inspect namespace/openshift-storage
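If any pods have restarted, the pre-restart logs often carry the root cause. An optional sketch, run from the same pod-logs directory (--previous returns an error for pods that have never restarted, so failed captures are simply discarded):
$ for mypod in $(oc -n openshift-storage get pods --no-headers | awk '{ print $1 }'); do oc -n openshift-storage logs $mypod --all-containers --previous > $mypod-previous.log 2>/dev/null || rm -f $mypod-previous.log; done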
$ cd ..
- Navigate into the odf-outputs directory and run the following commands:
Copy and paste all oc commands into the terminal:
$ cd odf-outputs/
oc get nodes > nodes.out
oc get co > co.out
oc get clusterversion > clusterversion.out
oc get mcp > mcp.out
oc logs -l k8s-app=machine-config-controller -c machine-config-controller -n openshift-machine-config-operator > mcp-operator-logs.out
oc get noobaa -n openshift-storage -o yaml > noobaa.yaml
oc get backingstore -n openshift-storage > backingstore.out
oc get backingstore -n openshift-storage -o yaml > backingstore.yaml
oc get pvc -n openshift-storage db-noobaa-db-pg-0 -o yaml > noobaa-db-pvc.yaml
oc get storagecluster -n openshift-storage -o yaml > storagecluster.yaml
oc get cephcluster -n openshift-storage -o yaml > cephcluster.yaml
oc get csv -n openshift-storage > csv.out
oc get csv -n openshift-storage -o yaml > csv.yaml
oc get installplan -n openshift-storage > installplan.out
oc get installplan -n openshift-storage -o yaml > installplan.yaml
oc get subs -n openshift-storage > subs.out
oc get subs -n openshift-storage -o yaml > subs.yaml
oc get olm -n openshift-storage > olm.out
oc get pod -o wide -n openshift-storage > pods-wide.out
oc get pod -n openshift-storage -o yaml > pods.yaml
oc get deployment -n openshift-storage > deployment.out
oc get deployment -n openshift-storage -o yaml > deployment.yaml
oc get pvc -n openshift-storage > odf-pvcs.out
oc get pv > pv-all.out
oc get pv -o yaml > pv-all.yaml
oc get pod -n openshift-storage -o 'custom-columns=NAME:.metadata.labels.ceph-osd-id,PVCNAME:.spec.volumes[*].persistentVolumeClaim' | grep -v none > osd-pvcs.out
oc get pv | awk 'NR>1 {print $1}' | while read it; do oc describe pv ${it}; echo " "; done > pv.out
oc get sc > sc.out
oc get sc -o yaml > sc.yaml
oc get nodes | awk 'NR>1 {print $1}' | while read it; do oc describe node ${it}; echo " "; done > nodes-desc.out
oc get jobs -n openshift-storage > jobs.out
oc get events --sort-by='.lastTimestamp' -n openshift-storage > events-sorted.out
oc get obc -A > obc.out
oc get obc -A -o yaml > obc.yaml
oc get ob > ob.out
oc get ob -o yaml > ob.yaml
oc get volumeattachment > VA.out
for pod in $(oc get pods -n openshift-storage -o name -l 'app in (openshift-storage.rbd.csi.ceph.com-nodeplugin,csi-rbdplugin)'); do echo $pod >> rbd-maps.out; oc exec -n openshift-storage $pod -c csi-rbdplugin -- rbd device list >> rbd-maps.out; done
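The list above captures pod YAML but not the event-rich describe output; an optional sketch, run from the same odf-outputs directory:
oc get pods -n openshift-storage -o name | while read p; do oc describe -n openshift-storage $p; echo; done > pods-describe.out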
$ cd ..
- Create the rook-ceph-tools pod as described in Configuring the Rook-Ceph Toolbox in OpenShift Data Foundation 4.x.
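If the linked article is not at hand, the toolbox is typically enabled by patching the OCSInitialization resource; a sketch, assuming the default resource name ocsinit:
$ oc patch ocsinitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
$ oc -n openshift-storage wait --for=condition=Ready pod -l app=rook-ceph-tools --timeout=120s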
- Navigate into the ceph-logs directory and run the following commands:
Copy and paste all oc commands into the terminal:
$ cd ceph-logs/
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph status > ceph-status.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph health detail > health-detail.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph healthcheck history ls > health-history.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph osd df tree > osd-tree.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph osd dump > osd-dump.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph osd pool autoscale-status > autoscale-status.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph df detail > df-detail.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph crash ls > crash-ls.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph osd perf > osd-perf.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph time-sync-status > time-sync.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph mds stat > mds-stat.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph config dump > config-dump.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph versions > version.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph tell mds.ocs-storagecluster-cephfilesystem:0 session ls > active-mds-session-ls.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph pg dump > pg-dump.out
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph report > report.out
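If crash-ls.out shows any entries, the per-crash detail is usually what Support needs next; an optional sketch that iterates over the crash IDs (assumes the ID appears in the first column of the ceph crash ls output):
oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph crash ls | awk 'NR>1 {print $1}' | while read id; do oc exec -n openshift-storage deployment/rook-ceph-tools -- ceph crash info $id; echo; done > crash-info.out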
$ cd ../..
- Navigate into the parent directory above odf-logs and create the tarball:
$ tar czvf odf-logs.tar.gz odf-logs
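Before uploading, a quick optional sanity check that the archive lists cleanly, plus a checksum to note on the case:
$ tar tzf odf-logs.tar.gz | head
$ sha256sum odf-logs.tar.gz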
- Upload odf-logs.tar.gz to the support case