How to run ceph-bluestore-tool (CBT) in an OCS 4.X / ODF environment.


Steps for OCS versions 4.9 and below differ from those for 4.10. Instructions for both versions are listed below.
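Which set of steps applies can be confirmed by checking the installed version; one way, assuming the default openshift-storage namespace, is to read it off the ClusterServiceVersion:

```shell
# The operator CSV name embeds the installed OCS/ODF version,
# e.g. ocs-operator.v4.9.10 or odf-operator.v4.10.5
oc get csv -n openshift-storage
```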

OCS 4.9 and below

  • Scale down the rook-ceph-operator and ocs-operator deployments

    $ oc scale deployment rook-ceph-operator ocs-operator --replicas=0 -n openshift-storage
    
  • Get and save the deployment YAML of the OSD on which ceph-bluestore-tool needs to run

    $ oc get deployment rook-ceph-osd-0 -n openshift-storage -o yaml > rook-ceph-osd-0-deployment.yaml
      -- Choose the OSD in question; `osd.0` is used as an example here
    
    
  • Patch the OSD deployment to replace the OSD pod command with sleep and to remove the pod's livenessProbe

    $ oc patch deployment/rook-ceph-osd-0 -n openshift-storage -p '{"spec": {"template": {"spec": {"containers": [{"name": "osd", "command": ["sleep", "infinity"], "args": []}]}}}}'
    $ oc patch deployment/rook-ceph-osd-0  -n openshift-storage --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]'
    
  • Confirm that a new OSD pod is up and running:

    $ oc get po | grep osd
    rook-ceph-osd-0-59b78f7d-26ngh                                    2/2     Running     0          66s
    
  • Exec into the OSD pod and execute the CBT command:

    $ oc rsh rook-ceph-osd-0-59b78f7d-26ngh
    

    Create a temporary directory inside the pod, then run the export:

    sh-4.4# mkdir /var/log/ceph/bluefs 
    sh-4.4# ceph-bluestore-tool --out-dir /var/log/ceph/bluefs --path /var/lib/ceph/osd/ceph-0/ bluefs-export
    sh-4.4# du -sh /var/log/ceph/bluefs 
    650M	/var/log/ceph/bluefs
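Because `oc cp` does not verify the integrity of what it copies, it can help to record checksums inside the pod first; a sketch (the checksum file is written into the export directory so it is copied along with the data):

```shell
# Inside the OSD pod: checksum every exported file; exclude the checksum
# file itself so it does not end up listing (and later failing) itself
sh-4.4# cd /var/log/ceph/bluefs
sh-4.4# find . -type f ! -name checksums.md5 -exec md5sum {} + > checksums.md5
```

After copying, `md5sum -c checksums.md5` inside the copied directory on the bastion host should report OK for every file.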
    
  • From another terminal on the bastion host, execute the following commands to copy the generated data:

    $ cd ~
    $ mkdir bluefs
    $ oc cp rook-ceph-osd-0-59b78f7d-26ngh:/var/log/ceph/bluefs bluefs/
    $ du -sh bluefs/
    650M	bluefs/
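The copied tree is typically packed into a single archive before being attached to a support case; a minimal sketch (the archive name is illustrative):

```shell
# Ensure the target directory exists (a no-op after the `oc cp` above)
mkdir -p bluefs
# Pack the copied export into one compressed archive for upload
tar czf bluefs-export.tar.gz bluefs/
# Confirm the archive was created and check its size
ls -lh bluefs-export.tar.gz
```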
    
  • Replace the OSD deployment with the YAML saved in the earlier step

    $ oc replace -f rook-ceph-osd-0-deployment.yaml  --force
    
  • Scale up only the ocs-operator deployment; it will also bring the rook-ceph-operator deployment back up

    $ oc scale deployment ocs-operator --replicas=1 -n openshift-storage
    

OCS 4.10

  • Scale down the rook-ceph-operator and ocs-operator deployments

    $ oc scale deployment rook-ceph-operator ocs-operator --replicas=0 -n openshift-storage
    
  • Set your variables

    $ starttime=$(date +%F_%H-%M-%S)
    $ osdid=0
    
  • Get and save the deployment YAML of the OSD on which ceph-bluestore-tool needs to run

    $ oc get deployment rook-ceph-osd-${osdid} -n openshift-storage -o yaml > ${osdid}.${starttime}.yaml
    
  • Remove the startupProbe and livenessProbe

    $ oc set probe deployment rook-ceph-osd-${osdid} -n openshift-storage --remove --liveness --startup
       output: deployment.apps/rook-ceph-osd-0 probes updated
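That the probe is really gone can be checked against the deployment spec; empty output means it has been removed (same namespace and variables as above):

```shell
# Prints nothing once the livenessProbe has been removed from the pod template
oc get deployment rook-ceph-osd-${osdid} -n openshift-storage \
  -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'
```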
    
  • Wait for new pod to deploy

    $ oc get pods -w | grep osd
    
  • Patch the OSD deployment to replace the OSD pod command with sleep

    $ oc patch deployment rook-ceph-osd-${osdid} -n openshift-storage -p '{"spec": {"template": {"spec": {"containers": [{"name": "osd", "command": ["sleep"], "args": ["infinity"]}]}}}}'
    
  • Wait for new pod to deploy

    $ oc get pods -w | grep osd
    
  • Exec into the OSD pod and execute the CBT command:

    $ oc rsh rook-ceph-osd-0-59b78f7d-26ngh
    

    Create a temporary directory inside the pod, then run the export:

    sh-4.4# mkdir /var/log/ceph/bluefs 
    sh-4.4# ceph-bluestore-tool --out-dir /var/log/ceph/bluefs --path /var/lib/ceph/osd/ceph-0/ bluefs-export
    sh-4.4# du -sh /var/log/ceph/bluefs 
    650M	/var/log/ceph/bluefs
    
  • From another terminal on the bastion host, execute the following commands to copy the generated data:

    $ cd ~
    $ mkdir bluefs
    $ oc cp rook-ceph-osd-0-59b78f7d-26ngh:/var/log/ceph/bluefs bluefs/
    $ du -sh bluefs/
    650M	bluefs/
    
  • Replace the OSD deployment with the YAML saved in the earlier step

    $ oc replace -f ${osdid}.${starttime}.yaml --force
    
  • Scale up only the ocs-operator deployment; it will also bring the rook-ceph-operator deployment back up

    $ oc scale deployment ocs-operator --replicas=1 -n openshift-storage
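The 4.10 steps above can be sketched as a single script. This is an untested outline, not a verified procedure: the `ceph-osd-id` pod label and the `osd` container name are assumptions based on typical Rook deployments (the container name matches the patch command above).

```shell
#!/bin/bash
# Sketch of the OCS 4.10 procedure above; adjust osdid for the target OSD.
set -euo pipefail

ns=openshift-storage
osdid=0
starttime=$(date +%F_%H-%M-%S)

# Stop the operators so they do not revert the changes below
oc scale deployment rook-ceph-operator ocs-operator --replicas=0 -n "$ns"

# Save the current OSD deployment for restoration afterwards
oc get deployment "rook-ceph-osd-${osdid}" -n "$ns" -o yaml > "${osdid}.${starttime}.yaml"

# Remove the probes and replace the OSD process with sleep
oc set probe deployment "rook-ceph-osd-${osdid}" -n "$ns" --remove --liveness --startup
oc patch deployment "rook-ceph-osd-${osdid}" -n "$ns" \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"osd","command":["sleep"],"args":["infinity"]}]}}}}'
oc rollout status "deployment/rook-ceph-osd-${osdid}" -n "$ns"

# Find the patched pod (assumes the ceph-osd-id label set by Rook) and export
pod=$(oc get pods -n "$ns" -l "ceph-osd-id=${osdid}" -o name | head -n 1)
oc exec -n "$ns" "$pod" -c osd -- mkdir -p /var/log/ceph/bluefs
oc exec -n "$ns" "$pod" -c osd -- ceph-bluestore-tool \
  --out-dir /var/log/ceph/bluefs --path "/var/lib/ceph/osd/ceph-${osdid}/" bluefs-export
```

Afterwards, copy the export with oc cp, restore the deployment with oc replace as in the steps above, and scale the ocs-operator back up.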
    