After a Ceph upgrade the cluster status reports 'Legacy BlueStore stats reporting detected'
Environment
- Red Hat Ceph Storage 4.0
- Red Hat Ceph Storage 4.2
Issue
After the upgrade from RHCS 3.x to RHCS 4.0, ceph -s shows the following health warning:
# ceph -s
  ...
  health: HEALTH_WARN
          Legacy BlueStore stats reporting detected on X OSD(s)
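To identify which OSDs are affected, ceph health detail expands the warning; the output below is illustrative and its exact wording may vary by release:
# ceph health detail
HEALTH_WARN Legacy BlueStore stats reporting detected on X OSD(s)
LEGACY_STATFS Legacy BlueStore stats reporting detected on X OSD(s)
    osd.X legacy statfs reporting detected, suggest to run store repair to get consistent statistics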
Resolution
Set bluestore_fsck_quick_fix_on_mount to true:
# ceph config set osd bluestore_fsck_quick_fix_on_mount true
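To confirm the option took effect, it can be read back from the monitor configuration database:
# ceph config get osd bluestore_fsck_quick_fix_on_mount
true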
Note: To prevent data movement while OSDs are down, the noout and norebalance flags can be set as follows:
# ceph osd set noout
# ceph osd set norebalance
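Whether the flags are active can be verified against the OSD map; both noout and norebalance should appear in the flags line:
# ceph osd dump | grep flags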
For a non-containerized environment, execute the following command on each individual OSD node of the cluster to restart ceph-osd.target:
# systemctl restart ceph-osd.target
For a containerized environment, restart the individual OSDs one by one (see the sketch below):
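The restart command for containerized OSDs is not specified above; a minimal sketch, assuming a ceph-ansible containerized deployment where each OSD runs under a systemd unit named ceph-osd@<ID> (verify the actual unit name with systemctl list-units 'ceph*' on the node):
# systemctl restart ceph-osd@$OSDID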
Wait for all PGs to be active+clean:
# ceph -s
  ...
  pgs: ZZZZ active+clean
Repeat the above steps for all remaining OSD nodes.
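While waiting between nodes, PG states can be monitored continuously instead of re-running ceph -s by hand; a minimal sketch (the 10-second interval is arbitrary):
# watch -n 10 ceph -s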
Note: When all OSDs are repaired, unset the flags if they were set previously:
# ceph osd unset noout
# ceph osd unset norebalance
Set bluestore_fsck_quick_fix_on_mount back to false once all the OSDs are repaired:
# ceph config set osd bluestore_fsck_quick_fix_on_mount false
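Once the option is reset and all OSDs have been restarted and repaired, the warning should clear (assuming no other health issues are present):
# ceph health
HEALTH_OK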
An additional method that can be used in a non-containerized deployment is:
# systemctl stop ceph-osd.target
# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-$OSDID repair
# systemctl start ceph-osd.target
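To repair every OSD on a node in one pass, the per-OSD repair can be wrapped in a loop; a minimal sketch, assuming the default cluster name ceph and the standard /var/lib/ceph/osd/ceph-<ID> data directories:
# systemctl stop ceph-osd.target
# for OSD in /var/lib/ceph/osd/ceph-*; do ceph-bluestore-tool --path $OSD repair; done
# systemctl start ceph-osd.target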
Root Cause
- This is a known issue, and a Bugzilla has been raised to have it documented. The warning indicates that the listed BlueStore OSDs were deployed before Nautilus and still report usage statistics in the legacy (pre-per-pool) format; the repair/quick-fix updates them to the new format, after which the warning clears.