RHOSP 17-beta known issue with RGW
Issue
The Red Hat OpenStack Platform (RHOSP) 17 beta uses Red Hat Ceph Storage (Ceph) 5.2 beta. This build has the following known issue: Bug 2111902.
Director deploys Ceph successfully with the Block Storage service and the File Storage service, but the Object Storage service is not functional even though the deployment succeeds. The workaround is to redeploy the Ceph RGW daemon instances, using the Ceph orchestrator, with an older container image that is not affected by the bug.
Resolution
This workaround configures the RHOSP Object Storage service to use the Ceph RGW functionality of the latest Ceph 5.1 container image (prior to the 5.2 beta).
From the undercloud, fetch the container image with a working RGW.
[stack@undercloud-0 ~]$ source stackrc
(undercloud) [stack@undercloud-0 ~]$ sudo podman pull registry.redhat.io/rhceph/rhceph-5-rhel8:5-235
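Optionally, confirm that the image landed in local container storage before pushing it; this simply filters podman's local image list for the rhceph repository and should show the 5-235 tag pulled above:
(undercloud) [stack@undercloud-0 ~]$ sudo podman images | grep rhceph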
Push the image to the undercloud container image registry.
[stack@undercloud-0 ~]$ source stackrc
(undercloud) [stack@undercloud-0 ~]$ sudo openstack tripleo container image push registry.redhat.io/rhceph/rhceph-5-rhel8:5-235
Verify the image is available in the undercloud container image registry and can be pulled from the overcloud nodes:
[stack@undercloud-0 ~]$ source stackrc
(undercloud) [stack@undercloud-0 ~]$ sudo openstack tripleo container image list | grep rhceph
| docker://undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-5-rhel8:5-235
From the undercloud, identify an overcloud node where the Ceph Manager service is running. By default, this is the first Controller node.
[stack@undercloud-0 ~]$ source stackrc
(undercloud) [stack@undercloud-0 ~]$ metalsmith list
+--------------------------------------+--------------+--------------------------------------+---------------+--------+------------------------+
| UUID | Node Name | Allocation UUID | Hostname | State | IP Addresses |
+--------------------------------------+--------------+--------------------------------------+---------------+--------+------------------------+
| 6af7fced-de3e-4f4e-b90b-8bb23e92b334 | ceph-0 | dfa35962-01e4-406f-aa9a-0713e51bd374 | cephstorage-2 | ACTIVE | ctlplane=192.168.24.45 |
| 0e288d70-a658-400a-a113-6d75aee7c23f | ceph-1 | 89c2d590-7d26-477a-85c3-0991a4cb58c5 | cephstorage-0 | ACTIVE | ctlplane=192.168.24.7 |
| 71751163-c6af-4148-a9ee-f96b12e4d4e5 | ceph-2 | ef29a680-ebc7-4e62-8259-848e996d99f9 | cephstorage-1 | ACTIVE | ctlplane=192.168.24.47 |
| c433e476-e2b2-497a-9db3-c6caca712d81 | compute-0 | 8a490018-5a77-4ea9-b7cf-5d72c1653aef | compute-1 | ACTIVE | ctlplane=192.168.24.25 |
| 43fd215f-315e-47ed-a6b4-e151a757405f | compute-1 | ae02f939-662a-47b0-9b0f-eb511ee6bfd3 | compute-0 | ACTIVE | ctlplane=192.168.24.33 |
| 0721536f-8a17-44ba-93ec-901f90d3ddca | controller-0 | 48d3a9ad-c03b-45b1-82c2-1963c1552163 | controller-0 | ACTIVE | ctlplane=192.168.24.39 |
| e9958029-253a-4725-a985-e2b83a278f48 | controller-1 | 80ff2aee-d765-4803-8c40-d618e752b95e | controller-2 | ACTIVE | ctlplane=192.168.24.52 |
| 929737e6-f055-437e-818e-f706dab399bc | controller-2 | 7a2b463d-bdd6-4acc-913e-28fe4fe7d84a | controller-1 | ACTIVE | ctlplane=192.168.24.41 |
+--------------------------------------+--------------+--------------------------------------+---------------+--------+------------------------+
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin@192.168.24.39
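Optionally, while connected to the node, confirm that it can pull the image from the undercloud registry. This assumes the default undercloud registry hostname shown in the earlier image list output and the same 5-235 tag:
[heat-admin@controller-0 ~]$ sudo podman pull undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-5-rhel8:5-235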
Once you are connected to an overcloud node running the Ceph manager, use cephadm to start a shell.
[heat-admin@controller-0 ~]$ sudo cephadm shell
Inferring fsid 64d627dd-0bde-552a-ba07-ee8e611cdc9f
Using recent ceph image undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph@sha256:06255c43a5ccaec516969637a39d500a0354da26127779b5ee53dbe9c444339c
[ceph: root@controller-0 /]#
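Before changing any daemons, an optional ceph -s from the shell is a quick sanity check that the cluster is reachable and reports its health and running services:
[ceph: root@controller-0 /]# ceph -s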
Check the RGW instances and verify they are running.
[ceph: root@controller-0 /]# ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
crash 6/6 9m ago 11h *
mgr 3/3 9m ago 11h controller-0;controller-1;controller-2
mon 3/3 9m ago 11h controller-0;controller-1;controller-2
osd.default_drive_group 15 6m ago 11h ceph-0;ceph-1;ceph-2
rgw.rgw ?:8080 3/3 9m ago 9m controller-0;controller-1;controller-2
Redeploy the existing RGW instances using the image pushed to the undercloud registry.
[ceph: root@controller-0 /]# ceph orch ps | awk '/rgw/ {print $1}' | xargs -n 1 -I {} ceph orch daemon redeploy {} --image undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-5-rhel8:5-235
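The pipeline above runs one redeploy per RGW daemon. If you prefer to redeploy a single instance at a time, the equivalent per-daemon form is shown below; daemon names are environment-specific and can be taken from the ceph orch ps output that follows:
[ceph: root@controller-0 /]# ceph orch daemon redeploy rgw.rgw.controller-0.tryotw --image undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-5-rhel8:5-235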
Wait until the new RGW instances are running.
[ceph: root@controller-0 /]# ceph orch ps | grep rgw
rgw.rgw.controller-0.tryotw controller-0 172.17.3.123:8080 running (23m) 3m ago 2h 77.1M - 16.2.7-126.el8cp a4eb511aa22b c51072183106
rgw.rgw.controller-1.zoqqsn controller-1 172.17.3.118:8080 running (22m) 7m ago 2h 76.7M - 16.2.7-126.el8cp a4eb511aa22b 637cc443eafc
rgw.rgw.controller-2.frjmvm controller-2 172.17.3.34:8080 running (22m) 7m ago 2h 81.0M - 16.2.7-126.el8cp a4eb511aa22b 9e316357ff89
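As a final optional check, you can query one of the RGW endpoints directly from the controller host. The address and port come from the ceph orch ps output above and are specific to this environment; an anonymous GET should return an S3 ListAllMyBuckets XML response:
[heat-admin@controller-0 ~]$ curl http://172.17.3.123:8080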