How to boot OpenStack VMs/instances from a choice of Ceph pools?
Environment
- Red Hat Enterprise Linux OpenStack Platform 6.0 (Juno)
- Red Hat Ceph Storage 1.2.3
Issue
- How to boot OpenStack VMs/instances from a choice of Ceph pools?
- What is the status of multi-backend support for Nova instances?
- Can Juno recognize direct RBD images from Ceph?
- When the hypervisor is running on RHEL 7 (with the RBD device driver built into the kernel), can we present RBD storage directly from Ceph to Nova and bypass the current librbd -> libvirt path? The use case: we want VMs to be able to boot from a choice of Ceph pools. For example, latency-sensitive applications that manage their own data resiliency would benefit from using a Ceph pool with fewer than 3 replicas, while any application that does not manage its own data resiliency would benefit from its VM booting from a Ceph pool with a replica count of 3.
Resolution
- librbd is better suited to low-latency applications than the kernel driver, since it can use the RBD cache, which is only available in user space.
- Currently Nova does not support multiple back ends the way Cinder does; an RFE is open for this: Multi-Backend support for nova instances.
We need to use the Cinder multi-backend feature to accomplish this:
- See the ceph.com documentation: rbd-openstack
- See the docs.openstack.org documentation: configure multiple-storage back ends
- Create the Ceph pools and set caps on them for the cinder user. Run the commands below on any Ceph node.
Pool names: regular-replica3 and ssd-replica2
$ sudo ceph osd pool create regular-replica3 128 128
pool 'regular-replica3' created
$ sudo ceph osd pool create ssd-replica2 128 128
pool 'ssd-replica2' created
$ sudo rados -p regular-replica3 df
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
regular-replica3 - 0 0 0 0 0 0 0 0 0
total used 32689476 270
total avail 155962044
total space 188651520
$ sudo rados -p ssd-replica2 df
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
ssd-replica2 - 0 0 0 0 0 0 0 0 0
total used 32689476 270
total avail 155962044
total space 188651520
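The new pools are created with the cluster's default replica count. To match the intent of the pool names (3 replicas for regular-replica3, 2 for ssd-replica2), the size can be set explicitly. The commands below are a sketch of that step, run on any Ceph node:

```shell
# Set the replication factor per pool.
# regular-replica3 keeps 3 copies of each object:
sudo ceph osd pool set regular-replica3 size 3
# ssd-replica2 keeps only 2 copies, trading resiliency for latency/capacity:
sudo ceph osd pool set ssd-replica2 size 2

# Verify the settings:
sudo ceph osd pool get regular-replica3 size
sudo ceph osd pool get ssd-replica2 size
```

With size 2, also consider the pool's min_size so writes behave sensibly when one replica is down.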
- Update the cephx caps of client.cinder to include both new pools:
$ sudo ceph auth get client.cinder
exported keyring for client.cinder
[client.cinder]
key = AQDwYglVaKIqIBAAnh6BmYyhSQs5+3UwgcbbuQ==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images, allow rwx pool=volumes1"
$ sudo ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images, allow rwx pool=volumes1, allow rwx pool=regular-replica3, allow rwx pool=ssd-replica2'
updated caps for client.cinder
$ sudo ceph auth get client.cinder
exported keyring for client.cinder
[client.cinder]
key = AQDwYglVaKIqIBAAnh6BmYyhSQs5+3UwgcbbuQ==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images, allow rwx pool=volumes1, allow rwx pool=regular-replica3, allow rwx pool=ssd-replica2"
- Configure Cinder with the multi-backend Ceph pools:
[root@test ~(keystone_admin)]# cinder type-create Regular-Volumes
+--------------------------------------+-----------------+
| ID | Name |
+--------------------------------------+-----------------+
| 820d8bd6-0210-4922-a2be-fb18926fb296 | Regular-Volumes |
+--------------------------------------+-----------------+
[root@test ~(keystone_admin)]#
[root@test ~(keystone_admin)]# cinder type-create SSD-Volumes
+--------------------------------------+-------------+
| ID | Name |
+--------------------------------------+-------------+
| d18f2862-3eef-4912-bf16-68781c4b5a20 | SSD-Volumes |
+--------------------------------------+-------------+
- Edit /etc/cinder/cinder.conf. In the [DEFAULT] section, enabled_backends must list the configuration group names (not the volume_backend_name values):
enabled_backends=Regular-Volumes,SSD-Volumes
[Regular-Volumes]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = regular-replica3
volume_backend_name=REGULAR-VOLUMES
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 804b60e7-b397-49d3-8a39-abf03f6bdec4
[SSD-Volumes]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ssd-replica2
volume_backend_name=SSD-VOLUMES
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 804b60e7-b397-49d3-8a39-abf03f6bdec4
[root@test ~(keystone_admin)]# cinder type-key Regular-Volumes set volume_backend_name=REGULAR-VOLUMES
[root@test ~(keystone_admin)]# cinder type-key SSD-Volumes set volume_backend_name=SSD-VOLUMES
[root@test ~(keystone_admin)]# cinder extra-specs-list
+--------------------------------------+-----------------+----------------------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+-----------------+----------------------------------------------+
| 820d8bd6-0210-4922-a2be-fb18926fb296 | Regular-Volumes | {u'volume_backend_name': u'REGULAR-VOLUMES'} |
| d18f2862-3eef-4912-bf16-68781c4b5a20 | SSD-Volumes | {u'volume_backend_name': u'SSD-VOLUMES'} |
+--------------------------------------+-----------------+----------------------------------------------+
[root@test ~(keystone_admin)]#
[root@test ~(keystone_admin)]# service openstack-cinder-volume restart
Redirecting to /bin/systemctl restart openstack-cinder-volume.service
[root@test ~(keystone_admin)]#
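After the restart, each enabled backend should appear as its own cinder-volume service. The host names in the comments below are illustrative for this example host (test):

```shell
# Each backend is listed as host@config-section in the service list:
cinder service-list
# Illustrative rows for this setup:
#   cinder-volume | test@Regular-Volumes | nova | enabled | up
#   cinder-volume | test@SSD-Volumes     | nova | enabled | up
```

If a backend shows as down, check /var/log/cinder/volume.log for RBD connection or cephx errors.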
- Now go to Dashboard -> Volumes and create a bootable volume (this can also be done from the command line):
Select Volume Source = Image
Use image as a source = Image name
Type (Volume Type) = Regular-Volumes
and then click Create.
For Example :
testregular 1GB Available Regular-Volumes nova Yes No
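The same bootable volume can be created from the command line. This is a sketch using the Juno-era cinder client; the image name (cirros) is an assumption, so substitute one from your own `glance image-list`:

```shell
# Create a 1 GB bootable volume from a Glance image, placed on the
# Regular-Volumes (replica-3) backend via its volume type.
# <image-uuid> comes from `glance image-list`; "cirros" is a placeholder name.
cinder create --image-id <image-uuid> \
    --volume-type Regular-Volumes \
    --display-name testregular 1
```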
- From a Ceph node, check the pool usage before and after the volume is created:
[ceph@test ~]$ sudo rados -p regular-replica3 df
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
regular-replica3 - 0 0 0 0 0 0 0 0 0
total used 32684504 270
total avail 155967016
total space 188651520
[ceph@test ~]$ sudo rados -p regular-replica3 df
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
regular-replica3 - 32769 11 0 0 0 71 54 35 32770
total used 32791136 281
total avail 155860384
total space 188651520
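You can also confirm that the volume's RBD image landed in the expected pool. Cinder names RBD images volume-&lt;cinder-volume-id&gt;; the ID shown in the comment is the one from the example above:

```shell
# List RBD images in the backing pool:
sudo rbd -p regular-replica3 ls
# For the example volume this should include:
#   volume-7e4238f9-b0bd-4b03-840f-d284b46d20e2
```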
[root@test ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-----------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-----------------+----------+--------------------------------------+
| 7e4238f9-b0bd-4b03-840f-d284b46d20e2 | available | testregular | 1 | Regular-Volumes | true | |
+--------------------------------------+-----------+--------------+------+-----------------+----------+--------------------------------------+
- Now create an instance with this bootable volume, testregular:
Select Launch Instance
Instance Name = give the instance a name
Select Instance Boot Source = Boot from volume
Select Volume = testregular
and then click Launch.
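Booting from the volume can likewise be done from the command line. This is a sketch using the Juno-era nova client; the flavor, network UUID, and instance name are assumptions to adjust for your environment:

```shell
# Boot an instance directly from the existing bootable volume.
# --flavor and net-id are placeholders; the volume ID is from the example.
nova boot --flavor m1.small \
    --boot-volume 7e4238f9-b0bd-4b03-840f-d284b46d20e2 \
    --nic net-id=<network-uuid> \
    testinstance
```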
[root@test ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-----------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-----------------+----------+--------------------------------------+
| 7e4238f9-b0bd-4b03-840f-d284b46d20e2 |  in-use   | testregular  | 1    | Regular-Volumes | true     | 2a9574be-c5cd-4f96-93ae-ba2f1ba28bfd |
+--------------------------------------+-----------+--------------+------+-----------------+----------+--------------------------------------+
- The same steps can be followed for the volume type SSD-Volumes.
Root Cause
- Currently Nova does not support multiple back ends the way Cinder does.
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.