How to configure OpenStack Glance, Nova, and Cinder storage to use Ceph RBD as the backend?
Environment
- Red Hat Ceph Storage 1.2.3
- Red Hat Ceph Storage 1.3
- Red Hat Ceph Storage 1.3.1
- Red Hat Ceph Storage 1.3.2
- Red Hat Enterprise Linux OpenStack Platform 6 (Juno)
- Red Hat Enterprise Linux OpenStack Platform 7 (Kilo)
Issue
- How to configure OpenStack Glance, Nova, and Cinder storage to use Ceph RBD as the backend?
- Configure OpenStack nodes as Ceph Clients.
Resolution
Important Note: The pg_num value of 128 in the ceph osd pool create commands is for reference only; the right number depends on your environment. Red Hat provides a Ceph Placement Groups (PGs) per Pool Calculator that can be used to arrive at an appropriate value.
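As a rough sketch of what such a calculator does, the common rule of thumb (about 100 PGs per OSD, divided by the replica count, rounded up to a power of two) can be computed directly. The OSD and replica counts below are placeholder values, not recommendations:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to a power of two.
# OSDS and REPLICAS are example values; substitute your cluster's numbers.
OSDS=9
REPLICAS=3
TARGET_PGS_PER_OSD=100
RAW=$(( OSDS * TARGET_PGS_PER_OSD / REPLICAS ))
PG_NUM=1
while [ "$PG_NUM" -lt "$RAW" ]; do
    PG_NUM=$(( PG_NUM * 2 ))
done
echo "pg_num = $PG_NUM"
```

With 9 OSDs and 3-way replication this yields 512; the fixed value of 128 used in this article would suit a smaller cluster.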
- The nodes running glance-api, cinder-volume, nova-compute, and cinder-backup act as Ceph clients. Each requires the ceph.conf file, which can be copied over from a Ceph node:
# ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
GLANCE:
- Install Ceph client packages on the glance-api node.
# yum install ceph-common
- Set up Ceph client authentication and the Ceph pool on one of the Ceph storage nodes. If you have cephx authentication enabled, create a new user for Glance. Execute the following:
# ceph osd pool create images 128
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
- Add the keyring for client.glance to the glance-api node and change its owner:group to glance:glance:
# ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
# ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
- Edit /etc/glance/glance-api.conf and add the following under the [DEFAULT] and [glance_store] sections in Juno:
[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
- Edit /etc/glance/glance-api.conf and add the following under the [glance_store] section in Kilo:
[glance_store]
stores=glance.store.rbd.Store,glance.store.http.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
- Restart the glance-api service:
# service openstack-glance-api restart
NOVA:
- Nodes running nova-compute use both the Python bindings and the client command-line tools. Install the Ceph client packages:
# yum install ceph-common
- Set up Ceph client authentication and the Ceph pool on one of the Ceph storage nodes:
# ceph osd pool create vms 128
- Nodes running nova-compute need the keyring file for the nova-compute process:
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
# ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
- They also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder.
- Create a temporary copy of the secret key on the nodes running nova-compute:
# ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
- On the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
- Save the uuid of the secret for configuring nova-compute later.
Important Note: You don't necessarily need the same UUID on all the compute nodes. However, from a platform-consistency perspective, it is better to use one UUID everywhere.
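The secret-definition steps above can be scripted so the freshly generated UUID is written straight into secret.xml; this is only a sketch of the manual steps, and the virsh secret-define and secret-set-value commands from the listing above then consume the file:

```shell
# Generate one UUID and embed it in the libvirt secret definition.
# Reuse this same UUID (and file) on every compute node for consistency.
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
echo "Secret UUID: ${UUID}"
```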
- From Juno onward, the Ceph block device settings live under the [libvirt] section. On every compute node, edit /etc/nova/nova.conf and add the following under the [libvirt] section:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
- It is also good practice to disable file injection. While booting an instance, Nova usually attempts to open the rootfs of the virtual machine and inject values such as passwords and SSH keys directly into the filesystem. It is better to rely on the metadata service and cloud-init instead.
- On every compute node, edit /etc/nova/nova.conf and add the following under the [libvirt] section:
inject_password = false
inject_key = false
inject_partition = -2
- To ensure a proper live-migration, use the following flags (under the [libvirt] section):
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
- Restart the nova-compute service:
# service openstack-nova-compute restart
CINDER:
- Nodes running cinder-volume and cinder-backup use both the Python bindings and the client command-line tools. Install the Ceph client packages:
# yum install ceph-common
- If you have cephx authentication enabled, create a new user for Cinder and one for Cinder Backup, and create two Ceph pools (one for Cinder, one for Cinder Backup) on one of the Ceph storage nodes. Execute the following:
# ceph osd pool create volumes 128
# ceph osd pool create backups 128
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
- Nodes running cinder-volume need the keyring file for the cinder-volume process:
# ceph auth get-or-create client.cinder | ssh {your-cinder-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
# ssh {your-cinder-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
- Nodes running cinder-backup need the keyring file for the cinder-backup process:
# ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
# ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
- OpenStack requires a driver to interact with Ceph block devices, and you must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf and add:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
- If you’re using cephx authentication, also configure the user and uuid of the secret you added to libvirt as documented earlier:
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
- Note that if you are configuring multiple Cinder back ends, glance_api_version = 2 must be in the [DEFAULT] section.
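For reference, a multi-back-end layout might look like the following; the section name ceph-rbd and the volume_backend_name value are assumptions for illustration, not required values:

```ini
[DEFAULT]
enabled_backends = ceph-rbd
glance_api_version = 2

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-rbd
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```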
- Configuring Cinder Backup: OpenStack Cinder Backup requires a specific daemon, so don't forget to install it. On your Cinder Backup node, edit /etc/cinder/cinder.conf and add:
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
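The chunk size above is expressed in bytes; as a quick sanity check on the value:

```shell
# backup_ceph_chunk_size is expressed in bytes: 134217728 bytes = 128 MiB.
CHUNK_BYTES=134217728
CHUNK_MIB=$(( CHUNK_BYTES / 1024 / 1024 ))
echo "${CHUNK_MIB} MiB"
```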
- Restart the cinder-volume and cinder-backup services:
# service openstack-cinder-volume restart
# service openstack-cinder-backup restart
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.