Deploying Red Hat Ceph Storage 2 as a Container Image
You can deploy Red Hat Ceph Storage 1.3.2 and later as a container image. This article describes how to do that for Red Hat Ceph Storage 2. For details on deploying Red Hat Ceph Storage 1.3, see Deploying Red Hat Ceph Storage 1.3 as a Container Image (Technology Preview). For details on deploying Red Hat Ceph Storage 3, see the Container Guide.
IMPORTANT
-
Only container images based on Red Hat Ceph Storage 2.3 and higher are supported in production. The images based on previous versions of Red Hat Ceph Storage are still a Technology Preview and as such, they are not supported. For more information, see Technology Preview Features Support Scope.
-
Running kernel drivers in a privileged container is a target of further development and as such is not yet supported. These include the kernel RADOS Block Device (kRBD), the CephFS FUSE driver, and the CephFS kernel driver, in addition to existing technology previews.
This article describes how to:
- Deploy Red Hat Ceph Storage as a Container Image
- Verify That the Ceph Nodes Work Properly
- Start, Stop, or Restart Ceph Daemons that Run in Containers
- View Log Files of Ceph Daemons that Run in Containers
- Purge a Ceph Cluster Deployed as a Container Image
Deploying Red Hat Ceph Storage 2 as a Container Image
Use the Ansible application with the ceph-ansible playbook to deploy Red Hat Ceph Storage 2 as a container image.
NOTE: A Ceph cluster used in production usually consists of ten or more nodes. To deploy Red Hat Ceph Storage as a container image, Red Hat recommends using a Ceph cluster that consists of at least three OSD nodes and three Monitor nodes.
Prerequisites
-
Follow the procedures in the Prerequisites chapter in the Red Hat Ceph Storage 2 Installation Guide for Red Hat Enterprise Linux.
-
Enable the rhel-7-server-extras-rpms repository:
# subscription-manager repos --enable=rhel-7-server-extras-rpms
-
Install the ceph-ansible package.
a) Enable the Red Hat Ceph Storage 2 Tools repository:
# subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms
b) Install ceph-ansible:
# yum install ceph-ansible
Procedure
-
In the user’s home directory, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook:
$ mkdir ~/ceph-ansible-keys
-
Navigate to the /usr/share/ceph-ansible/ directory:
$ cd /usr/share/ceph-ansible
-
Create new copies of the yml.sample files located in the group_vars directory:
# cp group_vars/all.yml.sample group_vars/all.yml
# cp group_vars/osds.yml.sample group_vars/osds.yml
# cp group_vars/rgws.yml.sample group_vars/rgws.yml
# cp site-docker.yml.sample site-docker.yml
NOTE: Copy the rgws.yml file only if you want to deploy the Ceph Object Gateway.
-
Edit the copied files.
a) In the group_vars/all.yml file, uncomment or add the following variables and set their values as follows:
monitor_interface: [interface]
radosgw_interface: [interface]
journal_size: 5120
public_network: [ip-address/netmask]
ceph_docker_image: rhceph/rhceph-2-rhel7
ceph_docker_registry: registry.access.redhat.com
containerized_deployment: true
Replace [interface] with the interface that the Monitor nodes listen on. In addition, specify the IP address and the netmask of the Ceph public network.
An example of the all.yml file can look like:
monitor_interface: eth0
radosgw_interface: eth0
journal_size: 5120
public_network: 192.168.0.0/24
ceph_docker_image: rhceph/rhceph-2-rhel7
ceph_docker_registry: registry.access.redhat.com
containerized_deployment: true
For additional details, see the all.yml file.
b) In the group_vars/osds.yml file, choose from the following variables and set their values based on the scenario you want to deploy:
osd_scenario: collocated|non-collocated
osd_auto_discovery: true
devices: [list]
dedicated_devices: [list]
dmcrypt: true|false
osd_objectstore: filestore|bluestore
Set the osd_scenario: collocated variable to use the same device for journal and OSD data.
Set the osd_scenario: non-collocated variable to use a dedicated device to store journal data. In addition, specify the dedicated devices in the dedicated_devices variable.
Set the devices variable to specify a list of devices. Set the osd_auto_discovery: true variable to instruct Ceph to automatically discover OSD devices. Use osd_auto_discovery: true only with osd_scenario: collocated.
Set the dmcrypt: true variable to encrypt OSDs.
Set the osd_objectstore variable to filestore or bluestore based on which OSD back end you want to use. NOTE: The BlueStore OSD back end is provided as a Technology Preview and as such it is not fully supported. For more information, see Technology Preview Features Support Scope.
NOTE: The osd_auto_discovery and devices variables cannot be used together.
An example of the osds.yml file can look like:
osd_scenario: collocated
devices:
  - /dev/sda
  - /dev/sdb
dmcrypt: true
For additional details, see the osds.yml file.
c) If you want to deploy the Ceph Object Gateway, edit the variables in the group_vars/rgws.yml file. For additional details, see the rgws.yml file. In addition, add the following variable to the group_vars/all.yml file:
radosgw_interface: [interface]
Replace [interface] with the interface that the Ceph Object Gateway node uses.
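For the non-collocated scenario described above, a sketch of what group_vars/osds.yml might look like, assuming OSD data on /dev/sda and /dev/sdb with journals on a dedicated device /dev/sdc (all device names here are hypothetical, not taken from this article):

```yaml
# Hypothetical non-collocated layout: journal space for both OSDs
# is carved out of the dedicated device /dev/sdc.
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
dedicated_devices:
  - /dev/sdc
  - /dev/sdc
dmcrypt: false
```

Each entry in devices is paired with the entry at the same position in dedicated_devices, so listing /dev/sdc twice places both journals on that one device.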
-
Edit the Ansible inventory file, located by default at /etc/ansible/hosts. Alternatively, create a new file and then specify it by using the -i parameter with the ansible-playbook command. Add the Monitor, Object Storage Device (OSD), and Ceph Object Gateway nodes, for example:
[mons]
monitor01
monitor02
monitor03
[osds]
osd01
osd02
osd03
[rgws]
rgw01
NOTE: To change the section names, such as mons or osds, edit the mon_group_name and osd_group_name parameters in the group_vars/all file. For example:
mon_group_name: monitors
osd_group_name: object_storage_daemons
-
Run the ceph-ansible playbook:
$ ansible-playbook site-docker.yml
If you deploy Red Hat Ceph Storage on Red Hat Enterprise Linux Atomic Host, use the --skip-tags=with_pkg option:
$ ansible-playbook --skip-tags=with_pkg site-docker.yml
Verifying That Ceph Nodes Work Properly
Monitors and OSD nodes:
-
Connect to a Monitor node:
$ ssh [hostname]
Replace [hostname] with the host name of the Monitor node:
$ ssh monitor01
-
Check the health of the Ceph cluster:
# docker exec ceph-mon-[hostname] ceph health
Replace [hostname] with the host name of the Ceph Monitor:
# docker exec ceph-mon-monitor01 ceph health
The command returns the HEALTH_OK message if the Ceph cluster works properly.
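If you script this check, the health string itself is easy to test. A minimal sketch, assuming a Monitor container named ceph-mon-monitor01 (the is_healthy helper and the host name are illustrative, not part of ceph-ansible):

```shell
#!/bin/sh
# Return success only when a "ceph health" report begins with HEALTH_OK.
is_healthy() {
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}

# In a real deployment you would feed it live output, for example:
#   is_healthy "$(docker exec ceph-mon-monitor01 ceph health)"
is_healthy "HEALTH_OK" && echo "cluster healthy"
```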
Ceph Object Gateway nodes:
-
Connect to a Monitor node:
$ ssh [hostname]
Replace [hostname] with the host name of the Monitor node:
$ ssh monitor01
-
Verify that the Ceph Object Gateway pools were created properly:
# docker exec ceph-mon-[hostname] rados lspools
Replace [hostname] with the host name of the Ceph Monitor:
# docker exec ceph-mon-monitor01 rados lspools
rbd
cephfs_data
cephfs_metadata
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
-
From any client on the same network as the Ceph cluster, for example the Monitor node, use the curl command to send an HTTP request on port 8080 using the IP address of the Ceph Object Gateway host:
$ curl http://[ip-address]:8080
Replace [ip-address] with the IP address of the Ceph Object Gateway node, for example:
$ curl http://192.168.0.0:8080
To determine the IP address of the Ceph Object Gateway host, use the ifconfig or ip commands.
-
Additionally, list buckets:
# docker exec ceph-mon-[hostname] radosgw-admin bucket list
Replace [hostname] with the host name of the Ceph Monitor:
# docker exec ceph-mon-monitor01 radosgw-admin bucket list
Starting, Stopping, and Restarting Ceph Daemons that Run in a Container
To start, stop, or restart a Ceph daemon running in a container:
# systemctl [action] [daemon]@[service].[ID]
Where:
- [action] is the action to perform; start, stop, or restart
- [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-radosgw
- [service] is the Ceph service; osd, mon, or rgw
- [ID] is either:
  - The device name that the ceph-osd daemon uses
  - The short host name where the ceph-mon or ceph-radosgw daemons are running
Example Commands
To restart a ceph-osd daemon that uses the /dev/sdb device:
# systemctl restart ceph-osd@osd.sdb
To start a ceph-mon daemon that runs on the monitor01 host:
# systemctl start ceph-mon@mon.monitor01
To stop a ceph-radosgw daemon that runs on the rgw01 host:
# systemctl stop ceph-radosgw@rgw.rgw01
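The unit names in these examples are simply the [daemon]@[service].[ID] template filled in; a small sketch composing one (the values are illustrative):

```shell
#!/bin/sh
# Compose a systemd unit name for a containerized OSD on device /dev/sdb.
daemon=ceph-osd
service=osd
id=sdb    # device name without the /dev/ prefix

unit="${daemon}@${service}.${id}"
echo "$unit"    # prints ceph-osd@osd.sdb
```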
NOTE
In previous releases of Red Hat Ceph Storage, the aforementioned commands used a different format:
# systemctl [action] [daemon]@[ID]
Where:
- [action] is the action to perform; start, stop, or restart
- [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-rgw
- [ID] is either:
  - The device name that the ceph-osd daemon uses
  - The short host name where the ceph-mon or ceph-rgw daemons are running
Note especially that ceph-rgw was used instead of ceph-radosgw.
See Also
- The Running Ceph as a systemd Service section in the Administration Guide for Red Hat Ceph Storage
Viewing Log Files of Ceph Daemons
To view the entire Ceph log file from a container, use the journald daemon from the container host:
# journalctl -u [daemon]@[service].[ID]
Where:
- [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-radosgw
- [service] is the Ceph service; osd, mon, or rgw
- [ID] is either:
  - The device name that the ceph-osd daemon uses
  - The short host name where the ceph-mon or ceph-radosgw daemons are running
To show only the most recent journal entries and continuously print new entries as they are written, use the -f option:
# journalctl -fu [daemon]@[service].[ID]
Example Commands
To view the entire log for the ceph-osd daemon that uses the /dev/sdb device:
# journalctl -u ceph-osd@osd.sdb
To view only recent journal entries for the ceph-mon daemon that runs on the monitor01 host:
# journalctl -u ceph-mon@mon.monitor01
NOTE
In previous releases of Red Hat Ceph Storage, the aforementioned commands used a different format:
# journalctl -u [daemon]@[ID]
Where:
- [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-rgw
- [ID] is either:
  - The device name that the ceph-osd daemon uses
  - The short host name where the ceph-mon or ceph-rgw daemons are running
Note especially that ceph-rgw was used instead of ceph-radosgw.
Purging a Ceph Cluster That Was Created by Using Ansible
To remove all packages, containers, configuration files, and all the data created by the ceph-ansible playbook:
$ ansible-playbook purge-docker-cluster.yml
To specify a different inventory file than the default one (/etc/ansible/hosts), use the -i parameter:
$ ansible-playbook purge-docker-cluster.yml -i [inventory-file]
Replace [inventory-file] with the path to the inventory file.
To skip the removal of the Ceph container image, use the --skip-tags="remove_img" option:
$ ansible-playbook --skip-tags="remove_img" purge-docker-cluster.yml
To skip the removal of the packages that were installed by ceph-ansible, use the --skip-tags="with_pkg" option:
$ ansible-playbook --skip-tags="with_pkg" purge-docker-cluster.yml
Additional Resources
- Deploying Red Hat Ceph Storage 1.3 as a Container Image (Technology Preview)
- Updating Red Hat Ceph Storage Deployed as a Container Image
- Red Hat Ceph Storage documentation
- Getting Started with Containers
- Ansible documentation