Deploying Red Hat Ceph Storage 2 as a Container Image

You can deploy Red Hat Ceph Storage 1.3.2 and later as a container image. This article describes how to do that for Red Hat Ceph Storage 2. For details on deploying Red Hat Ceph Storage 1.3, see Deploying Red Hat Ceph Storage 1.3 as a Container Image (Technology Preview). For details on deploying Red Hat Ceph Storage 3, see the Container Guide.

IMPORTANT

  • Only container images based on Red Hat Ceph Storage 2.3 and higher are supported in production. The images based on previous versions of Red Hat Ceph Storage are still a Technology Preview and as such, they are not supported. For more information, see Technology Preview Features Support Scope.

  • Running kernel drivers in a privileged container is a target of further development and as such is not yet supported. These include the kernel RADOS Block Device (kRBD), the CephFS FUSE driver, and the CephFS kernel driver, in addition to existing technology previews.

This article describes how to:

  • Deploy Red Hat Ceph Storage 2 as a container image
  • Verify that Ceph nodes work properly
  • Start, stop, and restart Ceph daemons that run in a container
  • View log files of Ceph daemons
  • Purge a Ceph cluster that was created by using Ansible

Deploying Red Hat Ceph Storage 2 as a Container Image

Use the Ansible application with the ceph-ansible playbook to deploy Red Hat Ceph Storage 2 as a container image.

NOTE: A Ceph cluster used in production usually consists of ten or more nodes. To deploy Red Hat Ceph Storage as a container image, Red Hat recommends using a Ceph cluster that consists of at least three OSD and three Monitor nodes.

Prerequisites

  • Follow the procedures in the Prerequisites chapter in the Red Hat Ceph Storage 2 Installation Guide for Red Hat Enterprise Linux.

  • Enable the rhel-7-server-extras-rpms repository:

    # subscription-manager repos --enable=rhel-7-server-extras-rpms
    
  • Install the ceph-ansible package.

    1. Enable the Red Hat Ceph Storage 2 Tools repository:

       # subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms

    2. Install ceph-ansible:

       # yum install ceph-ansible
    

Procedure

  1. In the user’s home directory, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook.

    $ mkdir ~/ceph-ansible-keys
    
  2. Navigate to the /usr/share/ceph-ansible/ directory:

    $ cd /usr/share/ceph-ansible
    
  3. Create new copies of the yml.sample files located in the group_vars directory:

    # cp group_vars/all.yml.sample group_vars/all.yml
    # cp group_vars/osds.yml.sample group_vars/osds.yml
    # cp group_vars/rgws.yml.sample group_vars/rgws.yml
    # cp site-docker.yml.sample site-docker.yml
    

    NOTE: Copy the rgws.yml file only if you want to deploy the Ceph Object Gateway, too.
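    The copies above can also be made with a short shell loop. This is a sketch; the copy_samples function name and its directory argument are illustrative and not part of ceph-ansible:

```shell
# Sketch: copy each ceph-ansible sample file to its live name.
# copy_samples is a hypothetical helper; pass the ceph-ansible directory
# (defaults to /usr/share/ceph-ansible). Omit rgws.yml from the list if
# you do not deploy the Ceph Object Gateway.
copy_samples() {
    dir="${1:-/usr/share/ceph-ansible}"
    for f in group_vars/all.yml group_vars/osds.yml group_vars/rgws.yml site-docker.yml; do
        cp "${dir}/${f}.sample" "${dir}/${f}"
    done
}
```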

  4. Edit the copied files.

    a) In the group_vars/all.yml file, uncomment or add the following variables and set their appropriate values as follows.

    monitor_interface: [interface]
    radosgw_interface: [interface] 
    journal_size: 5120
    public_network: [ip-address/netmask]
    ceph_docker_image: rhceph/rhceph-2-rhel7
    ceph_docker_registry: registry.access.redhat.com 
    containerized_deployment: true
    

    Replace [interface] with the name of the interface that the Monitor and Ceph Object Gateway nodes listen on, and replace [ip-address/netmask] with the IP address and netmask of the Ceph public network.

    For example, the all.yml file can look like this:

    monitor_interface: eth0
    radosgw_interface: eth0
    journal_size: 5120
    public_network: 192.168.0.0/24
    ceph_docker_image: rhceph/rhceph-2-rhel7
    ceph_docker_registry: registry.access.redhat.com
    containerized_deployment: true
    

    For additional details, see the all.yml file.

    b) In the group_vars/osds.yml file, choose from the following variables and their values based on what scenario you want to deploy.

    osd_scenario: collocated|non-collocated
    osd_auto_discovery: true
    devices: [list]
    dedicated_devices: [list]
    dmcrypt: true|false
    osd_objectstore: filestore|bluestore
    

    Set the osd_scenario: collocated variable to use the same device for journal and OSD data.

    Set the osd_scenario: non-collocated variable to use a dedicated device to store journal data. In addition, specify the dedicated devices in the dedicated_devices variable.

    Set the devices variable to specify a list of devices. Set the osd_auto_discovery: true variable to instruct Ceph to automatically discover OSD devices. Use osd_auto_discovery: true only with osd_scenario: collocated.

    Set the dmcrypt: true variable to encrypt OSDs.

    Set the osd_objectstore variable to filestore or bluestore based on what OSD back end you want to use. NOTE: The BlueStore OSD back end is provided as a Technology Preview and as such it is not fully supported. For more information, see Technology Preview Features Support Scope.

    NOTE: osd_auto_discovery and devices cannot be used in conjunction.

    For example, the osds.yml file can look like this:

    osd_scenario: collocated
    devices:
        - /dev/sda
        - /dev/sdb
    dmcrypt: true
    

    For additional details, see the osds.yml file.
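    For comparison, a sketch of an osds.yml file for the non-collocated scenario. The device names are illustrative; dedicated_devices lists one journal device per entry in devices, so repeating /dev/sdc places both journals on the same device:

```yaml
osd_scenario: non-collocated
devices:
    - /dev/sda
    - /dev/sdb
dedicated_devices:
    - /dev/sdc
    - /dev/sdc
```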

    c) If you want to deploy the Ceph Object Gateway, edit the variables in the group_vars/rgws.yml file. For additional details, see the rgws.yml file. In addition, add the following variable to the group_vars/all.yml file:

    radosgw_interface: [interface]
    

    Replace [interface] with the interface that the Ceph Object Gateway node uses.

  5. Edit the Ansible inventory file located by default at /etc/ansible/hosts. Alternatively, create a new file and then specify it by using the -i parameter with the ansible-playbook command. Add the Monitor, Object Storage Device (OSD), and Ceph Object Gateway nodes, for example:

        [mons]
        monitor01
        monitor02
        monitor03

        [osds]
        osd01
        osd02
        osd03

        [rgws]
        rgw01
    

    NOTE: To change the section names, such as mons or osds, edit the mon_group_name and osd_group_name parameters in the group_vars/all.yml file. For example:

    mon_group_name: monitors
    osd_group_name: object_storage_daemons
    
  6. Run the ceph-ansible playbook:

    $ ansible-playbook site-docker.yml
    

    If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host, use the --skip-tags=with_pkg option:

    $ ansible-playbook --skip-tags=with_pkg site-docker.yml
    

Verifying That Ceph Nodes Work Properly

Monitors and OSD nodes:

  1. Connect to a monitor:

    $ ssh [hostname]
    

    Replace [hostname] with the host name of the Monitor node:

    $ ssh monitor01
    
  2. Check the health of the Ceph cluster:

    # docker exec ceph-mon-[hostname] ceph health
    

    Replace [hostname] with the host name of the Ceph Monitor:

    # docker exec ceph-mon-monitor01 ceph health
    

    The command returns the HEALTH_OK message if the Ceph cluster works properly.
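    If you script this check, note that ceph health prints HEALTH_OK, HEALTH_WARN ..., or HEALTH_ERR .... A minimal sketch of a helper that tests the result string (health_ok is a hypothetical name, not a Ceph command):

```shell
# Sketch: return success only when a `ceph health` result string is HEALTH_OK.
# health_ok is a hypothetical helper, not part of Ceph.
health_ok() {
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}

# Usage on a Monitor node (the hostname monitor01 is an example):
#   health_ok "$(docker exec ceph-mon-monitor01 ceph health)" && echo healthy
```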

Ceph Object Gateway nodes:

  1. Connect to a Monitor node:

    $ ssh [hostname]
    

    Replace [hostname] with the host name of the Monitor node:

    $ ssh monitor01
    
  2. Verify that the Ceph Object Gateway pools were created properly:

    # docker exec ceph-mon-[hostname] rados lspools
    

    Replace [hostname] with the host name of the Ceph Monitor:

    # docker exec ceph-mon-monitor01 rados lspools                               
    rbd
    cephfs_data
    cephfs_metadata
    .rgw.root
    default.rgw.control
    default.rgw.data.root
    default.rgw.gc
    default.rgw.log
    default.rgw.users.uid
    
  3. From any client on the same network as the Ceph cluster, for example the Monitor node, use the curl command to send an HTTP request on port 8080 using the IP address of the Ceph Object Gateway host:

    $ curl http://[ip-address]:8080
    

    Replace [ip-address] with the IP address of the Ceph Object Gateway node, for example:

    $ curl http://192.168.0.31:8080
    

    To determine the IP address of the Ceph Object Gateway host, use the ifconfig or ip commands.
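    For example, a small filter that prints the first IPv4 address from the one-line (-o) output of the ip command. The function name first_ipv4 is illustrative; it relies on the address/prefix being the fourth field of `ip -4 -o addr show`:

```shell
# Sketch: print the first IPv4 address found in `ip -4 -o addr show <iface>`
# output read from stdin. In one-line (-o) mode the address/prefix is the
# fourth field, e.g. "2: eth0    inet 192.168.0.31/24 ...".
first_ipv4() {
    awk '{split($4, a, "/"); print a[1]; exit}'
}

# Usage (eth0 is an example interface name):
#   ip -4 -o addr show eth0 | first_ipv4
```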

  4. Additionally, list buckets:

    # docker exec ceph-mon-[hostname] radosgw-admin bucket list
    

    Replace [hostname] with the host name of the Ceph Monitor:

    # docker exec ceph-mon-monitor01 radosgw-admin bucket list
    

Starting, Stopping, and Restarting Ceph Daemons that Run in a Container

To start, stop, or restart a Ceph daemon running in a container:

# systemctl [action] [daemon]@[service].[ID]

Where:

  • [action] is the action to perform; start, stop, or restart
  • [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-radosgw
  • [service] is the Ceph service; osd, mon, or rgw
  • [ID] is either
    • The device name that the ceph-osd daemon uses
    • The short host name where the ceph-mon or ceph-radosgw daemons are running

Example Commands

To restart a ceph-osd daemon that uses the /dev/sdb device:

# systemctl restart ceph-osd@osd.sdb

To start a ceph-mon daemon that runs on the monitor01 host:

# systemctl start ceph-mon@mon.monitor01

To stop a ceph-radosgw daemon that runs on the rgw01 host:

# systemctl stop ceph-radosgw@rgw.rgw01
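The unit-name pattern above can be captured in a small helper. unit_name is a hypothetical function and is not shipped with Red Hat Ceph Storage; only the ceph-osd, ceph-mon, and ceph-radosgw daemon names and the [service].[ID] suffix come from the format described above:

```shell
# Sketch: build the systemd unit name for a containerized Ceph daemon from
# the service (osd, mon, or rgw) and the ID (device name or short host name).
# unit_name is a hypothetical helper, not shipped with Red Hat Ceph Storage.
unit_name() {
    service="$1" id="$2"
    case "$service" in
        osd) daemon=ceph-osd ;;
        mon) daemon=ceph-mon ;;
        rgw) daemon=ceph-radosgw ;;
        *)   echo "unknown service: $service" >&2; return 1 ;;
    esac
    printf '%s@%s.%s\n' "$daemon" "$service" "$id"
}

# Usage:
#   systemctl restart "$(unit_name osd sdb)"   # restarts ceph-osd@osd.sdb
```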

NOTE

In previous releases of Red Hat Ceph Storage, the aforementioned commands used a different format:

# systemctl [action] [daemon]@[ID]

Where:

  • [action] is the action to perform; start, stop, or restart
  • [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-rgw
  • [ID] is either
    • The device name that the ceph-osd daemon uses
    • The short host name where the ceph-mon or ceph-rgw daemons are running

Note especially that ceph-rgw was used instead of ceph-radosgw.

Viewing Log Files of Ceph Daemons

To view the entire Ceph log file from a container, use the journald daemon from the container host:

# journalctl -u [daemon]@[service].[ID]

Where:

  • [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-radosgw
  • [service] is the Ceph service; osd, mon, or rgw
  • [ID] is either
    • The device name that the ceph-osd daemon uses
    • The short host name where the ceph-mon or ceph-radosgw daemons are running

To show the recent journal entries and continuously follow the journal, use the -f option:

# journalctl -fu [daemon]@[service].[ID]

Example Commands

To view the entire log for the ceph-osd daemon that uses the /dev/sdb device:

# journalctl -u ceph-osd@osd.sdb

To view only recent journal entries for the ceph-mon daemon that runs on the monitor01 host:

# journalctl -fu ceph-mon@mon.monitor01

NOTE

In previous releases of Red Hat Ceph Storage, the aforementioned commands used a different format:

# journalctl -u [daemon]@[ID]

Where:

  • [daemon] is the Ceph daemon; ceph-osd, ceph-mon, or ceph-rgw
  • [ID] is either
    • The device name that the ceph-osd daemon uses
    • The short host name where the ceph-mon or ceph-rgw daemons are running

Note especially that ceph-rgw was used instead of ceph-radosgw.

Purging a Ceph Cluster That Was Created by Using Ansible

To remove all packages, containers, configuration files, and all the data created by the ceph-ansible playbook:

$ ansible-playbook purge-docker-cluster.yml

To specify a different inventory file than the default one (/etc/ansible/hosts), use the -i parameter:

$ ansible-playbook purge-docker-cluster.yml -i [inventory-file]

Replace [inventory-file] with the path to the inventory file.

To skip the removal of the Ceph container image, use the --skip-tags="remove_img" option:

$ ansible-playbook --skip-tags="remove_img" purge-docker-cluster.yml

To skip the removal of the packages that were installed by ceph-ansible, use the --skip-tags="with_pkg" option:

$ ansible-playbook --skip-tags="with_pkg" purge-docker-cluster.yml
