Red Hat Ceph Storage: Supported configurations
Red Hat Ceph Storage is tested and certified on a number of host operating systems and with a number of client applications. Red Hat provides production and/or development support for supported configurations according to your subscription agreement.
To run in a supported configuration, Red Hat Ceph Storage nodes must run one of the operating systems listed below.
Supported CPU Architectures
RHCS is supported on Intel and AMD x86-64 microprocessors and on select ARM64 platforms, including AWS Graviton instances, Azure Cobalt and Altra instances, and Ampere Altra and AmpereOne systems.
Supported Host Operating Systems
RHEL with 24-hour support is included in RHCS at no charge to the customer (RHEL EUS releases are not included, but are available as an option for an additional charge).
Red Hat Ceph Storage is tested and supported on the following host operating systems:
Red Hat Ceph Storage 9
| Vendor | Version |
|---|---|
| Red Hat Enterprise Linux | RHCS 9.0: 10.0, 10.1, 9.6, 9.7 |
Red Hat Ceph Storage 8
| Vendor | Version |
|---|---|
| Red Hat Enterprise Linux | RHCS 8.0: 9.4, 9.5, 9.6 |
RHCS 8 is deployed containerized only.
Red Hat Ceph Storage 7
| Vendor | Version |
|---|---|
| Red Hat Enterprise Linux | RHCS 7.0: 9.2, 9.3, 9.4, 8.10 |
| Red Hat Enterprise Linux | RHCS 7.1: 8.10, 9.4, 9.5, 9.6 |
RHCS 7 is deployed containerized only.
Red Hat Ceph Storage 6
| Vendor | Version |
|---|---|
| Red Hat Enterprise Linux | RHCS 6.0: 8.7, 8.8, 9.0, 9.1, 9.2 |
| Red Hat Enterprise Linux | RHCS 6.1: 8.8, 8.9, 8.10, 9.2, 9.3, 9.4, 9.5, 9.6 |
RHCS 6 is deployed containerized only.
Red Hat Ceph Storage 5 (ELS add-on required)
| Vendor | Version |
|---|---|
| Red Hat Enterprise Linux | RHCS 5.0: 8.4, 8.4 EUS*, 8.5 |
| Red Hat Enterprise Linux | RHCS 5.1: 8.4, 8.4 EUS*, 8.5, 8.6 |
| Red Hat Enterprise Linux | RHCS 5.2: 8.4 EUS*, 8.5, 8.6, 9.0, 9.1 |
| Red Hat Enterprise Linux | RHCS 5.3: 8.4 EUS*, 8.5, 8.6, 8.7, 8.8, 8.9, 8.10, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6 |
RHCS 5 is deployed containerized only.
Red Hat Ceph Storage 4 (ELS add-on required)
| Vendor | Version |
|---|---|
| Red Hat Enterprise Linux | RHCS 4.3: 8.2 EUS, 8.4 EUS, 8.5, 8.6, 8.7, 8.8, 7.9 |
| Red Hat Enterprise Linux | RHCS 4.2z4: 8.2, 8.2 EUS, 8.3, 8.4, 8.5 |
| Red Hat Enterprise Linux | RHCS 4.2: 8.2, 8.2 EUS, 8.3, 8.4 |
| Red Hat Enterprise Linux | RHCS 4.1: 8.2, 8.2 EUS, 8.3 |
| Red Hat Enterprise Linux | RHCS 4.0: 8.1 |
| Red Hat Enterprise Linux | RHCS 4.2: 7.9 |
| Red Hat Enterprise Linux | RHCS 4.1: 7.8, 7.9 |
| Red Hat Enterprise Linux | RHCS 4.0: 7.7 |
Supported Deployment Topologies
| Component | Standalone on bare metal | Virtualized* | Container | Co-located Options | Notes |
|---|---|---|---|---|---|
| OSD | Yes | Yes | Yes | A single instance of a containerized scale-out daemon can be co-located on each OSD host. See below for details. Limited Availability with RHOSP Compute nodes (“Hyper-converged OpenStack”). Virtualized with dedicated resource allocations that are not overcommitted per VM. Please contact Red Hat for more details. | Minimum of 3 nodes (solid-state technology) or 4 nodes (hard-disk technology) required. Only directly attached storage is supported. External SAN hardware, connected via FC or iSCSI, is not supported. Please see the published Ceph Hardware Configuration Guide for further details. Co-location of an OS root device with an OSD partition/LV is not supported. |
| MON (with MGR) | Yes | Yes | Yes | With OSD in a containerized environment. With RHOSP Controllers up to a maximum of 750 OSDs. With Controllers in the Dell EMC Ready Bundle for Red Hat OpenStack Platform up to a maximum of 1000 OSDs. Virtualized with anti-affinity measures observed and dedicated resource allocations that are not overcommitted per VM. | Minimum of 3 required. Red Hat recommends deploying 5 monitors once co-location rules allow it within the available hardware footprint. The MGR process should run on the same host as the MON and does not count toward an additional containerized daemon. |
| RGW (including NFS Gateway to RGW and Ingress Service) | Yes | Yes | Yes | With OSD in a containerized environment. RGW can be co-located with one additional scale-out daemon ("cardinality 2"). With RHOSP Controllers. | |
| MDS | Yes | Yes | Yes | With OSD in a containerized environment. Virtualized with anti-affinity measures observed and dedicated resource allocations that are not overcommitted per VM. MDS can be co-located with one additional scale-out daemon ("cardinality 2"). | MDS servers must have an identical configuration. |
| iSCSI Gateway | Yes | Yes | Yes | With OSD in a containerized environment. Virtualized with anti-affinity measures observed and dedicated resource allocations that are not overcommitted per VM. | 2 to 4 iSCSI gateways per Ceph cluster. The iSCSI gateway has been deprecated starting with RHCS 6. Supported use of IGW requires prior special agreement with Red Hat; contact Sales for further details. |
| NVMEoF Gateway | Yes | Yes | Yes | With OSD in a containerized environment. Virtualized with anti-affinity measures observed and dedicated resource allocations that are not overcommitted per VM. Requires x86-64-v4 CPUs (Skylake and above, AVX-512 support). | 2 to 4 NVMEoF gateways per Ceph cluster. NVMEoF gateways are not supported in RHCS; please contact IBM sales for access to IBM Storage Ceph.** |
| NFS Gateway | Yes | Yes | Yes | With OSD in a containerized environment. Virtualized with anti-affinity measures observed and dedicated resource allocations that are not overcommitted per VM. | 2 to 4 NFS gateways per Ceph cluster. NFS is supported only in combination with the OpenStack Manila service.** |
| SMB Gateway | Yes | Yes | Yes | With OSD in a containerized environment. Virtualized with anti-affinity measures observed and dedicated resource allocations that are not overcommitted per VM. | SMB gateways are not supported in RHCS; please contact IBM sales for access to IBM Storage Ceph.** |
| Dashboard | Yes | Yes | Yes | With OSD in a containerized environment. | |
| RBD Mirror | Yes | Yes | Yes | With OSD in a containerized environment. | |
| Grafana | Yes | Yes | Yes | With OSD in a containerized environment. Grafana can be co-located with one additional scale-out daemon ("cardinality 2"). | |
* Covers any hypervisor supported by RHEL.
** Red Hat Ceph Storage is a Storage Product supported in combination with Red Hat OpenStack or OpenShift products. The IBM Storage Ceph product, of which RHCS is an OEM version, supports general storage use cases, including NFS NAS or NVMEoF SAN. Contact IBM if intending to use Ceph as a general-purpose storage product. All IBM and Red Hat Ceph products ship from identical code and build options, but different products have different support scopes.
Supported Features
Ceph is a highly scalable system. Customers wishing to exceed the current support guidelines are advised to contact their account management team to obtain guidance from a Solutions Architect.
| Features | Notes |
|---|---|
| Replication | SSD: Replica counts (pool 'size' setting) of 2 or higher are supported and min_size of 1 or higher are supported. Customers are advised to evaluate the Mean Time Between Failure (MTBF) for the particular model of SSD used before using a replica value of 2. HDD: Replica counts (pool 'size' setting) of 3 or higher are supported and min_size of 2 or higher are supported. The choice between replicated, erasure coded, or compressed storage pools is directed by the workload in use. 2 datacenters, stretched cluster: Replica counts (pool 'size' setting) of 4 or higher are supported and min_size of 2 or higher are supported. Only replication is supported in a stretched cluster configuration. |
| Erasure Coding | With 4.0, supported with RGW and RBD. With 5.0, supported with RGW, RBD, and CephFS. CephFS and RBD EC use cases are archival only. Jerasure plugin only. Supported K/M values: 8+3, 8+4, 4+2, 2+2. Minimum number of nodes is equal to k+m+1. |
| CRUSH Failure Domain | Minimum of 'Host' required for Replication or Erasure Coding configuration to remain fully resilient after the first node loss |
| Instance HA in OpenStack Nova | Use of OpenStack Nova's instance HA functionality is incompatible with hyper-converged deployments. |
| Global Clusters (Multi-site) with RGW | Indexless buckets are not supported with multisite configurations. |
| Maximum storage per node | The cluster must be able to recover from the loss of a single node in less than 8 hours. This can be determined by dividing the storage per node by the rate of recovery. The rate of recovery is 25% of the surviving cluster's (less one node) aggregate media or network bandwidth, whichever is less. Please use the recovery calculator to determine recovery time; an illustrative calculation is sketched after this table. |
| Disk Size | There are no limits to individual OSD disk sizes. Refer to Maximum storage per node requirements. Shingled magnetic recording (SMR) technology is not supported. |
| Disks per OSD node | There are no limits to host disk count. Refer to Maximum storage per node requirements. |
| OSDs in a cluster | Up to 2,500 (5,000 with a review of the customer's architecture) |
| Block.db size | Minimum 2.5% (version 7.1 and newer) or 4% (version 7.0 and older) of block for Object, File, and mixed workloads. Minimum 1% of block for RBD/general OpenStack (cinder/glance) workloads. Please see the BlueStore FAQ for further details; a sizing sketch also follows this table. |
| RadosGW Index and Log Pool | If the media backing the index and log pools is SSD/NVMe media, there is no requirement to partition off a separate 2.5% for block.db; a single block partition (data) is all that is required. Index and log pools can share the same HDD media that the RGW data pool utilizes as long as these HDD OSDs have at least 2.5% of block allocated for block.db on an SSD/NVMe device. Please note the higher block.db requirements above for version 7.0 and older. Support exceptions can be reviewed for other use cases; reach out to Red Hat Support with any additional questions. |
| Usable Ratio | Enough free space must be available to allow restoring full resiliency in the event of a node failure. This is the fraction 1/n, where n is the number of storage nodes in the cluster. |
| Snapshots | Up to 512 snapshots per image are supported with RBD (the number is technically unlimited, but the volume's performance is negatively affected). |
| BlueStore | Starting with the 3.2 release. Use of the default allocator is required. Individual storage nodes are required to deploy the same backend across all OSDs. Compression is supported with the "snappy" algorithm. |
| CephFS Snapshots | Starting with 4.1 release. Only global snapshots (on the root directory) are supported, and snapshots on subvolumes via the volumes plugin used by the CSI interface in ODF "external mode". |
| Stretch clustering | Starting with the 4.2 release. When using RHCS in conjunction with ODF ("external mode") to serve OpenShift PVs. A minimum of 2 nodes per site is required, with latency < 10 ms RTT (OSD), < 100 ms RTT (MON). Solid-state or EBS storage with 2+2 replica configuration. Stretch clustering is not a general-purpose Ceph architecture; contact Red Hat Professional Services to discuss support for other use cases. |
| Networking Hardware | Red Hat Ceph Storage does not support InfiniBand network architecture in any mode, including IP over Infiniband. |
| SAN hardware | Red Hat Ceph Storage requires directly attached storage. External SAN hardware, connected via FC or iSCSI, is not supported. SAN hardware can be employed as supported by a VMware environment exclusively to host a virtualized Ceph cluster as a disaster recovery fail-over site. |
| Hot-swappable drives | Hot-swapping a live OSD drive is not supported. |
| Ceph Manager Modules | List of MGR Modules supported/not supported in RHCS. |
| FIPS 140-2 Cryptography | RHCS can be deployed with a FIPS 140-2 certified cryptographic module as supplied by an appropriate version of Red Hat Enterprise Linux installed in FIPS Mode. A modified version of the RHCS containers is required. The NVMEoF Gateway is not currently supported in FIPS mode. |
| Intel QAT Acceleration | Hardware accelerated compression in RGW requires RHEL 9.4 supplied QAT 2.0 drivers running on a Sapphire Rapids or Emerald Rapids Xeon CPU (or newer) with one or more QAT hardware devices. Refer to Intel Ark for further detail. |
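To illustrate the maximum-storage-per-node guideline above, here is a minimal sketch of the recovery-time arithmetic. It is not the Red Hat recovery calculator, and the function name and all example figures (node count, capacity, bandwidth) are hypothetical.

```python
# Sketch of the recovery-time guideline: recovery must complete in under 8 hours.
# Recovery rate = 25% of the surviving cluster's (less one node) aggregate
# media or network bandwidth, whichever is less.
def recovery_time_hours(storage_per_node_tb, node_count,
                        media_gbps_per_node, network_gbps_per_node):
    surviving_nodes = node_count - 1
    aggregate_gbps = min(media_gbps_per_node, network_gbps_per_node) * surviving_nodes
    recovery_gbps = 0.25 * aggregate_gbps
    data_bits = storage_per_node_tb * 8 * 1000**4        # TB -> bits (decimal units)
    return data_bits / (recovery_gbps * 1e9) / 3600      # seconds -> hours

# Hypothetical example: 10 nodes, 80 TB per node, ~20 Gbit/s media and 25 Gbit/s network per node.
print(round(recovery_time_hours(80, 10, 20, 25), 1))     # ~4.0 hours, within the 8-hour limit
```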
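The block.db minimums can likewise be expressed as a simple percentage calculation. The sketch below is illustrative only (the helper is not a Red Hat-supplied tool); the BlueStore FAQ remains the authoritative reference.

```python
# Sketch of the block.db minimums listed above (illustrative helper, not a shipped tool).
def min_block_db_bytes(block_bytes, workload, rhcs_version=(7, 1)):
    if workload == 'rbd':             # RBD / general OpenStack (cinder/glance) workloads
        ratio = 0.01
    elif rhcs_version >= (7, 1):      # Object, File, and mixed workloads on RHCS 7.1 and newer
        ratio = 0.025
    else:                             # Object, File, and mixed workloads on RHCS 7.0 and older
        ratio = 0.04
    return int(block_bytes * ratio)

# Hypothetical example: an 18 TB HDD OSD serving RGW on RHCS 7.1 needs ~450 GB of block.db on SSD/NVMe.
print(min_block_db_bytes(18 * 10**12, 'rgw') / 10**9)
```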
Supported Clients
A client is an application that connects to a Red Hat Ceph Storage cluster.
Red Hat will support the version of librbd as shipped with Red Hat Ceph Storage. However, the design, implementation, or debugging of custom client code that uses the library is not supported.
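As an illustration of this supported client path, the following is a minimal sketch using the Python bindings bundled with Ceph (the rados and rbd modules, which wrap librados and librbd). The configuration file path, pool name, and image name are placeholders.

```python
# Minimal RBD client sketch using the Python bindings shipped with Ceph.
# Placeholder values: /etc/ceph/ceph.conf, pool "rbd", image "demo-image".
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # read cluster and auth settings
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')                  # open an I/O context on a pool
    try:
        rbd.RBD().create(ioctx, 'demo-image', 1 * 1024**3)  # create a 1 GiB image
        with rbd.Image(ioctx, 'demo-image') as image:
            image.write(b'hello ceph', 0)              # write at offset 0
            print(image.read(0, 10))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```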
Red Hat will not support the use of librados in custom client code. Customers are advised to contact their account management team for guidance if they wish to use librados in their applications.
Third party products which use Ceph client libraries or Ceph kernel modules not shipped as part of Red Hat Ceph Storage are not supported. However, as long as the client versions are based on the same upstream release as Red Hat Ceph Storage, Red Hat will support and diagnose Red Hat Ceph Storage cluster issues related to their use.
The Ceph Object Gateway (RGW) supports the same APIs as Amazon S3 or OpenStack Swift. Some API calls are not yet implemented with RGW - please check the documentation for the latest supported API calls. Red Hat will not provide debugging or support for third party clients.
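For instance, an S3-compatible SDK such as boto3 can point directly at an RGW endpoint. The sketch below is illustrative only; the endpoint URL, credentials, and bucket name are placeholders, and such third-party clients remain outside Red Hat's support scope as noted above.

```python
# Illustrative S3 access against an RGW endpoint via boto3 (third-party client).
# The endpoint URL, access keys, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='RGW_ACCESS_KEY',
    aws_secret_access_key='RGW_SECRET_KEY',
)
s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'stored via the S3 API')
print([b['Name'] for b in s3.list_buckets()['Buckets']])
```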
iSCSI Gateway (IGW)
The iSCSI gateway is limited to 255 mapped LUNs per gateway.
| Initiator | Version | Supported |
|---|---|---|
| Windows Server | 2016 | Yes (clustering and Hyper-V are not supported) |
| RHEL | 7.4+ | Yes (clustering not supported), QEMU and KVM are supported |
| RHV | 4.x | Yes |
| VMware ESX | 6.5 and 6.7u3b | Yes (must disable ESXi/vSphere Hypervisor's XCOPY feature; aka HardwareAcceleratedMove) |
| VMware ESX | 6.7 and 7 | Yes (must disable ESXi/vSphere Hypervisor's XCOPY feature; aka HardwareAcceleratedMove), only on RHCS 5. |
The iSCSI Gateway has been deprecated and no longer ships in releases 6 and 7.
For more information on using the Ceph iSCSI gateway, please see the Red Hat Ceph Storage Block Device Guide.
Containerized Deployment
The Ansible-driven conversion of a bare metal cluster to a containerized configuration is supported both as part of the Red Hat OpenStack Platform product and also with the RHCS 4 Product. Contact Red Hat support to discuss migration of a stand-alone bare metal cluster or review the Migrating a Non-Containerized Red Hat Ceph Storage Cluster to a Containerized Environment document.
The combination of containerized cluster nodes with bare metal nodes in a single cluster is not supported unless the cluster is being migrated to containers.