Should I use Ceph or ODF to back etcd for my OpenShift cluster?


Ceph, NFS, and spinning disks are not recommended. etcd is not a workload well suited to running on Red Hat Ceph Storage.

Whether deploying OpenShift on OpenStack or on another infrastructure, the control plane must meet known resource requirements to run the etcd key-value store. Fast disks are a requirement for etcd stability and performance. Red Hat requires that the control plane has access to fast disks (enterprise-class SSD or better) to ensure stability and guarantee supportability.
It is highly recommended that you use etcd with storage that handles synchronous writes quickly, such as NVMe or SSD.
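One way to check whether a disk handles synchronous writes quickly enough for etcd is the fio fdatasync benchmark. The sketch below assumes fio is installed and uses an illustrative test directory; adjust the path to the filesystem that will back etcd:

```shell
# Measure synchronous write (fdatasync) latency on the candidate etcd disk.
# The directory below is illustrative; point it at the actual etcd filesystem.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd/fio-test \
    --size=22m --bs=2300 --name=etcd-perf

# In the output, look at the fsync/fdatasync percentiles: the 99th
# percentile should stay below roughly 10 ms for stable etcd operation.
```

Clean up the test directory afterwards, since fio leaves its data files behind.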

If it is necessary to run etcd on Ceph or ODF, the following requirements should be met to ensure the environment is stable and reliable.

  • Ensure the Red Hat Ceph Storage Cluster is running BlueStore
  • Use SSD/NVMe class media instead of HDDs
  • Use smaller NVMe or SSD disk sizes; more, smaller OSDs provide more aggregate IOPS than fewer large ones
  • Use a dedicated pool for the etcd workload; do not share this pool or the OSDs backing the pool with any other applications
  • Dedicate the OSDs to the etcd pool only; these OSDs should not be included in the CRUSH ruleset of any other pool
  • Use replication size 2 and min_size 1
  • Size the network capacity of the OSD nodes to match the disk performance; this is especially important with NVMe media
  • Ensure proper CPU allocation per OSD
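
The pool-related requirements above might be provisioned along these lines; this is a sketch, and the rule name, pool name, device class, and PG counts are illustrative, not prescriptive:

```shell
# CRUSH rule that selects only SSD/NVMe-class OSDs
# ("etcd-rule" and the "ssd" device class are illustrative).
ceph osd crush rule create-replicated etcd-rule default host ssd

# Dedicated replicated pool for etcd, bound to that rule;
# do not share this pool or its OSDs with other applications.
ceph osd pool create etcd-pool 64 64 replicated etcd-rule

# Replication size 2 with min_size 1, as recommended above.
ceph osd pool set etcd-pool size 2
ceph osd pool set etcd-pool min_size 1
ceph osd pool application enable etcd-pool rbd
```

Verify with `ceph osd pool ls detail` that the pool uses the dedicated rule and the intended size/min_size before placing etcd on it.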

Review Recommended etcd practices for additional details on performance requirements and the key metrics to monitor for etcd performance. Optimizing storage for scalability and performance with OpenShift is also a useful reference.
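To monitor the key etcd performance signals in practice, etcd ships a built-in check and exposes disk-latency histograms. A sketch, assuming etcdctl is on the PATH with endpoints and certificates already configured in the environment:

```shell
# Run etcd's built-in performance check from a control-plane node.
etcdctl check perf

# Disk-latency histograms to watch in Prometheus: the 99th percentile
# of WAL fsync should stay below ~10 ms, and backend commit duration
# should stay low and stable.
#   etcd_disk_wal_fsync_duration_seconds_bucket
#   etcd_disk_backend_commit_duration_seconds_bucket
```

Sustained high percentiles on these histograms usually indicate the backing storage, not etcd itself, is the bottleneck.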

Please reach out to Red Hat Support with any questions about this issue.
