Performance tuning guide for Multicloud Object Gateway (NooBaa)
Issue
OpenShift Data Foundation's default configuration for the Multicloud Object Gateway is optimized for low resource consumption rather than performance.
Resolution
Considerations
- Large files
In this case, the metadata-to-data ratio is low. Increasing the endpoints' resources (memory and CPU) and the number of endpoints would be the first thing to do. For namespace buckets, increasing memory alone may be sufficient. CPU matters mainly for data buckets, where the endpoints use it for encryption and deduplication.
- Small objects
In this case, the metadata-to-data ratio is high. For data buckets, this means heavy involvement of the core and the DB, so increasing those pods' resources would be the first step. Endpoint memory will probably not be under pressure as long as the core and DB respond quickly. If they do not respond quickly enough, back pressure builds up and the endpoints eventually come under pressure as well. In this case, increase both the core and DB resources, with more emphasis on the DB itself.
- A high number of configuration entities, such as a large number of buckets or accounts
This also points to the DB and core, with more emphasis on the core.
When using namespace buckets, increasing the endpoints' memory and the DB's memory and CPU would be the first step.
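To decide which component to scale first, it helps to look at actual consumption. The sketch below filters `oc adm top pods` output down to the NooBaa pods; the sample output is hypothetical and stands in for live cluster output (pod names and values will differ):

```shell
# Sample `oc adm top pods -n openshift-storage` output (hypothetical
# values); on a live cluster use instead:
#   top_output=$(oc adm top pods -n openshift-storage --no-headers)
top_output='noobaa-core-0          250m   1200Mi
noobaa-db-pg-0         400m   2100Mi
noobaa-endpoint-5d9f   900m   800Mi
rook-ceph-mon-a        50m    300Mi'

# Keep only the NooBaa pods with their CPU and memory usage.
echo "$top_output" | awk '/noobaa/ {print $1, $2, $3}'
```

Whichever of the core, DB, or endpoint pods sits closest to its limit is the first candidate for more resources, per the considerations above.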
As mentioned above, these are the main variables that impact the performance of the Multicloud Object Gateway (MCG), ordered by impact:
- MCG database resources - You need to increase CPU and memory per the workload characteristics.
- MCG auto-scale min/max size - This improves the response to peaks, but scaling has a delay before it kicks in, so it is important to set both the minimum and maximum size.
- MCG Core resources - You need to increase CPU and memory per the workload characteristics.
- Make sure you connect to the NooBaa endpoint using its service address, "https://s3.openshift-storage.svc" or "http://s3.openshift-storage.svc", since this connects directly to the NooBaa endpoints.
- You can adjust the auto-scaling with a command like this:
oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{"spec": {"multiCloudGateway": {"endpoints": {"minCount": 3,"maxCount": 10}}}}'
This sets the NooBaa endpoint Horizontal Pod Autoscaler to deploy at least 3 Pods and scale up to 10 Pods when needed. The default is to deploy at least 1 Pod and at most 2 Pods.
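Since a malformed patch only fails at the API server, it can help to keep the patch in a shell variable and syntax-check it locally first. A minimal sketch (the oc call itself requires cluster access and is therefore commented out):

```shell
# The endpoint min/max patch from above, kept in a variable so it can
# be validated locally before being sent to the cluster.
ENDPOINTS_PATCH='{"spec": {"multiCloudGateway": {"endpoints": {"minCount": 3,"maxCount": 10}}}}'

# Syntax-check the JSON; this fails fast on a typo.
echo "$ENDPOINTS_PATCH" | python3 -m json.tool > /dev/null && echo "patch OK"
# → patch OK

# Apply it (requires cluster access):
# oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch "$ENDPOINTS_PATCH"
```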
- Tuning MCG core and database resources can be done via the storage cluster CR.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  creationTimestamp: "2022-01-15T13:40:42Z"
  generation: 4
  name: ocs-storagecluster
  namespace: openshift-storage
  resourceVersion: "29406833"
  selfLink: /apis/ocs.openshift.io/v1/namespaces/openshift-storage/storageclusters/ocs-storagecluster
  uid: 9f970119-379c-11ea-949c-02bb7e7e425c
spec:
  manageNodes: false
  resources:
    noobaa-core:
      limits:
        cpu: "3" <-----
        memory: "4Gi" <-----
      requests:
        cpu: "3" <-----
        memory: "4Gi" <-----
    noobaa-db:
      limits:
        cpu: "3" <-----
        memory: "4Gi" <-----
      requests:
        cpu: "3" <-----
        memory: "4Gi" <-----
    noobaa-endpoint:
      limits:
        cpu: "3" <-----
        memory: "4Gi" <-----
      requests:
        cpu: "3" <-----
        memory: "4Gi" <-----
  storageDeviceSets:
  .
  .
  .
You can apply the above values by executing this command:
oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{"spec": {"resources": {"noobaa-core": {"limits": {"cpu": "3","memory": "4Gi"},"requests": {"cpu": "3","memory": "4Gi"}},"noobaa-db": {"limits": {"cpu": "3","memory": "4Gi"},"requests": {"cpu": "3","memory": "4Gi"}},"noobaa-endpoint": {"limits": {"cpu": "3","memory": "4Gi"},"requests": {"cpu": "3","memory": "4Gi"}}}}}'
If you only need to increase certain resources, this patch command will suffice (example for the noobaa-endpoint):
oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{"spec": {"resources": {"noobaa-endpoint": {"limits": {"cpu": "4","memory": "6Gi"},"requests": {"cpu": "4","memory": "6Gi"}}}}}'
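After patching, it is worth confirming that the new values actually landed on the endpoint deployment. The extraction below is shown against a sample (hypothetical) JSON fragment standing in for `oc get deployment noobaa-endpoint -n openshift-storage -o json` output, so the filter can be demonstrated end to end:

```shell
# Hypothetical fragment of the deployment JSON; on a live cluster use:
#   deploy_json=$(oc get deployment noobaa-endpoint -n openshift-storage -o json)
deploy_json='{"spec":{"template":{"spec":{"containers":[{"name":"endpoint","resources":{"limits":{"cpu":"4","memory":"6Gi"},"requests":{"cpu":"4","memory":"6Gi"}}}]}}}}'

# Print the effective limits of the first container.
echo "$deploy_json" | python3 -c 'import json,sys; r = json.load(sys.stdin)["spec"]["template"]["spec"]["containers"][0]["resources"]; print(r["limits"]["cpu"], r["limits"]["memory"])'
# → 4 6Gi
```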
- When using a PV pool backing store, you may not get the expected performance due to low default resource values. To change them, open the OpenShift console -> Storage -> Object Storage -> Backing Store, select the relevant backing store, and click on YAML.
Look for spec -> pvPool and update the requests with CPU and memory. Add a new limits property with CPU and memory as well. For example:
spec:
  pvPool:
    resources:
      limits:
        cpu: 1000m
        memory: 4000Mi
      requests:
        cpu: 800m
        memory: 800Mi
        storage: 50Gi
Alternatively, use the oc patch command:
oc patch BackingStore <backing store name> -n openshift-storage --type='merge' -p '{
"spec": {
"pvPool": {
"resources": {
"limits": {
"cpu": "1000m",
"memory": "4000Mi"
},
"requests": {
"cpu": "500m",
"memory": "500Mi"
}
}
}
}
}'
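The same local syntax check applies to the backing store patch. A sketch (the backing store name stays a placeholder, and the oc call requires cluster access):

```shell
# The pvPool patch from above as a one-line variable.
BS_PATCH='{"spec":{"pvPool":{"resources":{"limits":{"cpu":"1000m","memory":"4000Mi"},"requests":{"cpu":"500m","memory":"500Mi"}}}}}'

# Validate the JSON locally before applying.
echo "$BS_PATCH" | python3 -m json.tool > /dev/null && echo "patch OK"
# → patch OK

# Apply it (requires cluster access):
# oc patch BackingStore <backing store name> -n openshift-storage --type='merge' -p "$BS_PATCH"
```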
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.