Setting MaxRAM on DG 8 pod
Environment
- OpenJDK 8 (cgroups v1), 11, 17 (cgroups v2)
- Red Hat OpenShift Container Platform (OCP) 4.x
- Red Hat Data Grid (RHDG) 8.x
Issue
- How to set MaxRAM on DG 8 pod?
- Should we set MaxRAM on DG 8?
- How does the DG 8 pod calculate the max heap from the pod size?
Resolution
The Java/DG process is container aware, so there is no need to set MaxRAM; setting MaxRAM should be avoided.
Java is container aware: the JVM already detects the maximum memory size of the container through cgroups (container size == pod size, since DG pods have only one container).
As explained in Usage of Java flags InitialRAMPercentage and MaxRAMPercentage and Should -XX:+UseContainerSupport flag be used in OpenJDK images?, the JVM determines the total maximum size of the container from cgroups; in other words, the pod's JVM is container aware.
In a container (cgroups v1 or v2) based on a ubi8 image, the heap will be 50% of the container memory (set via Xmx), and in a ubi9-based container it will be 80%.
Therefore, by default, the heap takes 50% (or 80%) of the pod memory regardless of the pod size. This default should be used because it scales with any pod size: only the memory resource limits need to be set on the Deployment or custom resource (the Infinispan CR, for example), relying on the container awareness property.
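For reference, relying on container awareness means only the pod's memory limit has to be managed. A minimal sketch of an Infinispan CR setting that limit (names and values are illustrative; check the Data Grid Operator documentation for the exact fields supported by your version):

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan   # hypothetical cluster name
spec:
  replicas: 2
  container:
    # The JVM derives its heap (50% or 80%) from this limit via cgroups;
    # no MaxRAM or Xmx setting is needed.
    memory: 2Gi
```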
On the other hand, the off-heap (native) memory is calculated as the remainder: total memory minus heap. With a 50% heap, the remaining 50% is available as off-heap. Overriding the default via Xmx can produce a heap larger or smaller than 50% of the pod, but the off-heap will still take the remaining percentage.
Although not recommended (and not optimal in 99% of cases, given that it decouples the heap from the container size), in some very specific cases where the cgroups OOM killer terminates the pod (not the system OOM killer), it is possible to set the maximum heap+off-heap below the pod's maximum resource size, avoiding cgroups OOM kills/interruptions on the pods. For this behavior, set the MaxRAM JVM flag, which hard-codes the JVM's maximum heap+off-heap memory to a value lower than the pod limit.
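As a sketch of this behavior on any OpenJDK 11+ binary (the 50% value mimics the ubi8 image default and is passed explicitly here; the pod sizes are assumptions for illustration):

```shell
# Suppose the pod limit is 4 GiB, but the JVM is capped at 3 GiB (heap+off-heap).
# With MaxRAMPercentage=50, the heap becomes 50% of MaxRAM, not 50% of the pod:
java -XX:MaxRAM=3g -XX:MaxRAMPercentage=50 -XX:+PrintFlagsFinal -version \
  | grep -wE 'MaxRAM|MaxHeapSize'
# MaxRAM reports 3221225472 (3 GiB, the hard-coded cap) and MaxHeapSize is
# derived from it (about 1.5 GiB), leaving roughly 1 GiB of the pod unused.
```

Even if the pod is later resized, these values stay fixed, which is exactly the decoupling described above.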
The major implication is the decoupling of the pod resources from the DG 8 Java process: if the pod size is changed later, the heap+off-heap of the Java process will remain at the hard-coded value.
Note that this is not optimal: the difference between the pod size and the heap+off-heap will not be used by other processes, given that DG 8 runs only one Java process (shell/bash connections use some resources, but nothing comparable to the JVM).
When the heap crosses the JVM limit, the process will exit (if the -XX:+ExitOnOutOfMemoryError flag is set) and the pod will exit.
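To confirm the flag is enabled on a given JVM, a quick sketch (the flag is -XX:+ExitOnOutOfMemoryError, available since OpenJDK 8u92):

```shell
# PrintFlagsFinal shows whether ExitOnOutOfMemoryError was set on the command line
java -XX:+ExitOnOutOfMemoryError -XX:+PrintFlagsFinal -version \
  | grep -w ExitOnOutOfMemoryError
# the value column reads "true" with origin {command line}
```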
Related solutions:
| Solution | Purpose |
|---|---|
| How to set QoS on DG 8 pods in OCP 4 | Setting QoS on the pod |
| Guidelines for customizing JVM flags in DG 8 images | For Xmx, MaxRAM and off-heap discussion |
| Deprecated service type Cache in DG 8 in OCP 4 | Discussion about deprecated type cache and its Serial GC collector |
| Usage of Java flags InitialRAMPercentage and MaxRAMPercentage and Should -XX:+UseContainerSupport flag be used in OpenJDK images? | About container awareness |
Root Cause
First, MaxRAM is set with the service type Cache, but not with DataGrid, as explained in Deprecated service type Cache in DG 8 in OCP 4.
Java is already container aware, so it is not necessary to set this value. Although neither recommended nor optimal (from a resource-management point of view), the MaxRAM flag can be used to limit the total memory usage (heap+off-heap) of any Java process. However, this decouples the pod's resources from the Java process, which can cause trouble later on.
What happens when setting MaxRAM
If MaxRAM is set, it becomes the de facto container limit, and Xmx will be calculated from this value. For example, DG/JWS/EAP images default the heap to 50% of the container; when MaxRAM is set, it becomes the reference for that 50%.
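A minimal sketch comparing the two references on a workstation (1g and 50 are arbitrary example values):

```shell
# Without MaxRAM: the 50% applies to the detected container/host memory.
java -XX:MaxRAMPercentage=50 -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize

# With MaxRAM=1g: the same 50% now applies to the 1 GiB cap instead.
java -XX:MaxRAM=1g -XX:MaxRAMPercentage=50 -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize
# MaxHeapSize = 536870912 (512 MiB, i.e. 50% of MaxRAM)
```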
Additional references
Second, see Guidelines for customizing JVM flags in DG 8 images for details on cgroups, Xmx usage, and total RAM usage.
Finally, for the ubi 8/9 percentage change, see UBI 9 OpenJDK images have MaxRAMPercentage set at 50/50.
Diagnostic Steps
Examples:

```
$ java -XX:+PrintFlagsFinal -version | grep -i MaxRAM
uint64_t MaxRAM = 137438953472 {pd product} {default}
uintx MaxRAMFraction = 4 {product} {default}
double MaxRAMPercentage = 25.000000 {product} {default}
openjdk version "11.0.12" 2021-07-20 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, sharing)
```
In the example above, MaxRAM shows 137438953472 bytes (128 GiB), the default maximum value when the flag is not set.
Example with MaxRAM=3G:

```
$ java -XX:MaxRAM=3G -XX:+PrintFlagsFinal -version | grep -i MaxRAM
uint64_t MaxRAM = 3221225472 {pd product} {command line}
uintx MaxRAMFraction = 4 {product} {default}
double MaxRAMPercentage = 25.000000 {product} {default}
openjdk version "11.0.12" 2021-07-20 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, sharing)
```
Example with MaxRAM=300M:

```
$ java -XX:MaxRAM=300M -XX:+PrintFlagsFinal -version | grep -i MaxRAM
uint64_t MaxRAM = 314572800 {pd product} {command line}
uintx MaxRAMFraction = 4 {product} {default}
double MaxRAMPercentage = 25.000000 {product} {default}
openjdk version "11.0.12" 2021-07-20 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
```
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.