Default Red Hat OpenJDK container JVM settings


Red Hat's OpenJDK container images ship with a few default JVM options, such as ParallelGC as the garbage collector, but also other settings such as MinHeapFreeRatio and MaxHeapFreeRatio, listed below. These defaults affect plain OpenJDK applications as well as Spring Boot applications.

Default OpenJDK image settings

Setting | Purpose | Red Hat OpenJDK default value
--------|---------|------------------------------
MaxRAMPercentage | The max heap size, as a percentage of the container memory limit | 80.0 (80% of the container memory limit becomes heap)
+UseParallelGC | Sets ParallelGC as the GC collector (overriding the default of the OpenJDK version) | ParallelGC
MinHeapFreeRatio | Minimum percentage of free space in a generation: if free space falls below this value, the generation expands to maintain this threshold, up to the generation's maximum allowed size | 10
MaxHeapFreeRatio | Maximum percentage of free space in a generation: if free space exceeds this value, the generation shrinks to maintain this threshold, down to the generation's minimum allowed size | 20
GCTimeRatio | Target ratio of time spent in the application vs. in GC | 4 (four times more time in the application than in GC)
AdaptiveSizePolicyWeight | The weight given to previous GC times when calculating the current goals | 90 (rely heavily on previous runs for the current goal calculation)
+ExitOnOutOfMemoryError | Exit the JVM upon an OutOfMemoryError | enabled
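As a rough sketch of what MaxRAMPercentage=80 means in practice (the 1 GiB container memory limit below is an assumed value for illustration, not something the image sets):

```python
# Sketch: effective max heap under the Red Hat default MaxRAMPercentage,
# assuming a hypothetical container memory limit of 1 GiB.
MAX_RAM_PERCENTAGE = 80.0  # Red Hat OpenJDK image default

def max_heap_bytes(container_limit_bytes: int,
                   percentage: float = MAX_RAM_PERCENTAGE) -> int:
    """Heap ceiling the JVM derives from the container memory limit."""
    return int(container_limit_bytes * percentage / 100)

limit = 1 * 1024**3          # 1 GiB container memory limit (assumed)
heap = max_heap_bytes(limit)
print(heap // 1024**2)       # 819 -> roughly 819 MiB of heap
```

The remaining ~20% is left for Metaspace, thread stacks, and other native memory, which is why the percentage may need lowering for memory-hungry non-heap workloads.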

Corollary

There are a few direct consequences of using those default settings. For example, ExitOnOutOfMemoryError makes the JVM exit on an OutOfMemoryError rather than creating a heap dump. Likewise, MaxRAMPercentage=80 sets the heap to 80% of the container memory limit.

There is another, less obvious consequence: MinHeapFreeRatio=10 and MaxHeapFreeRatio=20 put pressure on the JVM to keep a small footprint, which can result in a huge young generation and a small old generation. This is not optimal in some cases.

This side effect of the small-footprint settings may be exacerbated by a small CPU allocation (container CPU limit), which leaves little CPU time for the GC; as a consequence, the GC misses its throughput goal and grows the Young Generation.
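The throughput goal mentioned above follows from GCTimeRatio: the target fraction of time spent in GC is 1 / (1 + GCTimeRatio). A quick sketch of that arithmetic:

```python
# Sketch: GC time budget implied by GCTimeRatio (Red Hat image default: 4).
def gc_time_target(gc_time_ratio: int) -> float:
    """Fraction of total run time the GC is allowed to consume."""
    return 1 / (1 + gc_time_ratio)

print(gc_time_target(4))   # 0.2  -> up to 20% of time in GC
print(gc_time_target(99))  # 0.01 -> upstream OpenJDK default, 1% in GC
```

So the Red Hat default tolerates far more GC time than upstream; when even that looser goal is missed under a tight CPU limit, the adaptive size policy responds by enlarging the young generation.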

Example:

Heap:
 PSYoungGen      total 475648K, used 201915K [0x00000000e0000000, 0x0000000100000000, 0x0000000100000000) <------------ 475MB YOUNG
  eden space 427008K, 47% used [0x00000000e0000000,0x00000000ec52edc8,0x00000000fa100000)
  from space 48640K, 0% used [0x00000000fd080000,0x00000000fd080000,0x0000000100000000)
  to   space 48640K, 0% used [0x00000000fa100000,0x00000000fa100000,0x00000000fd080000)
 ParOldGen       total 224768K, used 179499K [0x00000000a0000000, 0x00000000adb80000, 0x00000000e0000000) <------------ 225MB OLD
  object space 224768K, 79% used [0x00000000a0000000,0x00000000aaf4ada0,0x00000000adb80000)
 Metaspace       used 175369K, capacity 185306K, committed 185944K, reserved 667648K
  class space    used 19196K, capacity 23105K, committed 23168K, reserved 503808K

In the example above, the Young Generation is about 475 MB whereas the Old Generation is about 225 MB, roughly half the size.
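Plugging the generation totals from the heap output above into a quick check (values copied verbatim from the example):

```python
# Sketch: young/old split computed from the example heap output above.
young_kb = 475648  # PSYoungGen total
old_kb = 224768    # ParOldGen total
heap_kb = young_kb + old_kb

print(round(young_kb / heap_kb * 100))  # 68 -> ~68% of the heap is young gen
print(round(old_kb / young_kb, 2))      # 0.47 -> old gen is under half the young gen
```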

Alternatives

First, in terms of the max heap setting, depending on the use case (such as Data Grid or AMQ), 80% of the container memory for the heap might be too high, so MaxRAMPercentage can be reduced.
Second, regarding the size difference between the old and young generations, there are a few alternatives for the behavior above, including removing the default settings. Alternatives include changing MinHeapFreeRatio and MaxHeapFreeRatio, or replacing ParallelGC with G1GC (or Shenandoah, for non-generational workloads).

Finally, note that increasing the CPU limit (or the limit-to-request ratio) can have two main impacts:

  • it might prevent CPU throttling by the OCP node kernel
  • a higher limit may allow the GC to behave better in terms of dynamic generation sizing
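One reason more CPU helps: ParallelGC sizes its worker-thread pool from the visible CPU count. A sketch of the HotSpot heuristic, approximately all CPUs up to 8, then 5/8 of the CPUs beyond 8 (this is an approximation for illustration, not an exact reimplementation of the JVM code):

```python
# Approximate sketch of HotSpot's default ParallelGCThreads heuristic.
def parallel_gc_threads(ncpus: int) -> int:
    if ncpus <= 8:
        return ncpus
    return 8 + (ncpus - 8) * 5 // 8

print(parallel_gc_threads(2))   # 2  -> a small container CPU limit means few GC workers
print(parallel_gc_threads(16))  # 13
```

With a container CPU limit of 1 or 2, the GC has very few worker threads and very little CPU budget, making the throughput goal harder to reach.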

Details on Min and Max heap free ratio
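A minimal sketch of how the two ratios drive resizing, assuming a deliberately simplified model (the real HotSpot ergonomics are more involved):

```python
# Simplified model of Min/MaxHeapFreeRatio-driven resizing
# (Red Hat image defaults: min_free=10, max_free=20).
def resize_decision(committed_kb: int, used_kb: int,
                    min_free: int = 10, max_free: int = 20) -> str:
    free_pct = (committed_kb - used_kb) / committed_kb * 100
    if free_pct < min_free:
        return "expand"   # too little free space -> grow the generation
    if free_pct > max_free:
        return "shrink"   # too much free space -> give memory back
    return "keep"

print(resize_decision(1000, 950))  # expand (only 5% free)
print(resize_decision(1000, 700))  # shrink (30% free)
print(resize_decision(1000, 850))  # keep (15% free)
```

The narrow 10-20% band means the JVM constantly trims committed memory back toward the live set, which is the small-footprint pressure described earlier.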

Note that this behavior (the young generation being much larger than the old generation) may mask other issues or factors, such as the application not distributing the load across all pods, or incoming requests carrying different data. Therefore, enabling GC logs with adequate Xlog settings can be useful for further investigation of the current behavior.
