Why does the JVM consume more memory than the amount given to -Xmx?


Environment

  • Red Hat JBoss Enterprise Application Platform (EAP)
  • Red Hat JBoss Enterprise Portal Platform (EPP)
  • Red Hat JBoss Enterprise Web Server (EWS)
  • Red Hat JBoss Enterprise SOA Platform (SOA-P)
  • OpenJDK

Issue

  • We currently have JBoss instances with the JVM configured with the -Xmx2048m option. Despite that, the output of the top command shows that each Java process is consuming over 3 GB of memory. Why is the Java instance consuming more than 2 GB of memory?
  • If the maximum heap size given to a JBoss instance is 2 GB, why is that JBoss instance consuming more than 2 GB of memory?
  • Why does our java process's memory usage (resident set size) exceed its configured max heap size?
  • How does JBoss memory consumption work?
  • Why is JBoss memory usage exceeding the MaxHeapSize?

Resolution


Disclaimer: Links contained herein to an external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.


Java 5, 6, & 7

The value of the -Xmx parameter controls the maximum size of the Java heap, which is not the only memory that the JVM allocates. In addition there is the Permanent Generation (replaced by Metaspace in Java 8), the CodeCache, the native C++ heap used by other JVM internals, space for the thread stacks (1 MB per thread by default on a 64-bit JVM in Java 1.6), direct byte buffers, GC overhead, and other things.

The memory used by a JVM process can be approximated as follows:

        JvmProcessMemory = JvmHeap + PermGen|Metaspace + CodeCache + (ThreadStackSize * Number-of-Threads) + DirectByteBuffers + Jvm-native-c++-heap
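Several of these components can be observed at runtime through the standard java.lang.management API. The following is a minimal sketch (the class and method names are hypothetical) that prints the JVM-visible pieces of the formula above:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Sketch: print the JVM-visible pieces of the process-memory formula
// (heap, non-heap such as PermGen/Metaspace and CodeCache, thread count,
// and direct byte buffers). Class name is illustrative only.
public class JvmMemoryBreakdown {

    static long heapMaxBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getMax(); // bounded by -Xmx
    }

    static long nonHeapUsedBytes() {
        // Includes Metaspace (or PermGen) and CodeCache, among others.
        return ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getUsed();
    }

    static int liveThreadCount() {
        // Each live thread also costs roughly ThreadStackSize of native memory.
        return ManagementFactory.getThreadMXBean().getThreadCount();
    }

    public static void main(String[] args) {
        System.out.printf("heap max:      %d MB%n", heapMaxBytes() / (1024 * 1024));
        System.out.printf("non-heap used: %d MB%n", nonHeapUsedBytes() / (1024 * 1024));
        System.out.printf("live threads:  %d%n", liveThreadCount());
        // Direct byte buffers show up in the "direct" buffer pool:
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffer pool %-8s %d bytes%n",
                    pool.getName(), pool.getMemoryUsed());
        }
    }
}
```

None of the non-heap figures is bounded by -Xmx, which is why top reports a resident set size larger than the heap alone.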

The following formula determines approximately how many threads can be created:

        Number of Threads = (MaxProcessMemory - JvmHeap - PermGen|Metaspace - CodeCache - Jvm-native-c++-heap - ReservedOsMemory - OtherPrograms) / ThreadStackSize
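As a quick worked example of that formula, the arithmetic can be sketched as follows (all sizes in MB are hypothetical, chosen only to illustrate the calculation):

```java
// Worked example of the thread-count formula above. All figures are
// hypothetical sizes in MB; class and method names are illustrative.
public class ThreadBudget {

    static long maxThreads(long maxProcessMemory, long jvmHeap, long metaspace,
                           long codeCache, long nativeHeap, long reservedOs,
                           long otherPrograms, long threadStackSize) {
        return (maxProcessMemory - jvmHeap - metaspace - codeCache
                - nativeHeap - reservedOs - otherPrograms) / threadStackSize;
    }

    public static void main(String[] args) {
        // 4096 MB process budget, 2048 MB heap (-Xmx2048m), 256 MB Metaspace,
        // 240 MB CodeCache, 128 MB native C++ heap, 256 MB reserved for the OS,
        // nothing else running, and 1 MB thread stacks (-Xss1m):
        System.out.println(maxThreads(4096, 2048, 256, 240, 128, 256, 0, 1));
        // prints 1168
    }
}
```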

The Java process consuming more memory than the amount given by -Xmx is perfectly normal.

To identify what in the JVM is allocating native memory segments, see How to identify JVM native memory segment allocations on OpenJDK / Oracle JDK.

Java 8, 11

Same computation as Java 5, 6 & 7, except that PermGen is replaced by Metaspace. While PermGen was a special heap space separate from the main memory heap, Metaspace is a separate native memory space that grows automatically by default.

For specific details on how Metaspace works, see How does the JVM divide the Metaspace in the memory? and for Metaspace tuning details see JDK 8 Metaspace tuning for JBoss EAP.
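Metaspace is visible at runtime as a non-heap memory pool. The sketch below (class name hypothetical) lists the JVM's memory pools; on OpenJDK 8+ the "Metaspace" pool reports a max of -1 (undefined) unless -XX:MaxMetaspaceSize is set, reflecting its grow-by-default behavior:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Sketch: list the JVM memory pools. On OpenJDK 8+ "Metaspace" appears as
// a NON_HEAP pool whose max is -1 (undefined) unless -XX:MaxMetaspaceSize
// is set, because it grows automatically in native memory.
public class MetaspacePools {

    static boolean hasPool(String name) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains(name)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s type=%s max=%d%n",
                    pool.getName(), pool.getType(), pool.getUsage().getMax());
        }
    }
}
```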

One could use the -XX:MaxRAM flag to limit the total memory usage by the JVM; the heap is then sized accordingly and defaults to approximately 25% of that total. More precisely, the heap follows this ratio:

        MaxHeapSize = MaxRAM * 1 / MaxRAMFraction

So, given that MaxRAMFraction defaults to 4, the JVM allocates up to 25% of the available RAM to each JVM running on the machine. MaxRAMFraction can be tuned as well, changing the ratio: the lower the MaxRAMFraction, the more memory is allocated to the heap.
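The ratio works out as in this small arithmetic sketch (class and method names are hypothetical):

```java
// Worked example of the MaxHeapSize ratio above; names are illustrative.
public class MaxRamHeap {

    static long maxHeapSize(long maxRam, long maxRamFraction) {
        return maxRam * 1 / maxRamFraction;
    }

    public static void main(String[] args) {
        // With 8192 MB of MaxRAM and the default MaxRAMFraction of 4,
        // the JVM sizes the heap at 2048 MB (25%):
        System.out.println(maxHeapSize(8192, 4)); // prints 2048
        // Lowering MaxRAMFraction to 2 allows a 4096 MB heap (50%):
        System.out.println(maxHeapSize(8192, 2)); // prints 4096
    }
}
```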

Finally, for containers there are other options as well, including -XX:InitialRAMPercentage and -XX:MaxRAMPercentage, available starting in JDK 8u191.

Root Cause

If the Java process size is larger than expected there may be a native memory leak. See: Java application causes native memory leak.

In OpenShift 4

For container usage in OCP 4, some metrics are scraped from the nodes by cAdvisor, which reads them from cgroup files, so a discrepancy between the data reported by the JVM and the cAdvisor figures is expected.
In other words, a large discrepancy is not caused by the JVM's Xmx vs MaxRAM settings. If the JVM were using more memory than the container limit, cgroups would enforce that limit and directly trigger an OOM kill, as for any other process inside a container.
However, there are other processes (and mechanisms) that can count towards the memory consumption; see the details and discussion in the article Java's memory consumption inside an OpenShift 4 container.


This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.