Interpreting VM.info file in OpenJDK/OracleJDK

Solution Verified - Updated

Environment

  • OpenJDK 11, 17
  • Oracle JDK 8, 11, 17

Issue

  • What is the VM.info?
  • How to interpret it?

Resolution

To get the VM.info output, run jcmd against the target Java process:

$ jcmd $PID VM.info

Here $PID is the Java process ID, which varies from container to container; in a straight OpenJDK container deployment the Java process is usually PID 1.
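In a pod where the PID is not known in advance, it can be extracted from a process listing. A minimal sketch, using a hypothetical ps-style sample in place of the real listing:

```shell
# Hypothetical process listing; inside a real container, pipe the output
# of `ps -e` or `jps -l` here instead of this heredoc-style sample.
ps_sample='  PID CMD
    1 java -jar app.jar
   42 sh'

# Pick the PID of the line containing "java"
java_pid=$(printf '%s\n' "$ps_sample" | awk '/java/ {print $1}')
echo "$java_pid"
```

The resulting PID is what gets passed to jcmd, for example jcmd "$java_pid" VM.info.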

For RSS and Virtual memory

To check the Resident Set Size (RSS) and virtual memory usage:

$ jcmd NativeExample VM.info | grep Resident
Resident Set Size: 27012K (peak: 27012K) (anon: 10808K, file: 16204K, shmem: 0K) <-------------------- Resident
$ jcmd NativeExample VM.info | grep Virtual
Virtual space:
Virtual Size: 6938188K (peak: 15530072K) <------ Virtual
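The anon, file, and shmem components add up to the reported RSS. A quick sanity check using the sample values above:

```shell
# Component values (in KB) taken from the sample output above
anon=10808; file=16204; shmem=0

rss=$((anon + file + shmem))
echo "${rss}K"    # matches the reported Resident Set Size: 27012K
```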

Native tracking

When Native Memory Tracking (NMT) is enabled, the VM.info output also includes the NMT summary:

Native Memory Tracking:
Total: reserved=5569059KB, committed=396795KB
-                 Java Heap (reserved=3977216KB, committed=251904KB)
                            (mmap: reserved=3977216KB, committed=251904KB) 
 
-                     Class (reserved=1056888KB, committed=4984KB)
                            (classes #556)
                            (  instance classes #474, array classes #82)
                            (malloc=120KB #605) 
                            (mmap: reserved=1056768KB, committed=4864KB) 
                            (  Metadata:   )
                            (    reserved=8192KB, committed=4352KB)
                            (    used=238KB)
                            (    free=4114KB)
                            (    waste=0KB =0.00%)
                            (  Class space:)
                            (    reserved=1048576KB, committed=512KB)
                            (    used=17KB)
                            (    free=495KB)
                            (    waste=0KB =0.00%)
 
-                    Thread (reserved=17525KB, committed=861KB)
                            (thread #17)
                            (stack: reserved=17448KB, committed=784KB)
                            (malloc=59KB #104) 
                            (arena=18KB #32)
 
-                      Code (reserved=247725KB, committed=7585KB)
                            (malloc=37KB #429) 
                            (mmap: reserved=247688KB, committed=7548KB) 
 
-                        GC (reserved=199184KB, committed=60940KB)
                            (malloc=17820KB #2014) 
                            (mmap: reserved=181364KB, committed=43120KB) 
 
-                  Compiler (reserved=135KB, committed=135KB)
                            (malloc=3KB #43) 
                            (arena=133KB #5)
 
-                  Internal (reserved=559KB, committed=559KB)
                            (malloc=527KB #952) 
                            (mmap: reserved=32KB, committed=32KB) 
 
-                     Other (reserved=51200KB, committed=51200KB) <------------------------------------ 51200 KB == 50 MiB of native allocations
                            (malloc=51200KB #1) 
 
-                    Symbol (reserved=1041KB, committed=1041KB)
                            (malloc=681KB #64) 
                            (arena=360KB #1)
-    Native Memory Tracking (reserved=301KB, committed=301KB)
                            (malloc=168KB #2381) 
                            (tracking overhead=133KB)

See the solution JVM Native Memory Tracking for native tracking details.
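The reserved/committed totals can be pulled out of the NMT summary with standard text tools. A sketch against the sample Total line above (in practice, pipe jcmd $PID VM.info into the pipeline instead):

```shell
# Sample line from the NMT output above
nmt_total='Total: reserved=5569059KB, committed=396795KB'

# Extract the committed value in KB, then convert to MB
committed=$(printf '%s\n' "$nmt_total" | sed -E 's/.*committed=([0-9]+)KB.*/\1/')
echo "committed: $((committed / 1024)) MB"   # committed: 387 MB
```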

GC Heap History (20 events):

Event: 406983.623 GC heap before
{Heap before GC invocations=3328 (full 0):
 garbage-first heap   total 16777216K, used 12453732K [0x0000000400000000, 0x0000000800000000)
  region size 16384K, 614 young (10059776K), 1 survivors (16384K)
 Metaspace       used 82979K, capacity 86205K, committed 86476K, reserved 165888K
  class space    used 9564K, capacity 10843K, committed 10956K, reserved 90112K
}
Event: 406983.631 GC heap after
{Heap after GC invocations=3329 (full 0):
 garbage-first heap   total 16777216K, used 2426808K [0x0000000400000000, 0x0000000800000000)
  region size 16384K, 1 young (16384K), 1 survivors (16384K)
 Metaspace       used 82979K, capacity 86205K, committed 86476K, reserved 165888K
  class space    used 9564K, capacity 10843K, committed 10956K, reserved 90112K
}
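The before/after pair makes it easy to compute how much a collection reclaimed; using the used values from the two events above:

```shell
before=12453732   # used before GC, in KB (from the first event)
after=2426808     # used after GC, in KB (from the second event)

reclaimed_mb=$(( (before - after) / 1024 ))
echo "reclaimed: ${reclaimed_mb} MB"   # about 9.6 GB freed by this GC
```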

Dynamic libraries:

This section lists the dynamic libraries mapped into the process, including third-party native libraries.

Cgroups information is at the bottom

At the bottom of the output one can see the cgroups section. Cgroups are the kernel feature (used by a container-aware JDK) that enables the container runtime to limit, and the JVM to detect, the memory and CPU available to each container:

container (cgroup) information:
container_type: cgroupv1
cpu_cpuset_cpus: 0-127
cpu_memory_nodes: 0-7
active_processor_count: 1
cpu_quota: -1
cpu_period: 100000
cpu_shares: 51
memory_limit_in_bytes: 536870912
memory_and_swap_limit_in_bytes: 536870912
memory_soft_limit_in_bytes: -1
memory_usage_in_bytes: 125108224
memory_max_usage_in_bytes: 126193664
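memory_limit_in_bytes is reported in raw bytes here; converting the value above confirms a 512 MiB container limit:

```shell
limit_bytes=536870912   # memory_limit_in_bytes from the output above

limit_mib=$(( limit_bytes / 1024 / 1024 ))
echo "${limit_mib} MiB"   # 512 MiB
```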

Example with a pod limited to 2.2 CPUs (containers.resources.limits.cpu: 2.2) and 12 GiB of memory (containers.resources.limits.memory: 12Gi):

container (cgroup) information:
container_type: cgroupv1
cpu_cpuset_cpus: 0-3
cpu_memory_nodes: 0
active_processor_count: 3 <---------------- processor count (2.2 rounded up)
cpu_quota: 220000 <------------------------ 2.2 cpus
cpu_period: 100000
cpu_shares: no shares
memory_limit_in_bytes: 12582912 k <-------- 12 GiB limit
memory_and_swap_limit_in_bytes: 12582912 k
memory_soft_limit_in_bytes: unlimited
memory_usage_in_bytes: 758948 k
memory_max_usage_in_bytes: 759060 k
kernel_memory_usage_in_bytes: 10624 k
kernel_memory_max_usage_in_bytes: unlimited
kernel_memory_limit_in_bytes: 10628 k
maximum number of tasks: 98938
current number of tasks: 98
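active_processor_count is derived from cpu_quota / cpu_period, rounded up to a whole processor, which is why a 2.2-CPU limit shows up as 3. A sketch of the calculation:

```shell
cpu_quota=220000    # from the output above: 2.2 CPUs
cpu_period=100000

# Ceiling division: fractional CPUs are rounded up to whole processors
procs=$(( (cpu_quota + cpu_period - 1) / cpu_period ))
echo "$procs"   # 3
```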

Another example:

### Memory hard limit, swap (memory included), and soft limit:
$ podman run --memory-reservation=700m --memory=500m --memory-swap=1000m --rm -it localhost/openjdk:app
memory_limit_in_bytes: 512000 k <------------------- memory 500mb
memory_and_swap_limit_in_bytes: 1024000 k <--------- swap and memory 1000mb
memory_soft_limit_in_bytes: 716800 k <-------------- soft limit 700mb
memory_usage_in_bytes: 120324 k
memory_max_usage_in_bytes: not supported
memory_swap_current_in_bytes: unlimited
memory_swap_max_limit_in_bytes: 512000 k
...
...
$ podman stats
ID            NAME              CPU %       MEM USAGE / LIMIT  MEM %       NET IO       BLOCK IO      PIDS        CPU TIME    AVG CPU %
e6d5ec05dd7e  vigorous_wescoff  0.20%       97.26MB / 524.3MB  18.55%      330B / 430B  0B / 720.9kB  42          6.045894s   0.69%
Setting                          Meaning              podman setting
memory_limit_in_bytes            memory hard limit    set via --memory
memory_and_swap_limit_in_bytes   memory plus swap     set via --memory-swap
memory_soft_limit_in_bytes       soft limit           set via --memory-reservation
memory_usage_in_bytes            current usage        N/A (current JVM usage)
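The mapping from the podman flags to the reported "k" values is a plain MB-to-KB conversion, for example with --memory=500m:

```shell
mem_mb=500   # value passed to podman's --memory flag

limit_k=$(( mem_mb * 1024 ))
echo "${limit_k} k"   # 512000 k, as reported in memory_limit_in_bytes
```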

KVM details

KVM virtualization detected
Steal ticks since vm start: 194
Steal ticks percentage since vm start:  0.000

The lines above show that KVM virtualization was detected and report CPU steal time: the time during which the virtualization host or hypervisor did not schedule the virtual CPU, in other words time "stolen" from the guest. A tick is a very small unit of CPU time (its exact length depends on the kernel's clock frequency), so a low count such as the 194 ticks above is negligible.
What counts as concerning depends on the workload: 100k+ ticks can already be significant when many high-priority or real-time processes are running, and several million or more is a lot in most cases.

JDK 17

On JDK 17, VM.info additionally reports Metaspace details (the Metaspace allocator is based on a buddy algorithm since JDK 16's Elastic Metaspace work) and the GC Precious Log:

### Metaspace
MaxMetaspaceSize: 96.00 MB <----------------------------- Max Metaspace size
CompressedClassSpaceSize: 80.00 MB <--------------------- compressed class space size == CCSS
Initial GC threshold: 32.00 MB <------------------------- initial GC threshold
Current GC threshold: 65.25 MB  <------------------------ current GC threshold
CDS: on <------------------------------------------------ Class Data Sharing 
MetaspaceReclaimPolicy: balanced <----------------------- Metaspace Reclaim Policy: balanced
 - commit_granule_bytes: 65536.
 - commit_granule_words: 8192.
 - virtual_space_node_default_size: 1048576.
 - enlarge_chunks_in_place: 1.
 - new_chunks_are_fully_committed: 0.
 - uncommit_free_chunks: 1.
 - use_allocation_guard: 0.
 - handle_deallocations: 1.
###
### GC details
GC Precious Log:
 CPUs: 4 total, 2 available <----------------------------- cgroup cpu total and available
 Memory: 3072M <------------------------------------------ cgroups memory size
 Large Page Support: Disabled <--------------------------- Large Page support
 NUMA Support: Disabled <--------------------------------- Numa
 Compressed Oops: Enabled (32-bit)
 Heap Region Size: 1M <----------------------------------- heap region size
 Heap Min Capacity: 8M
 Heap Initial Capacity: 48M <----------------------------- Initial capacity
 Heap Max Capacity: 1536M <------------------------------- Max Heap size
 Pre-touch: Disabled <------------------------------------ Pre-touch
 Parallel Workers: 2 <------------------------------------ Parallel GC workers
 Concurrent Workers: 1  <--------------------------------- Concurrent workers
 Concurrent Refinement Workers: 2
 Periodic GC: Disabled <---------------------------------- Periodic GC for G1GC introduced on JDK 17
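The Heap Max Capacity can be cross-checked against the cgroup memory size from the same log: 1536M out of 3072M is 50%, consistent with a -XX:MaxRAMPercentage of 50 (an assumption for this example; the JDK default is 25):

```shell
heap_max_mb=1536    # Heap Max Capacity from the log above
container_mb=3072   # Memory (cgroup limit) from the log above

pct=$(( heap_max_mb * 100 / container_mb ))
echo "${pct}%"   # 50%
```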

Native trimming:

The VM.info output shows whether periodic native heap trimming is enabled:

Periodic native trim disabled
---------------  S Y S T E M  ---------------

To enable it, use -XX:TrimNativeHeapInterval=<ms> (an experimental option). Example via a custom resource:

  container:
    extraJvmOpts: '-XX:+UnlockExperimentalVMOptions  -XX:TrimNativeHeapInterval=100'

This will result in:

Periodic native trim enabled (interval: 100 ms)
Trims performed: 1866, current suspend count: 0, stopped: 0
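The trim statistics can be scraped from the VM.info text; a sketch against the sample lines above (in practice, pipe jcmd $PID VM.info into the pipeline instead):

```shell
# Sample VM.info fragment from above
vm_info='Periodic native trim enabled (interval: 100 ms)
Trims performed: 1866, current suspend count: 0, stopped: 0'

# Extract the number of trims performed so far
trims=$(printf '%s\n' "$vm_info" | sed -nE 's/Trims performed: ([0-9]+),.*/\1/p')
echo "$trims"   # 1866
```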

Root Cause

VM.info is useful for inspecting RSS/virtual memory usage, cgroups, threads, GC history, and dynamic shared library usage, including third-party libraries. It contains more information than the Native Memory Tracking output alone, so RSS usage can be tracked with it. See the solution JVM Native Memory Tracking for native tracking details. For OpenShift, see Java's memory consumption inside an OpenShift 4 container.

Note the Process Memory section is not present on OpenJDK 11.0.9 given it was introduced on OpenJDK 11.0.12.

Value                      Meaning
Resident Set Size (RSS)    total process size actually consuming system memory
Virtual Size (VSZ)         includes memory that is mapped but not yet allocated

Note that /proc/meminfo is not container aware.

Diagnostic Steps

  1. To get VM.info, run jcmd PID VM.info.
  2. To inspect native allocations, check the RSS values in VM.info.
  3. To use it inside a pod, the image must include the jcmd tool.
  4. Enable NMT (summary or detail) to get native memory details from the pod's JVM.

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.