JVM crash on 64-bit Linux due to native memory allocation failure
Environment
- Red Hat Enterprise Linux (RHEL)
- OpenJDK
- Red Hat build of OpenJDK
- Oracle JDK
- 64-bit
Issue
- One of the following in the fatal error log:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 12345 bytes for char in /path/to/jdk7u21/hotspot/src/share/vm/gc_implementation/g1/sparsePRT.cpp
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1234567 bytes for Chunk::new
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# JVM is running with Unscaled Compressed Oops mode in which the Java heap is
# placed in the first 4GB address space. The Java Heap base address is the
# maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress
# to set the Java Heap base and to place the Java Heap above 4GB virtual address.
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2798), pid=4233, tid=0x00007f0077874700
#
- Moving to Red Hat Enterprise Linux (RHEL) 8, the JVM crashes with the following in the fatal error log:
Native memory allocation (mmap) failed to map 1234567 bytes for committing reserved memory
- I have a JBoss EAP server currently running on RHEL 8.1 that suddenly crashes once per day. It stops with no error; the server.log says nothing and just stops logging.
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2805), pid=580, tid=586
#
Resolution
The following are possible resolutions:
- Reduce the JVM memory needs (e.g. decrease the heap size).
- Eliminate/slim other processes outside the JVM competing for memory.
- Increase available physical memory.
- Explicitly set huge pages (do not rely on transparent hugepages). See: Do I need to configure hugepages for JBoss EAP on RHEL 6 even though transparent hugepages in RHEL 6 are fully automatic?.
- Tune the hypervisor to ensure memory swapping does not occur (e.g. reserve virtual machine memory).
- Increase or disable the applicable resource limit (rlimit).
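To see which limit may be in play, the effective rlimits can be inspected before changing anything. The following is a minimal sketch; the PID 4233 is just the example PID from the fatal error log above, and the limits.conf lines are illustrative values, not recommendations:

```shell
# Show the resource limits of the current shell.
ulimit -a

# Read the effective limits of a running JVM process; 4233 is a
# placeholder PID taken from the example fatal error log above.
# cat /proc/4233/limits

# Illustrative /etc/security/limits.conf entries that remove the data
# segment limit for all users (assumes pam_limits is in use):
# *    soft    data    unlimited
# *    hard    data    unlimited
```

Checking /proc/&lt;pid&gt;/limits rather than the shell's ulimit output is important, because a JVM started by an init system or container runtime may run with different limits than an interactive shell.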
Root Cause
There is not enough contiguous address space in native memory for the allocation or the allocation fails due to hitting a limit or system memory exhaustion.
In RHEL 8, a new data segment ulimit will need to be determined for each application. It should be the previous ulimit -d value plus the largest size of a mmap() that the application will utilize. See: Calls to mmap() returning allocation failure (ENOMEM) on RHEL 8.
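As a worked example of that sizing rule, the figures below are assumptions for illustration only: a previous ulimit -d of 2 GiB, and the 1073741824-byte (1 GiB) mmap() from the fatal error log above as the largest single mapping:

```shell
# Sketch: size the RHEL 8 data segment ulimit as the previous value
# plus the largest single mmap() the application performs.
prev_limit_kb=$((2 * 1024 * 1024))      # assumed previous 'ulimit -d' (KiB)
largest_mmap_kb=$((1073741824 / 1024))  # 1073741824-byte mmap from the log
new_limit_kb=$((prev_limit_kb + largest_mmap_kb))
echo "ulimit -d ${new_limit_kb}"        # prints "ulimit -d 3145728"
```

Note that ulimit -d takes a value in kilobytes, so byte sizes from the error log must be divided by 1024 before being added.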
Diagnostic Steps
Check whether memory exhaustion is due solely to the JVM, to an external process consuming significant amounts of memory, or to hypervisor memory ballooning:
- Check processes consuming memory:
ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
- Use the script attached to the following knowledgebase solution, which captures pmap output every hour, to see the Java and external process sizes over time: Java application causes native memory leak.
- Is the operating system being run on a virtual machine?
- Is there memory overcommitting? What is the vm.overcommit_memory parameter set to?
- Is there enough memory reserved for the virtual machine to prevent swapping?
- Check memory at the time of the crash (MemFree, MemAvailable) and operating-system/virtual-machine overcommit settings. If there is plenty of memory for the allocation, check to see if a limit is being hit.
- If on RHEL 8, check if a data limit is being set in /etc/security/limits.conf, and test with the data limit disabled:
# echo 1 > /sys/module/kernel/parameters/ignore_rlimit_data
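The memory checks above can be captured in a single snapshot. This is a minimal sketch that reads only standard /proc files; running it periodically (e.g. from cron) near the crash window helps correlate the failure with the system memory state:

```shell
# Snapshot the memory figures referenced above at a point in time.
date
grep -E '^(MemTotal|MemFree|MemAvailable|SwapFree|CommitLimit|Committed_AS):' /proc/meminfo
echo "vm.overcommit_memory=$(cat /proc/sys/vm/overcommit_memory)"
echo "vm.overcommit_ratio=$(cat /proc/sys/vm/overcommit_ratio)"
```

Comparing Committed_AS against CommitLimit over time shows whether the system is approaching its overcommit ceiling even while MemFree still looks healthy.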