Java application causes native memory leak
Environment
- Red Hat JBoss Enterprise Application Platform (EAP)
- JBoss Enterprise Web Server (EWS) Tomcat
- Red Hat Enterprise Linux (RHEL) Tomcat
- Red Hat AMQ
Issue
- At the OS level, we see a slow memory leak over time.
- Heap utilization stays at 4GB, yet physical memory use rises from 4GB to 10GB over the course of a week.
- The JVM size is growing large enough that it is killed by OOM killer.
- The Java process size is much larger than expected.
Resolution
See the resolution for the relevant root cause.
Root Cause
- Memory allocated by Java libraries, C/C++ libraries, and any application native methods.
- Java runs out of memory due to Solr memory leak
- Java native memory leak deploying thousands of JSP files every few minutes
- Reopening the same file (such as an EAR) over and over through application code
- Virtual file system (VFS) native memory leak in JBoss EAP 5
- HotSpot leaking memory in long-running requests
- Memory leak in ovirt-engine after upgrading to RHEL 6.5 and java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
- Java application using PKCS11 provider leaks memory from Java_sun_security_pkcs11_wrapper_PKCS11_C_1DeriveKey
- Java process size grows and leaks memory after updating OpenJDK on RHEL 7
- JBoss native memory leak with mutual authentication on RHEL6 with OpenSSL
- Slow native memory leak over time of OpenJDK when G1 GC is used
- Application uses a lot of memory under RHEL 6
- java.util.zip.(Deflater|Inflater|ZipFile) native memory leak
- PKI Tomcat java process continues to grow over time
- DirectByteBuffer
- Java Application "java.lang.OutOfMemoryError: Direct buffer memory"
- Unsafe.allocateMemory , Unsafe.reallocateMemory
- Netty 4.x high memory usage
- OutOfDirectMemoryError raised from Netty
- "java.lang.OutOfMemoryError: Direct buffer memory" exception is thrown by Netty component
- Memory leak in invoker.c invoker_completeInvokeRequest() during JDI operations
- Leak in io.undertow.server.DefaultByteBufferPool on EAP 7.4.16 and 7.4.17
Diagnostic Steps
Quantify how large the Java process is growing:
- Capture OS-level data to show the Java process memory increase over time. For example:
  top -b -d 3600 -H >> top.out
  This will capture top output every hour in a file called top.out.
- On Linux, collect a series of pmap outputs over time. Run the attached pmap_linux.sh script, passing in the JBoss PID as an argument. For example:
  sh ./pmap_linux.sh JBOSS_PID
  The script captures pmap output every hour in a file called pmap.out. As the process memory grows beyond the expected JVM process size, the leak becomes more prominent in the output. Be sure to test the script before use to confirm it runs properly in your environment.
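The attached pmap_linux.sh is not reproduced here; the following is a minimal sketch of what such a collection loop might look like (the function name, defaults, and output format are assumptions, not the actual script):

```shell
# Hypothetical sketch of an hourly pmap collection loop; the attached
# pmap_linux.sh is authoritative and may differ.
capture_pmap() {
    pid="$1"
    interval="${2:-3600}"   # seconds between captures (default: hourly)
    count="${3:-0}"         # 0 = keep capturing until the process exits
    i=0
    while kill -0 "$pid" 2>/dev/null; do
        { echo "=== $(date) pid=$pid ==="; pmap -x "$pid"; } >> pmap.out
        i=$((i + 1))
        if [ "$count" -gt 0 ] && [ "$i" -ge "$count" ]; then
            break
        fi
        sleep "$interval"
    done
}
# Example: capture_pmap JBOSS_PID
```

Comparing the totals across snapshots shows whether the growth is in the heap region, thread stacks, or anonymous native mappings.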
Verify the Java process size is much larger than expected:
- Review JVM options in boot.log, run.conf, or wherever they are defined. Determine the expected maximum process size based on the JVM options. See Why does the JVM consume more memory than the amount given to -Xmx?.
- Review garbage collection logging from a time when the issue happens and compare the maximum allocation and usage to the Java process memory consumption reported by the OS. Is there a large discrepancy?
- Capture Native Memory Tracking (NMT) data. See How to identify JVM native memory segment allocations.
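NMT must be enabled at JVM startup before jcmd can report on it. A sketch of the typical workflow (JBOSS_PID is a placeholder for the actual process ID):

```shell
# 1. Start the JVM with NMT enabled (adds roughly 5-10% overhead):
#      java -XX:NativeMemoryTracking=summary ...
# 2. Record a baseline once the process reaches steady state:
jcmd JBOSS_PID VM.native_memory baseline
# 3. Later, when the process has grown, diff against the baseline to see
#    which JVM subsystem (Thread, Class, Internal, ...) is growing:
jcmd JBOSS_PID VM.native_memory summary.diff
```

Note that NMT only tracks allocations made by the JVM itself; memory leaked by JNI libraries or direct malloc calls in native code will not appear in its output.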
Heap analysis:
- Get a heap dump when the process size is very large and look for known objects that allocate native memory (e.g. java.nio.ByteBuffer with direct access):
- How do I create a Java heap dump?
- How do I analyze a Java heap dump?
- Check the maximum amount of direct memory that can be allocated by inspecting sun.misc.VM.directMemory.
- View the java.nio.DirectByteBuffer objects using native memory:
  SELECT d.capacity FROM java.nio.DirectByteBuffer d WHERE (d.cleaner != null)
- Determine the amount of native memory that can be reclaimed when the Cleaner queue runs:
  SELECT c.capacity FROM OBJECTS ( SELECT OBJECTS referent FROM INSTANCEOF sun.misc.Cleaner ) c WHERE (c.capacity != null)
- Export the results as a txt file (top menu item)
- Open in LibreOffice and sum the capacity column
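As an alternative to summing in LibreOffice, the exported capacities can be totaled on the command line. A sketch, assuming the export (capacities.txt is a hypothetical name) contains one numeric capacity per line:

```shell
# Sum exported DirectByteBuffer capacities; non-numeric lines (headers,
# blank lines) are skipped. The file name and one-value-per-line format
# are assumptions about the heap-analysis tool's txt export.
sum_capacities() {
    awk '/^[0-9]+$/ { total += $1 } END { printf "%d\n", total + 0 }' "$1"
}
# Example: sum_capacities capacities.txt
```

Comparing the total against sun.misc.VM.directMemory shows how close the application is to the direct-memory cap.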
Understand the environment to see if it matches any known issues:
- Is it a new deployment, or an application that has been running fine for a long time and now has an issue?
- If it has been running a long time in production, what has changed recently (e.g. OS, JDK, application)?
- Can the issue be reproduced predictably and consistently?
- Can functionality be removed until the issue is not reproduced to narrow down the issue?
- Review environment information in boot.log.
- Is Java running in a virtual environment in a guest OS, or on a physical operating system?
- Are there any Java JNI native components being used (e.g. the APR native connectors)? Can the issue be reproduced with the JNI components removed?
- Set the -verbose:jni flag for more details on JNI calls that could be related to the issue.
- Are a lot of JSPs being redeployed many times over?
- Are deployments in a compressed format? If so, try unzipped deployments instead.
- Gather thread dumps when the process size is large and check for an unusual number of threads consuming space with their thread stacks.
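On Linux, the live thread count can be checked from /proc alongside the thread dumps. A small sketch (each thread reserves roughly the -Xss stack size, around 1MB by default on 64-bit JVMs, so the count times -Xss approximates the native stack footprint):

```shell
# Count the threads of a process via /proc (Linux-specific).
thread_count() {
    ls "/proc/$1/task" | wc -l
}
# Example, pairing a dump with a count for the same moment in time:
#   jstack JBOSS_PID > threads.txt
#   thread_count JBOSS_PID
```

A count that keeps climbing usually points at an unbounded thread pool or threads that are created but never joined.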
If the issue is reproducible in a test environment, it may be possible to use valgrind on Red Hat Enterprise Linux (RHEL) with OpenJDK:
- Subscribe to the RHEL Server Debuginfo channel.
- Install the relevant debuginfo packages:
  yum install glibc-debuginfo java-1.6.0-openjdk-debuginfo
- See the Red Hat Magazine article on valgrind: http://www.redhat.com/magazine/015jan06/features/valgrind/
- See How to identify JVM native memory segment allocations for information on using valgrind.
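A sketch of a typical valgrind invocation against a standalone reproducer (the jar name and heap size are placeholders; expect the JVM to run many times slower under valgrind):

```shell
# Run the reproducer under valgrind; reproducer.jar is a placeholder.
valgrind --leak-check=full --track-origins=yes --log-file=valgrind.out \
    java -Xmx512m -jar reproducer.jar
# valgrind.out attributes leaked blocks to native stack traces, which the
# installed debuginfo packages resolve to readable symbols.
```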
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.