Java Application "java.lang.OutOfMemoryError: Direct buffer memory"
Environment
- Java
- OpenJDK
- Oracle JDK
Issue
java.lang.OutOfMemoryError: Direct buffer memory in the JBoss server log
- One of the slave machines gets disconnected from the Domain Controller and starts throwing this error:
[Host Controller] 16:36:41,136 ERROR [org.xnio.listener] (management I/O-2) XNIO001007: A channel event listener threw an exception: java.lang.OutOfMemoryError: Direct buffer memory
Resolution
- Ensure there is ample memory available on the system, enough to cover JVM memory usage in excess of your established heap/perm size. NIO direct buffers are allocated from native memory outside the JVM's established heap/perm generations. If this memory space outside of heap/perm is exhausted, this OOME will be thrown.
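To make the distinction concrete, here is a minimal sketch (the class name DirectDemo is illustrative, not from the original article) using the standard BufferPoolMXBean to show that ByteBuffer.allocateDirect draws from the JVM's "direct" native pool rather than the Java heap:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectDemo {
    // Returns the bytes currently used by the JVM's "direct" buffer pool,
    // i.e. native memory backing DirectByteBuffers, or -1 if not found.
    static long directUsed() {
        for (BufferPoolMXBean p : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(p.getName())) {
                return p.getMemoryUsed();
            }
        }
        return -1L;
    }

    public static void main(String[] args) {
        long before = directUsed();
        // 16 MB of native memory; only the small DirectByteBuffer wrapper
        // object lives on the Java heap.
        ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024);
        long after = directUsed();
        System.out.println("direct pool grew by " + (after - before) + " bytes");
        // Exhausting this native area throws "OutOfMemoryError: Direct buffer
        // memory" even when the heap itself has plenty of free space.
        buf.clear();
    }
}
```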
- To prevent this OOME, consider setting -XX:MaxDirectMemorySize (e.g. -XX:MaxDirectMemorySize=4294967296). This option specifies the maximum total size of java.nio (New I/O package) direct buffer allocations. The default value is zero, which means the maximum direct memory is chosen by the JVM and can be determined by inspecting sun.misc.VM.directMemory in a heap dump.
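To confirm the effective setting on a running JVM, one option (a sketch; the class name MaxDirectCheck is ours) is to query the HotSpot diagnostic MXBean, which reports "0" when no explicit -XX:MaxDirectMemorySize was passed:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class MaxDirectCheck {
    // Reads the live value of the MaxDirectMemorySize VM option.
    // "0" means no explicit limit was set on the command line; the JVM then
    // picks its own cap (roughly the maximum heap size on HotSpot).
    static String maxDirectMemorySize() {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        return bean.getVMOption("MaxDirectMemorySize").getValue();
    }

    public static void main(String[] args) {
        System.out.println("MaxDirectMemorySize=" + maxDirectMemorySize());
    }
}
```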
- As a workaround, consider disabling direct buffer usage in the JBoss IO subsystem:

  <subsystem xmlns="urn:jboss:domain:io:3.0">
      <worker name="default"/>
      <buffer-pool name="default" direct-buffers="false"/>
  </subsystem>
If the issue is related to a lack of regular, full GCs:
- Force regular, manual GCs, for example using jcmd: $ jcmd <pid> GC.run
- Change to a garbage collector that is more aggressive about managing the JVM footprint size (done with full collections), such as the parallel collector. This is primarily a container (e.g. OpenShift) use case, but it would work well here too. You could use the JVM Options Configuration Tool, select "OpenShift" and "Parallel", and set a minimum heap below the observed retention (e.g. -Xms32m).
- Decrease the heap size to trigger more frequent GCs.
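The effect of a full GC on direct memory can be sketched as follows (the class name is illustrative; note that System.gc() is only advisory, so the reclaim is not guaranteed on every run):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class GcReclaimDemo {
    // Bytes currently used by the JVM's "direct" buffer pool, or -1 if absent.
    static long directUsed() {
        for (BufferPoolMXBean p : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(p.getName())) {
                return p.getMemoryUsed();
            }
        }
        return -1L;
    }

    public static void main(String[] args) throws InterruptedException {
        ByteBuffer buf = ByteBuffer.allocateDirect(32 * 1024 * 1024);
        System.out.println("after allocation: " + directUsed() + " bytes in direct pool");
        buf = null;        // the buffer is now unreachable, but its 32 MB of
                           // native memory is still held until a GC runs
        System.gc();       // a full GC lets the Cleaner release the native memory
        Thread.sleep(200); // give the reference-handling thread time to run
        System.out.println("after GC: " + directUsed() + " bytes in direct pool");
    }
}
```

This is the same mechanism the manual jcmd GC.run invocation above relies on: without a collection, the native memory behind unreachable DirectByteBuffers is never returned.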
Root Cause
- DirectByteBuffer native memory leak caused by remoting issue
- OutOfMemoryError Direct buffer memory in EAP 6.0.1 HornetQ
- A-MQ raises "java.lang.OutOfMemoryError: Direct buffer memory" on broker startup
- Buffer leak when handling "Expect:100-continue" requests causes "OutOfMemoryError: Direct buffer memory"
- HornetQ throwing an OutOfMemoryException
- HotSpot leaking memory in long-running requests
- Large DirectByteBuffer overhead in Util$BufferCache
- ActiveMQ 6.3 runs out of direct buffer memory under load
- JBoss Controller reaches 'OutOfMemoryError: Direct buffer memory' after many connection retries
- DirectByteBuffer native memory leak caused by remoting issue XNIO-374
- DirectByteBuffer leak caused by SSL engines not closing in XNIO
- EAP 7 hits Direct buffer OOME with large response headers on HTTP2
- CVE-2021-3690: Undertow buffer leak on incoming websocket PONG message may lead to DoS
- High Per-Thread Memory Usage by Netty / XNIO in AMQ Broker (Embedded or Standalone)
- Java heap and direct buffers are consumed by thousands of remoting connections created by client EJB calls
- JBoss - Leak in io.undertow.server.DefaultByteBufferPool on EAP 7.4.16+
Diagnostic Steps
- Check the heap to see if java.nio.Bits.totalCapacity equals the -XX:MaxDirectMemorySize=N setting.
- To view the java.nio.DirectByteBuffer objects using native memory, run the following OQL query against a heap dump:

  SELECT d.capacity FROM java.nio.DirectByteBuffer d WHERE (d.cleaner != null)

- Test disabling direct buffer usage in the JBoss IO subsystem:

  <subsystem xmlns="urn:jboss:domain:io:3.0">
      <worker name="default"/>
      <buffer-pool name="default" direct-buffers="false"/>
  </subsystem>

  Note that this will increase Java heap usage.
- Check GC logging leading up to the issue to see if there are regular, full GCs.
- The following sequence is required to reclaim DirectByteBuffer memory:
  - The DirectByteBuffer becomes phantom-reachable.
  - A garbage collection is performed (in a separate thread); the DirectByteBuffer Java object is collected and an entry is added to the ReferenceQueue.
  - The Cleaner thread reaches this entry and runs the registered clean-up action (in this case a java.nio.DirectByteBuffer.Deallocator object), which finally frees the native memory.
Therefore, if the heap usage is very low and collections are too infrequent, DirectByteBuffer memory will not be reclaimed.
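The sequence above can be sketched with the public java.lang.ref.Cleaner API (Java 9+), which mirrors the internal Deallocator mechanism: the registered clean-up action runs only after the object becomes phantom-reachable and a GC has occurred (FakeDirectBuffer is our stand-in, not a real JDK class):

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanerSketch {
    static final Cleaner CLEANER = Cleaner.create();
    static final AtomicBoolean freed = new AtomicBoolean(false);

    // Stand-in for DirectByteBuffer: registers a clean-up action analogous to
    // java.nio.DirectByteBuffer.Deallocator. The action must not capture the
    // object itself, or it would never become phantom-reachable.
    static class FakeDirectBuffer {
        FakeDirectBuffer() {
            CLEANER.register(this, () -> freed.set(true)); // "frees native memory"
        }
    }

    // Allocates an immediately-unreachable buffer and nudges the GC until the
    // Cleaner thread has run the clean-up action (or we give up).
    static boolean awaitCleanup() {
        new FakeDirectBuffer(); // no reference kept: phantom-reachable at once
        for (int i = 0; i < 100 && !freed.get(); i++) {
            System.gc(); // step 2: GC collects the object, enqueues the entry
            try {
                Thread.sleep(10); // step 3: Cleaner thread processes the entry
            } catch (InterruptedException e) {
                break;
            }
        }
        return freed.get();
    }

    public static void main(String[] args) {
        System.out.println("clean-up action ran: " + awaitCleanup());
    }
}
```

If collections never happen, the loop above never observes the clean-up running, which is exactly why low heap usage with infrequent GCs can leak direct memory.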
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.