A-MQ raises "java.lang.OutOfMemoryError: Direct buffer memory" on broker startup
Environment
- JBoss A-MQ
- 6.x
Issue
We configure the broker to use the mKahaDB persistence store, but during broker startup it raises the following error in the Karaf shell.
No further errors are written to the container log file.
We use a large number of KahaDB instances within our mKahaDB configuration (i.e. >50).
Exception in thread "ActiveMQ Data File Writer" Exception in thread "ActiveMQ Data File Writer" java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
at sun.nio.ch.IOUtil.write(IOUtil.java:58)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:205)
at org.apache.activemq.store.kahadb.disk.journal.Journal.doPreallocationZeros(Journal.java:270)
at org.apache.activemq.store.kahadb.disk.journal.Journal.preallocateEntireJournalDataFile(Journal.java:242)
at org.apache.activemq.store.kahadb.disk.journal.DataFileAppender.processQueue(DataFileAppender.java:320)
at org.apache.activemq.store.kahadb.disk.journal.DataFileAppender$1.run(DataFileAppender.java:193)
Subsequent broker restarts do not show this error, but corruption of various KahaDB instances is reported.
Resolution
Configuring a high number of KahaDB instances within one mKahaDB configuration is unusual.
The idea of mKahaDB is to partition different message use cases (e.g. high volume with fast consumption vs. low volume with slow consumption) into different KahaDB instances. 2-5 KahaDB instances within mKahaDB are typically sufficient.
Reducing the number of KahaDB instances reduces the memory requirement, as fewer byte buffers are needed at broker startup time. This is the recommended solution.
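A partitioned layout along these lines can be sketched as follows. This is an illustrative fragment only; the destination names, wildcard filters, and journal size are assumptions to be adapted to your broker configuration:

```xml
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- high volume, fast consumption destinations -->
      <filteredKahaDB queue="fast.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all instance for every other destination -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```

A small number of filteredKahaDB entries with wildcard filters like the above covers most partitioning needs without creating one KahaDB instance per destination.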
Alternatively, if reducing the number of KahaDB instances is not an option, a higher JVM maximum heap size is needed. The JVM's direct buffer memory limit defaults to the maximum heap size (-Xmx) unless -XX:MaxDirectMemorySize is set explicitly, so raising the heap also raises that limit.
E.g. an mKahaDB configuration with up to 100 KahaDB instances requires at least 5GB of JVM heap to start up correctly.
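Assuming a standard Karaf-based A-MQ installation, the memory settings can be adjusted in bin/setenv. The variable names below are those used by Apache Karaf's startup scripts; verify them against your installation before applying:

```shell
# bin/setenv: raise the JVM maximum heap. The direct buffer memory limit
# defaults to the -Xmx value unless -XX:MaxDirectMemorySize is set.
export JAVA_MAX_MEM=5g

# Optionally size direct memory explicitly instead of relying on the default:
export EXTRA_JAVA_OPTS="-XX:MaxDirectMemorySize=5g"
```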
Root Cause
On a fresh start of A-MQ (i.e. no mKahaDB data written to the file system yet), mKahaDB needs to initialize all the configured KahaDB instances. For every KahaDB instance it creates a large byte buffer in memory (the size of a journal data file) that is then written to the file system to preallocate the journal.
If many KahaDB instances are configured within the mKahaDB persistence adapter, many of these large byte buffers need to be created at the same time. In total they can exceed the JVM's direct buffer memory limit, which by default equals the maximum heap size.
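The mechanism can be illustrated with a minimal sketch of journal preallocation. This is not the broker's actual code, only an approximation of what the stack trace shows: a heap buffer of zeros is written through a FileChannel, and NIO copies it into a temporary direct buffer of the same size (the getTemporaryDirectBuffer call in the trace), which is cached per writer thread. With one "ActiveMQ Data File Writer" thread per KahaDB instance, these cached direct buffers accumulate:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PreallocateSketch {
    // KahaDB's default journal file length (journalMaxFileLength) is 32 MB.
    static final int JOURNAL_LEN = 32 * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        File journal = File.createTempFile("db-1", ".log");
        journal.deleteOnExit();

        // Heap buffer of zeros, as in Journal.doPreallocationZeros. Writing
        // it through a FileChannel makes NIO allocate a temporary *direct*
        // buffer of the same size and cache it for the calling thread.
        // With >50 KahaDB instances starting at once, that is >50 x 32 MB
        // of direct buffer memory, which can exceed the default limit.
        ByteBuffer zeros = ByteBuffer.allocate(JOURNAL_LEN);
        try (RandomAccessFile raf = new RandomAccessFile(journal, "rw");
             FileChannel ch = raf.getChannel()) {
            while (zeros.hasRemaining()) {
                ch.write(zeros);
            }
        }
        System.out.println("preallocated bytes: " + journal.length());
    }
}
```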
Diagnostic Steps
If there is not enough direct buffer memory available to hold all the byte buffers, the above error is raised and some KahaDB instances are left corrupted. Subsequent restarts of the broker are likely to raise the following error for at least some KahaDB instances during the broker startup and KahaDB recovery phase.
15:36:42,244 | INFO | pool-12-thread-1 | Journal | 184 - org.apache.activemq.activemq-osgi - 5.11.0.redhat-620133 |
ignoring zero length, partially initialised journal data file: db-1.log number = 1 , length = 0
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.