High Per-Thread Memory Usage by Netty / XNIO in AMQ Broker (Embedded or Standalone)


Environment

Red Hat AMQ 7.0 or later
JBoss EAP 7.0 or later

Issue

AMQ broker client threads maintain large (16 MB) buffer caches at the thread-local level, resulting in high io.netty.buffer.PoolChunk overhead. This is by design, but it can result in large amounts of memory usage in environments with many threads.

Resolution

The following JVM / System flags are available to tune thread-level buffering behavior:

  1. io.netty.allocator.maxOrder: use 8 instead of the default (11); this reduces the heap arena chunk size from 16 MB to 2 MB
    (chunk size = io.netty.allocator.pageSize << io.netty.allocator.maxOrder, with the default io.netty.allocator.pageSize = 8192 bytes)
  2. io.netty.allocator.numHeapArenas: use 1 or 2 instead of the default value (i.e., availableProcessors * 2)
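The chunk-size arithmetic above can be verified with a short sketch. The constants mirror Netty's documented defaults; no Netty dependency is required:

```java
public class NettyChunkSize {
    public static void main(String[] args) {
        int pageSize = 8192;       // default io.netty.allocator.pageSize
        int defaultMaxOrder = 11;  // default io.netty.allocator.maxOrder
        int tunedMaxOrder = 8;     // suggested tuned value

        // chunk size = pageSize << maxOrder
        System.out.println(pageSize << defaultMaxOrder); // 16777216 bytes = 16 MB
        System.out.println(pageSize << tunedMaxOrder);   //  2097152 bytes =  2 MB
    }
}
```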

Reducing the default values of these two system properties should reduce memory consumption for thread-local buffering.
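As an illustration, the properties can be passed as standard -D JVM arguments. The JAVA_OPTS variable name and the configuration file are assumptions and will vary by installation (for example, artemis.profile for a standalone broker or standalone.conf for EAP):

```shell
# Append to the broker's JVM options; the exact file and variable
# name depend on your installation.
JAVA_OPTS="$JAVA_OPTS -Dio.netty.allocator.maxOrder=8 -Dio.netty.allocator.numHeapArenas=2"
```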

Root Cause

By default, Netty / XNIO maintains a pool of large (16 MB) reusable buffer chunks. In environments with high concurrency or large thread pools, this can lead to unexpectedly high memory consumption without any obvious memory leak.
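For an embedded broker, the same properties can also be set programmatically, provided this happens before any Netty allocator class is initialized (the properties are read in a static initializer). A minimal sketch, using the tuned values suggested above:

```java
public class NettyAllocatorTuning {
    public static void main(String[] args) {
        // Must run before Netty's pooled allocator is first loaded,
        // since the system properties are read only once at class init.
        System.setProperty("io.netty.allocator.maxOrder", "8");
        System.setProperty("io.netty.allocator.numHeapArenas", "2");

        System.out.println(System.getProperty("io.netty.allocator.maxOrder"));      // 8
        System.out.println(System.getProperty("io.netty.allocator.numHeapArenas")); // 2
    }
}
```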

Category

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.