High memory requirements on Satellite or Capsule when sending SCAP reports
Environment
- Red Hat Satellite/Capsule 6
Issue
- Satellite or Capsule encounters high memory demands or even triggers OOM killer activity.
- Around that time, clients were sending OpenSCAP reports in bulk to the Satellite/Capsule.
Resolution
Ideally, deploy the SCAP client using the Ansible role (or the Puppet module) in Satellite, as it already adds some splay (a randomized delay) to the client schedule.
If that is not possible, try to split the bulk activity of generating and sending SCAP reports into multiple chunks, or spread the activity over a longer period of time. Particular options to achieve that:
- duplicate Satellite's OpenSCAP Policy, change its schedule, and split the systems across both Policies
- move some Hosts to a different Capsule that does not handle OpenSCAP reports
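The splay that the Ansible role adds can also be approximated by hand on clients that run the SCAP client from cron. A minimal sketch of the idea; the 1800-second window and the foreman_scap_client invocation are illustrative assumptions, adjust them to your policy:

```shell
# Hypothetical cron wrapper: sleep a random amount (0-1799 s) before the
# client generates and uploads its report, so uploads from many clients
# do not all hit the smart-proxy in the same minute.
splay=$(( RANDOM % 1800 ))
echo "sleeping for ${splay}s before sending the SCAP report"
# sleep "$splay"          # uncomment on a real client
# foreman_scap_client 1   # "1" is a hypothetical policy id
```

With the sleep spread uniformly over 30 minutes, a burst of simultaneous uploads turns into a trickle the proxy can absorb.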
For more KB articles/solutions related to Red Hat Satellite 6.x OpenSCAP issues, please refer to the Red Hat Satellite Consolidated Troubleshooting Article for Red Hat Satellite 6.x OpenSCAP Issues.
Root Cause
Each OpenSCAP client report is processed by a dedicated new process spawned from smart-proxy. Memory consumption of such a process is often 0.5 GB or more, and processing takes tens of seconds to complete. Therefore, concurrent bulk processing of SCAP reports can easily require tens of gigabytes of memory.
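A back-of-envelope estimate makes the scale concrete; both numbers below are assumptions for illustration, not measured values from any particular site:

```shell
# Rough peak-memory estimate for concurrent report processing.
# Both inputs are assumptions -- adjust to your environment.
reports=50            # clients uploading within the same window
mem_per_proc_mb=500   # ~0.5 GB per report-processing smart-proxy child
total_gib=$(( reports * mem_per_proc_mb / 1024 ))
echo "estimated peak demand: ~${total_gib} GiB"
```

Even a modest fleet of 50 hosts reporting in the same window can thus demand roughly 24 GiB on top of the proxy's baseline usage.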
Diagnostic Steps
- OOM killer killing a process, either the (cryptically named) diagnostic_con* or smart-proxy directly:

  Dec 6 01:13:45 nz11rsat001v kernel: Out of memory: Killed process 1003621 (diagnostic_con*) total-vm:5291728kB, anon-rss:3496688kB, file-rss:128kB, shmem-rss:0kB, UID:993 pgtables:9452kB oom_score_adj:0
- /var/log/foreman-proxy/proxy.log has tens to hundreds of requests like the below within a short time:

  2025-12-13T01:05:01 7ce63ffd [I] Started POST /compliance/arf/2
  2025-12-13T01:05:01 adb37c5c [I] Started POST /compliance/arf/2
  2025-12-13T01:05:04 50f0d47e [I] Started POST /compliance/arf/2
  2025-12-13T01:05:08 5d45232d [I] Started POST /compliance/arf/2
  2025-12-13T01:05:09 eff6c219 [I] Started POST /compliance/arf/1
  2025-12-13T01:05:11 c68d16df [I] Started POST /compliance/arf/1
  ..
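To quantify such a burst, the ARF upload requests can be counted per minute. A sketch using standard tools; the sample file below just mimics the log format shown above, so point the pipeline at the real /var/log/foreman-proxy/proxy.log on your Satellite/Capsule:

```shell
# Count "Started POST /compliance/arf" requests per minute to spot
# upload bursts. The sample data reuses the log format shown above.
cat > /tmp/proxy.log.sample <<'EOF'
2025-12-13T01:05:01 7ce63ffd [I] Started POST /compliance/arf/2
2025-12-13T01:05:01 adb37c5c [I] Started POST /compliance/arf/2
2025-12-13T01:05:04 50f0d47e [I] Started POST /compliance/arf/2
2025-12-13T01:06:09 eff6c219 [I] Started POST /compliance/arf/1
EOF
# Characters 1-16 of each line are the timestamp truncated to the minute.
grep 'Started POST /compliance/arf' /tmp/proxy.log.sample \
  | cut -c1-16 | sort | uniq -c
```

A minute with tens to hundreds of uploads in the output corresponds to a memory-demand spike on the proxy.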
- ps output (captured at the "busy times") shows tens to hundreds of smart-proxy processes (usually there is just one):

  foreman+ 1172374 0.0 1.9 875096 636568 ? - Dec09 3:39 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
  foreman+ 1399229 3.6 10.4 3771708 3412372 ? - 01:06 0:08 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
  foreman+ 1399275 4.1 8.5 3004536 2784488 ? - 01:07 0:09 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
  foreman+ 1399320 2.7 6.6 2590872 2177096 ? - 01:07 0:06 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
  ..
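The combined memory footprint of those processes can be summed from the RSS column (column 6 of `ps aux` output, in KiB). A sketch; the sample file reuses the ps lines shown above, while on a live system you would feed it `ps aux | grep '[s]mart-proxy'` instead:

```shell
# Sum resident memory (RSS, column 6, KiB) of all smart-proxy processes.
# Sample data reuses the ps output shown above.
cat > /tmp/ps.sample <<'EOF'
foreman+ 1172374 0.0 1.9 875096 636568 ? - Dec09 3:39 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
foreman+ 1399229 3.6 10.4 3771708 3412372 ? - 01:06 0:08 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
foreman+ 1399275 4.1 8.5 3004536 2784488 ? - 01:07 0:09 /usr/bin/ruby /usr/share/foreman-proxy/bin/smart-proxy
EOF
awk '{sum += $6} END {printf "%.1f GiB\n", sum/1024/1024}' /tmp/ps.sample
```

Even these three sample processes already hold about 6.5 GiB of resident memory between them.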
- pstree shows the processes were spawned from the main smart-proxy and that they sometimes call bunzip2:

  |-smart-proxy(1172374)-+-diagnostic_con*(1399229)
  |                      |-diagnostic_con*(1399275)
  |                      |-diagnostic_con*(1399320)
  |                      |-diagnostic_con*(1399340)
  |                      |-diagnostic_con*(1399447)-+-bunzip2(1399448)
  |                      |                          |-{diagnostic_con*}(1399449)
  |                      |                          |-{diagnostic_con*}(1399450)
  |                      |                          `-{diagnostic_con*}(1399451)
  |                      |-diagnostic_con*(1400388)
  |                      |-diagnostic_con*(1400486)
  |                      |-diagnostic_con*(1400860)-+-bunzip2(1400861)
  |                      |                          |-{diagnostic_con*}(1400863)
  |                      |                          |-{diagnostic_con*}(1400864)
  |                      |                          `-{diagnostic_con*}(1400865)
  ..
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.