How do I work around the 1024 file descriptor limit imposed on services launched by xinetd?
Environment
- Red Hat Enterprise Linux (RHEL) 4
- Red Hat Enterprise Linux (RHEL) 5
- Red Hat Enterprise Linux (RHEL) 6 (not affected)
Issue
- xinetd limits file descriptors to 1024
- autosys job not picking up the open file limit from /etc/security/limits.conf
- xinetd doesn't honor the settings in /etc/security/limits.conf at system boot time
Resolution
Workaround 1
By setting the service's server to /sbin/runuser and passing the actual service to be started in server_args, the limit can be circumvented: set the desired limits in /etc/security/limits.conf, and runuser will apply them because it goes through PAM, which reads limits.conf via the pam_limits module.
An example is demonstrated below:
1. add a /etc/xinetd.d/test service:
service test
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        type            = UNLISTED
        port            = 12345
        server          = /sbin/runuser
        server_args     = root - /tmp/test
}
2. add the service script /tmp/test (with 0777 permissions, just for the test):
#!/bin/bash
echo "Hello world, I am: " `id -un`
ulimit -a
That is, the service prints the user it runs as and its limits, then
terminates the connection.
3. start xinetd in debug mode:
# xinetd -d
It prints debugging output and stays in the foreground rather than becoming a daemon.
4. in different terminal, query the service:
# nc localhost 12345
Hello world, I am: root
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1167
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
You can see here what limits the test script got: open files = 1024.
5. now, edit /etc/security/limits.conf and add or modify following lines:
root hard nofile 5555
root soft nofile 5555
6. return to step 3: restart xinetd and query the service again with nc localhost 12345 (it may take a retry or two for the change to take effect); you should now see the new limit:
...
open files (-n) 5555
Workaround 2
- Insert a ulimit statement in /etc/rc.d/init.d/xinetd setting the desired limit just before the daemon is launched, for example:
...
unset HOME MAIL USER USERNAME
ulimit -n 65536
daemon $prog -stayalive -pidfile /var/run/xinetd.pid "$EXTRAOPTIONS"
...
Then restart xinetd.
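This workaround relies on the fact that resource limits are inherited across fork()/exec(): changing the limit in the init script changes it for xinetd, and every service xinetd spawns inherits it in turn. A minimal sketch of that inheritance (the value 512 is illustrative; lowering a limit requires no privileges, so this runs as any user):

```shell
#!/bin/sh
# Resource limits are inherited by child processes -- the mechanism
# Workaround 2 relies on: the init script changes the limit before
# launching xinetd, and every spawned service inherits it.
ulimit -n 512       # change the open-files limit in this (parent) shell
sh -c 'ulimit -n'   # the child shell reports the inherited value: 512
```

Raising (rather than lowering) the limit above the hard limit, as the init script does with 65536, requires root, which is why the statement lives in the boot script rather than in a user shell.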
Root Cause
xinetd uses the select() system call, which is limited to 1024 descriptors by default. That ceiling comes from the FD_SETSIZE constant defined in the C library headers: an fd_set can only represent descriptors 0 through FD_SETSIZE-1. Short of rebuilding the xinetd binary there is no way to change this value in xinetd. And because xinetd is not PAM aware, limits.conf does not apply to services launched directly from xinetd.
The xinetd shipped with RHEL6 no longer makes use of the select() call, but has instead been changed to use poll() which is not affected by the FD_SETSIZE limitation.
Diagnostic Steps
What we are seeing is a consequence of the way that limits.conf is applied by PAM-aware applications. The reproducible test below shows how this relates to the behaviour seen with xinetd.
The file limits.conf is used by PAM enabled services via the pam_limits.so module, which is in turn called by application specific configuration files in /etc/pam.d/. xinetd is not itself a PAM service but the daemons that it starts generally are. For example, an xinetd managed service such as telnet would typically invoke /bin/login which is PAM aware:
[root@test xinetd.d]# ldd /usr/sbin/in.telnetd
libutil.so.1 => /lib64/libutil.so.1 (0x0000002a9566c000)
libc.so.6 => /lib64/tls/libc.so.6 (0x0000002a9576f000)
/lib64/ld-linux-x86-64.so.2 (0x0000002a95556000)
[root@test xinetd.d]# ldd /bin/login
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00000036cfd00000)
libpam.so.0 => /lib64/libpam.so.0 (0x00000036cbd00000)
libdl.so.2 => /lib64/libdl.so.2 (0x00000036cab00000)
libpam_misc.so.0 => /lib64/libpam_misc.so.0 (0x00000036cb300000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00000036cd700000)
libaudit.so.0 => /lib64/libaudit.so.0 (0x00000036cbf00000)
libc.so.6 => /lib64/tls/libc.so.6 (0x00000036ca600000)
/lib64/ld-linux-x86-64.so.2 (0x00000036ca400000)
The limits.conf file is referenced via the PAM module pam_limits.so:
session required /lib/security/$ISA/pam_limits.so
This can be tested by commenting out these lines in the PAM configuration files, invoking the application, and then trying to set a higher limit (as non-root):
[root@server1 root]# cat /etc/security/limits.conf
..
* soft nofile 4096
* hard nofile 65536
# End of file
---------------------------------------------
(from remote machine)
[XX@XX ~]$ telnet server1
..
[test1@server1 test1]$ ulimit -n
1024
[test1@server1 test1]$ ulimit -n 4096
-bash: ulimit: open files: cannot modify limit: Operation not permitted
---------------------------------------------
xinetd, however, is not PAM aware and will not read limits.conf. The number of descriptors xinetd itself can service will always be capped at 1024 by FD_SETSIZE, as noted earlier. However, the limits in effect when xinetd is launched are inherited by the children it spawns. In this test a ulimit statement is inserted into /etc/rc.d/init.d/xinetd just before the daemon is launched:
..
unset HOME MAIL USER USERNAME
ulimit -n 65536
daemon $prog -stayalive -pidfile /var/run/xinetd.pid "$EXTRAOPTIONS"
..
xinetd is then restarted and a telnet session is opened:
[XX@XX ~]$ telnet server1
..
[test1@server1 test1]$ ulimit -n
1024
[test1@server1 test1]$ ulimit -n 4096
[test1@server1 test1]$ ulimit -n
4096
Because the telnet daemon was spawned by an xinetd that had the higher limit set before launch, the higher limit was inherited, so the 1024 default could be raised.
In summary, different limit behaviour will occur depending on whether xinetd was started in a PAM-aware context or not (i.e. via init at boot versus a service restart from a login/ssh session). Inserting a ulimit statement in the xinetd init script is therefore a possible solution here.