Discussion:
Increasing maximum number of open files
Harold Johanssen
2019-11-15 15:30:35 UTC
How does one increase the maximum number of open files, for all
processes, and without rebooting? An online search reveals a number of
solutions, but they all seem to be based on systemd.
Rich
2019-11-15 16:08:07 UTC
Post by Harold Johanssen
How does one increase the maximum number of open files, for all
processes, and without rebooting? An online search reveals a number of
solutions, but they all seem to be based on systemd.
Documentation:

/usr/src/linux/Documentation/sysctl/fs.txt

look for 'file-max' within that file.

the proc file that allows tuning is:

/proc/sys/fs/file-max

Note this changes the kernel's limits; there may be other limits
imposed by libc, pam (if used), and/or your shell login setup.
Changing those (if they exist) is 'way' too distro-specific for me to
give any hints (in this case, duckduckgo/google are your friends).
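
For example, something along these lines ought to work as root (the
500000 below is just an arbitrary illustration, not a recommendation):

  cat /proc/sys/fs/file-max            # show the current kernel-wide limit
  echo 500000 > /proc/sys/fs/file-max  # raise it immediately, no reboot

  # or equivalently:
  sysctl -w fs.file-max=500000

To keep the new value across reboots you can add fs.file-max=500000 to
/etc/sysctl.conf, assuming your rc scripts load that file at boot.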
Jens Stuckelberger
2019-11-15 17:35:47 UTC
Post by Rich
Post by Harold Johanssen
How does one increase the maximum number of open files, for all
processes, and without rebooting? An online search reveals a number of
solutions, but they all seem to be based on systemd.
/usr/src/linux/Documentation/sysctl/fs.txt
look for 'file-max' within that file.
/proc/sys/fs/file-max
Note this changes the kernel's limits, there may be other limits imposed
by libc, pam (if used), and/or your shell login setup. Changing those
(if they exist) are 'way' too distro specific for me to give any hints
(in this case, duckduckgo/google are your friends).
I asked in this group because I need to do this in Slackware.
There is plenty of information online on this issue for other
distributions, but not so much for Slackware.
Rich
2019-11-15 17:43:21 UTC
Post by Jens Stuckelberger
Post by Rich
Post by Harold Johanssen
How does one increase the maximum number of open files, for all
processes, and without rebooting? An online search reveals a number of
solutions, but they all seem to be based on systemd.
/usr/src/linux/Documentation/sysctl/fs.txt
look for 'file-max' within that file.
/proc/sys/fs/file-max
Note this changes the kernel's limits, there may be other limits imposed
by libc, pam (if used), and/or your shell login setup. Changing those
(if they exist) are 'way' too distro specific for me to give any hints
(in this case, duckduckgo/google are your friends).
I asked in this group because I need to do this in Slackware.
There is plenty of information online on this issue for other
distributions, but not so much for Slackware.
In that case, the proc setting is the kernel global.

But individual processes are limited to the values shown when you run
"ulimit -a" inside a bash shell.

If you always want a higher per-process number as the default, you'll
have to arrange to raise the hard limit in one of the rc files before
any user login occurs.

If you just want a higher per-process limit temporarily, then su to
root, adjust the ulimit hard value (man bash, search for 'ulimit'),
then launch the process that needs more open files from that shell
(su'ing to another user first if it needs to run as someone else).
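
A rough sketch of that, untested here and with an arbitrary number
('someuser' and 'your-program' are placeholders):

  su -                            # become root
  ulimit -Hn                      # show the current hard limit on open files
  ulimit -n 16384                 # raise soft and hard limits for this shell
  su someuser -c 'your-program'   # launch the process; it inherits the limits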
Harold Johanssen
2019-11-15 19:29:23 UTC
Post by Rich
Post by Jens Stuckelberger
I asked in this group because I need to do this in Slackware.
There is plenty of information online on this issue for other
distributions, but not so much for Slackware.
In that case, the proc setting is the kernel global.
But individual processes are limited to the value set when you run
"ulimit -a" inside a bash shell.
If you always want a higher number per process as a default, you'll have
to setup to raise the hard limit within one of the rc files before a
user login occurs.
If you just want a higher per process limit temporarily, then su to
root, adjust the ulimit hard value (man bash, search for 'ulimit'), then
launch the process that needs more open files from that shell (su'ing to
another user if it needs to run as another user first).
Thanks. Actually, I found something that meets my immediate
requirements better. The following command (issued as superuser) will
raise the file descriptor limits (hard and soft) for a running
process:

prlimit --nofile=16384:16384 --pid <process-id>
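
For anyone else reading, prlimit with no new values will just display
what a process currently has, which is handy for checking that the
change took:

prlimit --nofile --pid <process-id>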
Henrik Carlqvist
2019-11-21 07:09:58 UTC
Post by Harold Johanssen
The following command (issued as superuser) will raise the file
descriptor limits (hard and soft) for a running process:
prlimit --nofile=16384:16384 --pid <process-id>
Just out of curiosity: why does that process need more than the default
1024 simultaneously open files? Could it be that it has a resource leak,
not closing files once it is done with them?
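
An easy way to check is to watch how many descriptors the process is
actually holding over time (the pid below is a placeholder):

  ls /proc/<pid>/fd | wc -l
  lsof -p <pid>

If the count only ever grows and never comes back down, that would
point at a leak.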

regards Henrik
Harold Johanssen
2019-11-21 13:31:47 UTC
Post by Henrik Carlqvist
The following command (issued as superuser) will raise the file descriptor limits:
prlimit --nofile=16384:16384 --pid <process-id>
Just out of curiosity: Why does that process need more than default 1024
simultaneously open files? Could it be that it has a resource leak not
closing files it has opened once it is done with the files?
It's a process that spawns multiple threads, which in turn spawn
more threads that open and close the files. I am reasonably sure that
all the file descriptors previously opened are closed as they should
be.
Rich
2019-11-21 18:32:08 UTC
Post by Harold Johanssen
Post by Henrik Carlqvist
The following command (issued as superuser) will raise the file descriptor limits:
prlimit --nofile=16384:16384 --pid <process-id>
Just out of curiosity: Why does that process need more than default
1024 simultaneously open files? Could it be that it has a resource
leak not closing files it has opened once it is done with the files?
It's a process that spawns multiple threads, that in turn
spawn more threads, which will open and close the files. I am
reasonably sure that all the file descriptors previously opened are
closed as they should be.
The only time any of my own code has ever come anywhere close to the
standard per-process 1024 file descriptor limit was when I
inadvertently created a descriptor leak (i.e., forgot to close things I
opened in a loop).

What kind of work is this process doing that it actually needs (and is
actually using, instead of leaking) more than 1024 simultaneous open
files?
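
For the record, the pattern that bit me looked roughly like the sketch
below (a hypothetical example, not the actual code); the whole fix was
the close() at the bottom of the loop:

  /* sketch: process every file named on the command line */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      for (int i = 1; i < argc; i++) {
          int fd = open(argv[i], O_RDONLY);
          if (fd < 0) {
              perror(argv[i]);
              continue;
          }
          /* ... read/handle the file here ... */
          close(fd);   /* forget this and descriptors pile up fast */
      }
      return 0;
  }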
Harold Johanssen
2019-11-21 21:16:31 UTC
Post by Rich
Post by Harold Johanssen
Post by Henrik Carlqvist
The following command (issued as superuser) will raise the file descriptor limits:
prlimit --nofile=16384:16384 --pid <process-id>
Just out of curiosity: Why does that process need more than default
1024 simultaneously open files? Could it be that it has a resource
leak not closing files it has opened once it is done with the files?
It's a process that spawns multiple threads, that in turn
spawn more threads, which will open and close the files. I am
reasonably sure that all the file descriptors previously opened are
closed as they should be.
The only time any of my own code has even come anywhere close to the
standard per process 1024 file descriptor limit was when I inadvertently
created a descriptor leak (i.e., forgot to close things I opened in a
loop).
What kind of work is this process doing that it actually needs (and is
actually using, instead of leaking) more than 1024 simultaneous open
files?
In essence, the threads talk to a database, every thread opening
a connection to it for this purpose. Each active connection consumes a
descriptor. For the most part, 1024 of them are enough, but occasionally
the database has a difficult time keeping up, which is when more than
1024 file descriptors are required.
Henrik Carlqvist
2019-11-21 21:33:10 UTC
Post by Harold Johanssen
In essence, the threads talk to a database, every thread opening
a connection to it for this purpose. Each active connection consumes a
descriptor. For the most part, 1024 of them are enough, but occasionally
the database has a difficult time keeping up, which is when more than 1024
file descriptors are required.
The strength and weakness of threads (pthread_create) compared with heavy
processes (fork) is that threads share variables, including file
descriptors. So I assume that you are not opening 1024 different files,
but that each thread opens its "own" file descriptor to the same file, and
the problem is that all these "own" file descriptors are shared with all
the other threads, which really only care about their "own" descriptor?
Maybe you should consider using heavy processes with fork instead of
threads? Heavy processes have their own variables and file descriptors.
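
A minimal sketch of what I mean (a plain file stands in for the
database connection here, and the numbers are arbitrary):

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>
  #include <unistd.h>

  #define NWORKERS 4

  int main(void)
  {
      for (int i = 0; i < NWORKERS; i++) {
          pid_t pid = fork();
          if (pid < 0) {
              perror("fork");
              exit(1);
          }
          if (pid == 0) {                 /* child: has its own fd table */
              int fd = open("/etc/slackware-version", O_RDONLY);
              if (fd >= 0) {
                  /* ... do this worker's I/O here ... */
                  close(fd);              /* affects only this child */
              }
              _exit(0);
          }
      }
      while (wait(NULL) > 0)              /* parent reaps the children */
          ;
      return 0;
  }

Each child's open files then count against that child's own limit
rather than against one shared per-process table as with pthreads.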

regards Henrik
Chris Vine
2019-11-21 23:15:13 UTC
On Thu, 21 Nov 2019 21:33:10 -0000 (UTC)
Post by Henrik Carlqvist
Post by Harold Johanssen
In essence, the threads talk to a database, every thread opening
a connection to it for this purpose. Each active connection consumes a
descriptor. For the most part, 1024 of them are enough, but occasionally
the database has a difficult time keeping up, which is when more than 1024
file descriptors are required.
The strength and weakness of threads (pthread_create) compared with heavy
processes (fork) is that threads share variables including file
descriptors. So I assume that you are not opening 1024 different files
but that each thread opens its "own" file descriptor to the same file and
the problem is that all these "own" file descriptors are shared with all
other threads which really only care about their "own" file descriptor?
Maybe you should consider using heavy processes with fork instead of
threads? Heavy processes have their own variables and file descriptors.
fork() doesn't close descriptors, which is one reason why pipes/fifos
are useful for inter-process communication. By default exec() doesn't
close descriptors either, although of course that can be changed by
setting FD_CLOEXEC.
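
For example (sketch only, file names arbitrary), a descriptor can be
marked close-on-exec either when it is opened or after the fact:

  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      /* marked close-on-exec at open() time ... */
      int fd1 = open("/etc/fstab", O_RDONLY | O_CLOEXEC);

      /* ... or later, via fcntl() */
      int fd2 = open("/etc/fstab", O_RDONLY);
      if (fd2 >= 0)
          fcntl(fd2, F_SETFD, FD_CLOEXEC);

      /* both are now closed automatically across an exec*(), but are
         still inherited across a plain fork() */
      if (fd1 >= 0) close(fd1);
      if (fd2 >= 0) close(fd2);
      return 0;
  }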
Henrik Carlqvist
2019-11-24 19:38:53 UTC
Post by Chris Vine
On Thu, 21 Nov 2019 21:33:10 -0000 (UTC)
Post by Henrik Carlqvist
Heavy processes have their own variables and file
descriptors.
fork() doesn't close descriptors, which is one reason why pipes/fifos
are useful for inter-process communication. By default exec() doesn't
close descriptors either, although of course that can be changed by
setting FD_CLOEXEC.
Yes, it is true that fork doesn't close file descriptors, but at least it
is possible for a heavy thread to close a file descriptor it won't need
without affecting other threads which might depend upon that file
descriptor.

regards Henrik
Chris Vine
2019-11-25 13:52:44 UTC
On Sun, 24 Nov 2019 19:38:53 -0000 (UTC)
Post by Henrik Carlqvist
Post by Chris Vine
On Thu, 21 Nov 2019 21:33:10 -0000 (UTC)
Post by Henrik Carlqvist
Heavy processes have their own variables and file
descriptors.
fork() doesn't close descriptors, which is one reason why pipes/fifos
are useful for inter-process communication. By default exec() doesn't
close descriptors either, although of course that can be changed by
setting FD_CLOEXEC.
Yes, it is true that fork doesn't close file descriptors, but at least it
is possible for a heavy thread to close a file descriptor it won't need
without affecting other threads which might depend upon that file
descriptor.
I agree so far as concerns particular descriptors. However, if you
retain any worker threads at all in your program, there are real
problems if you want to walk the kernel's fd tree to close all or most
of the pre-existing descriptors after a fork (which, if the OP is
running out of descriptors, he is probably going to want to do), because
opendir/closedir and getdents are not async-signal-safe and therefore
cannot safely be called after a fork in a multi-threaded program.
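
One workaround (a sketch, not a recommendation for the OP's program) is
to find the descriptor limit before forking and then have the child call
nothing but close(), which is on the async-signal-safe list:

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      /* sysconf() is not async-signal-safe, so call it before fork() */
      long maxfd = sysconf(_SC_OPEN_MAX);
      if (maxfd < 0)
          maxfd = 1024;                  /* conservative fallback */

      pid_t pid = fork();
      if (pid < 0) {
          perror("fork");
          return 1;
      }
      if (pid == 0) {                    /* child */
          for (long fd = 3; fd < maxfd; fd++)
              close(fd);                 /* EBADF here is harmless */
          /* ... exec*() or child-only work goes here ... */
          _exit(0);
      }
      return 0;
  }

It is brute force, but it avoids opendir()/getdents() in the child
entirely.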

But this illustrates another problem with using fork without a
following exec, namely that after the fork and before any exec you
cannot call any a-sync-signal safe functions in a program running more
than one thread. You therefore cannot use fork as an effective
substitute for threads unless you replace ALL your threads with
processes. The further problem with that is that many libraries, such
as GIO/GTK, and I think also Qt, use threads for their own purposes so
that threads cannot be avoided with respect to them. You would have to
be very careful about what libraries you link to in a program which uses
fork instead of threads.
Chris Vine
2019-11-25 13:55:30 UTC
On Mon, 25 Nov 2019 13:52:44 +0000
Post by Chris Vine
On Sun, 24 Nov 2019 19:38:53 -0000 (UTC)
Post by Henrik Carlqvist
Post by Chris Vine
On Thu, 21 Nov 2019 21:33:10 -0000 (UTC)
Post by Henrik Carlqvist
Heavy processes have their own variables and file
descriptors.
fork() doesn't close descriptors, which is one reason why pipes/fifos
are useful for inter-process communication. By default exec() doesn't
close descriptors either, although of course that can be changed by
setting FD_CLOEXEC.
Yes, it is true that fork doesn't close file descriptors, but at least it
is possible for a heavy thread to close a file descriptor it won't need
without affecting other threads which might depend upon that file
descriptor.
[snip]
Post by Chris Vine
But this illustrates another problem with using fork without a
following exec, namely that after the fork and before any exec you
cannot call any a-sync-signal safe functions in a program running more
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

cannot call any non async-signal-safe functions ...
Henrik Carlqvist
2019-11-26 19:48:31 UTC
Post by Henrik Carlqvist
Yes, it is true that fork doesn't close file descriptors, but at least
it is possible for a heavy thread to close a file descriptor it won't
need without affecting other threads which might depend upon that file
descriptor.
You therefore cannot use fork as an effective substitute for threads
unless you replace ALL your threads with processes.
Yes, that is true; I should have been clearer that by "other
threads" I meant "other heavy threads" or, as you would more often say,
"other processes".
The further problem
with that is that many libraries, such as GIO/GTK, and I think also Qt,
use threads for their own purposes so that threads cannot be avoided
with respect to them. You would have to be very careful about what
libraries you link to in a program which uses fork instead of threads.
Yes, this is also true. On the other hand, there are also libraries
whose calls are not thread-safe when used from lightweight threads.

regards Henrik

Harold Johanssen
2019-11-22 18:46:45 UTC
Post by Henrik Carlqvist
Post by Harold Johanssen
In essence, the threads talk to a database, every thread opening
a connection to it for this purpose. Each active connection consumes a
descriptor. For the most part, 1024 of them are enough, but
occasionally the database has a difficult time keeping up, which is
when more than 1024 file descriptors are required.
The strength and weakness of threads (pthread_create) compared with
heavy processes (fork) is that threads share variables including file
descriptors. So I assume that you are not opening 1024 different files
but that each thread opens its "own" file descriptor to the same file
and the problem is that all these "own" file descriptors are shared with
all other threads which really only care about their "own" file
descriptor? Maybe you should consider using heavy processes with fork
instead of threads? Heavy processes have their own variables and file
descriptors.
Those threads access various chunks of data; if one were to use
processes instead, that would force a major code rewrite.