Hello,
This seems to be an instance of the issue described in the upstream mailing lists: https://<email address hidden>/msg19232.html
In a debugging session (https://<email address hidden>/msg19252.html) we see that squid calls
fd_table = (fde *) xcalloc(Squid_MaxFD, sizeof(fde));
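To see why that call can fail, it helps to multiply the inherited descriptor limit by the per-entry size. The real sizeof(fde) varies by build and platform; the 192 bytes used below is purely an illustrative assumption:

```shell
# Back-of-the-envelope sizing for fd_table = xcalloc(Squid_MaxFD, sizeof(fde)).
# sizeof(fde) varies by build; 192 bytes is an illustrative assumption only.
entry=192
echo "limit 1073741816 -> $((1073741816 * entry / 1024 / 1024 / 1024)) GiB"  # -> 191 GiB
echo "limit 1048576    -> $((1048576 * entry / 1024 / 1024)) MiB"            # -> 192 MiB
```

Even with a conservative per-entry size, the Fedora host default asks for memory on the order of hundreds of gigabytes, while the usual container default stays in the hundreds of megabytes.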
And as per https://<email address hidden>/msg19266.html,
Squid_MaxFD is set based on ulimit. In the squid config file (/etc/squid/squid.conf), we have:
# TAG: max_filedescriptors
# Set the maximum number of filedescriptors, either below the
# operating system default or up to the hard limit.
#
# Remove from squid.conf to inherit the current ulimit soft
# limit setting.
#
# Note: Changing this requires a restart of Squid. Also
# not all I/O types supports large values (eg on Windows).
#Default:
# Use operating system soft limit set by ulimit.
On a Fedora host, when I read /proc/1/limits, I see
$ cat /proc/1/limits
Max open files 1073741816 1073741816 files
which matches the numbers in your error.
While in Ubuntu, I get
Max open files 1048576 1048576 files
Interestingly enough, on a Fedora host, when I check the same value within a __podman__ container, it is set to "1048576" as well. While I do not have docker installed on the Fedora host I have access to, I believe that if you check that value within your docker container, you will get 1073741816 as well.
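A quick way to compare is to read the "Max open files" row both on the host and inside a container. The container commands are shown commented out since they need the respective runtime installed, and "fedora" is just an example image:

```shell
# On the host: soft and hard "Max open files" for the current process.
grep "Max open files" /proc/self/limits

# Inside a container (uncomment if the runtime is available):
#podman run --rm fedora grep "Max open files" /proc/1/limits
#docker run --rm fedora grep "Max open files" /proc/1/limits
```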
Do note that these values are hardcoded. For instance: https://github.com/containers/podman/blob/v4.1.0/libpod/define/config.go#L92
Now, why were the values set to that specific number? At least in podman, this was done to match Docker defaults, as seen in https://github.com/containers/podman/pull/1355 and https://github.com/containers/buildah/commit/a2b018430df1e8e89b23cb5bfa49e4d3517e1c2d
However, later in this limit-changing timeline, Docker changed how these limits are set: https://github.com/moby/moby/commit/80039b4699e36ceb0eb81109cd1686aaa805c5ec
But Fedora set the daemon to use an even lower value: https://bugzilla.redhat.com/show_bug.cgi?id=1715254#c1 https://src.fedoraproject.org/rpms/moby-engine/blob/f35/f/docker.sysconfig#_7 https://src.fedoraproject.org/rpms/moby-engine/blob/rawhide/f/docker.sysconfig#_7
Are you using the (Fedora) distribution docker (moby-engine) package or are you using a different provider for your docker setup?
Moving on here, the following workaround alternatives exist:
1) Run the squid containers setting the maximum number of file descriptors to a lower value with "--ulimit nofile=1048576:1048576"
2) Since you are in Fedora, you could use podman (or moby-engine, if that is not the case already).
3) Configure your docker installation to use a lower number of file descriptors.
4) Edit the squid configuration file to use a specific value for max_filedescriptors, e.g., 1048576.
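Workarounds 1 and 4 can be sketched as below. The image name is a placeholder, and a temporary file stands in for /etc/squid/squid.conf so the sketch is safe to run anywhere:

```shell
# Workaround 1: cap the container's descriptor limit at Docker's usual
# default. "my-squid-image" is a placeholder, not a real image name.
#docker run --ulimit nofile=1048576:1048576 my-squid-image

# Workaround 4: pin max_filedescriptors in squid.conf. A temporary file
# is used here; the real file is /etc/squid/squid.conf.
conf=$(mktemp)
echo "max_filedescriptors 1048576" >> "$conf"
grep '^max_filedescriptors' "$conf"
rm -f "$conf"
```

Remember that changing max_filedescriptors requires a squid restart, as noted in the config excerpt above.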
Finally, when it comes to providing a fix for this issue, it seems that it should be discussed with squid upstream. If we were to set a default value for max_filedescriptors in the squid configuration file, what should that value be? And how would it affect users hitting the same issue on devices with low memory limits?