Default worker_connections too high for default system

Bug #673366 reported by C Snover on 2010-11-10
This bug affects 1 person

Affects: nginx (Debian), Status: Fix Released
Affects: nginx (Ubuntu), Assigned to: Michael Lustfield

Bug Description

Binary package hint: nginx

The default maximum number of worker_connections for nginx is 1024. The OS default maximum number of file descriptors for the www-data user is also 1024. Each connection in nginx uses one file descriptor, so when the connection count approaches 1024, nginx runs out of file descriptors and starts flooding error.log (at a rate of about 1MB/sec) with alerts (“accept() failed (24: Too many open files)”). On a busy server, this quickly fills the disk.

www-data should either have its file descriptor limit raised by default to something higher than 1024, or the default number of worker_connections should be reduced to a number that is safe with the OS default.
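The two knobs involved can be sketched as follows; this is an illustration, not the packaged default at the time, and the values shown are assumptions for the example:

```nginx
# /etc/nginx/nginx.conf (sketch; values are illustrative)

# Optionally raise the per-worker open-file limit above worker_connections,
# leaving headroom for logs, stdio, and listening sockets.
worker_rlimit_nofile 2048;

events {
    # Keep this safely below the worker's file-descriptor limit.
    worker_connections 768;
}
```

Either side of the mismatch can be fixed: raise the descriptor limit the workers run under, or keep worker_connections below the OS default of 1024.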

Marc Deslauriers (mdeslaur) wrote :

Thanks for taking the time to report this bug and helping to make Ubuntu better. We appreciate the difficulties you are facing, but this appears to be a "regular" (non-security) bug. I have unmarked it as a security issue since this bug does not show evidence of allowing attackers to cross privilege boundaries nor directly cause loss of data/privacy. Please feel free to report any other bugs you may find.

security vulnerability: yes → no
visibility: private → public

I'll look into this further and get the solution back to you as soon as possible.

Changed in nginx (Ubuntu):
assignee: nobody → Michael Lustfield (mtecknology)
Mahyuddin Susanto (udienz) wrote :

How about decreasing worker_connections to 512? Is that possible? I have attached a debdiff for the nginx package in lucid.

summary: - default settings are dangerous
+ Default worker_connections too high for default system

This is not a security issue, agreed.

The issue is that the system's default limit is set to 1024. That limit should not be changed. Instead, worker_connections should be reduced to take into account the extra file descriptors that are needed for stdin, stdout, stderr, and log files. For this reason, I feel that 768 is reasonable.

I have committed this change upstream. From now on, the error will instead be:
  2010/11/30 12:00:44 [alert] 17831#0: 768 worker_connections are not enough

This will mean that any remaining errors are due to limits set by Nginx and not by the system. Therefore, it will be up to the administrator of that system to deal with the remainder of the configuration.

Oops, I made a typo in the original report. Sorry!

The speed of log writing in our scenario (with about 1600 concurrent connections) was 17MB/sec, not 1MB/sec. As such, I still feel like this is a potential security issue, since it appears that it is trivial for even low levels of traffic to nginx to quickly exhaust a server’s disk space. A single, malicious attacker could easily cause unrelated applications to crash/corrupt data just by opening a couple thousand connections to a server and waiting for nginx to fill up the disk.

If the speed of the log writing increases with the number of connection attempts (1 alert per connection), fewer than 4000 connections could cause 100MB/s of log data to be written, resulting in disk space exhaustion on an 80GB disk in under 14 minutes. In our case, with 1600 connections, disk space would have run out in about 75 minutes if we hadn’t managed to catch it quickly.
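The arithmetic behind those timelines can be checked quickly with shell arithmetic, using the numbers from this report (an 80GB disk, log rates of 100MB/s and 17MB/s):

```shell
# Sanity-check the disk-exhaustion timeline from the report (sizes in MB).
disk_mb=$((80 * 1024))          # 80 GB disk

# At 100 MB/s of log writes (~4000 connections):
secs_fast=$((disk_mb / 100))
echo "$((secs_fast / 60)) min"  # prints "13 min", i.e. under 14 minutes

# At 17 MB/s (the observed rate with ~1600 connections):
secs_slow=$((disk_mb / 17))
echo "$((secs_slow / 60)) min"  # prints "80 min", in the range of the ~75 minutes reported
```

The small gap between the computed 80 minutes and the reported 75 presumably comes from rounding the observed write rate.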

Brian Murray (brian-murray) wrote :

@Michael - Where have you committed this change? Will it appear in a specific version of nginx? Is there an upstream bug that can be watched?

Changed in nginx (Ubuntu):
status: New → Triaged
Changed in nginx (Debian):
status: Unknown → New

Brian: I added the bug watch to this bug report. Thanks for reminding me.

It will be part of 0.8.53-2.

Snover: It's really the job of the administrator to keep track of the system. Logs are a huge part of that. While we should be making sure the package defaults don't conflict with the system defaults, it is not our place to know how many connections you will be handling. If you get into that much load, you should know how to configure your system to deal with it.

The only thing I could think of would be rate limiting in the logs as well as a counter that tells you how often the last message was repeated. I don't know that this would be possible though, given how Nginx handles outputting to a log file.

Either way, this is not a security issue. The same thing could happen if (on a default install with openssh-server) a malicious user attempted many SSH login attempts from distributed systems. The logs would fill up just the same. Of course, we use tools such as denyhosts to deal with this. The same applies (firewalls/routers) to Nginx.

“If you get into that much load, you should know how to configure your system to deal with it.”

I don’t feel it is unreasonable to expect that the default OS settings won’t cause the disk to fill up (and, in doing so, cause other stuff to crash) when the default connection limit is reached. In our case, someone started hotlinking to a resource which caused an abnormally high number of connections; normally, this server is very low-load and doesn’t see more than 10 concurrent connections at any given time.

What I am hearing from you is that it’s dangerous and unwise to assume that Ubuntu can provide sane settings out of the box, and it was foolish of me to not plan to receive 100 times the normal amount of traffic. While you are entitled to this opinion, it is not a position that I understand or agree with. If that’s not what you intended, I apologise for misunderstanding, and hope you can clarify.


What exactly do you think should be done so you don't have to monitor what's going on in your own system? The worker_connections will be reduced to 768 in the next version of this package which is scheduled for around Friday.

Do you expect Nginx to tell you how to configure your system? With this change, it'll tell you that in the logs.

Read my analogy again about SSH. It's the exact same thing.

Changed in nginx (Debian):
status: New → Fix Committed

This is being dealt with upstream. It will be taken care of before Natty release. Please see bug 692087 and note the sync will pull in the resolved issue.

tags: added: regression-proposed
tags: removed: regression-proposed

Marking invalid against upstream Nginx (no distribution) because the source code should not need to consider the defaults of every distribution. When installing from source, it's the job of the administrator to know which defaults will need to be changed.

Changed in nginx:
status: New → Invalid

This has been fixed in Debian. I don't see a need for this bug to be SRU'ed. However, the new version of Nginx could be proposed for backports.

Changed in nginx (Ubuntu):
status: Triaged → Fix Committed
Changed in nginx (Debian):
status: Fix Committed → Fix Released

Marking as Fix Released as this has been resolved in Debian and the package has since been sync'ed to Ubuntu.

Changed in nginx (Ubuntu):
status: Fix Committed → Fix Released