large performance impact of high fd limit on oslo.rootwrap
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
oslo.rootwrap | Fix Released | Undecided | Dirk Mueller |
Bug Description
When the default file descriptor ulimit for "root" is set to a high value and PTI ("Meltdown") mitigations are enabled, the Python 2.x close_fds=True code path is excessively slow, because it effectively does something like:

```python
for i in xrange(3, fd_limit):
    os.close(i)
```

With the limit at 1 million this means an equal number of close() syscalls, which takes around 400ms on reasonable mid-range hardware.
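To make the per-syscall cost concrete, one could time a loop of close() calls on fd numbers that are not open, roughly what the close_fds loop does for the unused part of the range. This is a rough, hypothetical micro-benchmark, not the project's benchmark.py; the function name and `base` parameter are made up for illustration:

```python
import os
import time

def time_close_syscalls(n, base=500000):
    """Issue n close() syscalls on fd numbers that are almost
    certainly not open (each fails with EBADF but still enters the
    kernel) and return the elapsed time in seconds."""
    start = time.perf_counter()
    for fd in range(base, base + n):
        try:
            os.close(fd)
        except OSError:
            pass  # EBADF: the fd was not open, as expected
    return time.perf_counter() - start
```

On a PTI-enabled kernel, scaling `n` toward 1,000,000 makes the roughly linear syscall overhead visible.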
We could mitigate this by making oslo.rootwrap Python 3 only, by removing close_fds, by reimplementing close_fds with slightly more intelligent code on Linux (e.g. traversing /proc/self/fd and closing only the fds that are actually open instead of blindly closing all possible ones), or by simply enforcing a low fd ulimit and leaving the code as is.
Changed in oslo.rootwrap:
assignee: nobody → Dirk Mueller (dmllr)
status: New → In Progress
Reviewed: https://review.openstack.org/607951
Committed: https://git.openstack.org/cgit/openstack/oslo.rootwrap/commit/?id=c0a86998203315858721a7b2c8ab75fbf5cd51d9
Submitter: Zuul
Branch: master

commit c0a86998203315858721a7b2c8ab75fbf5cd51d9
Author: Dirk Mueller <email address hidden>
Date: Thu Oct 4 14:37:25 2018 +0200
Run rootwrap with lower fd ulimit by default
On Python 2.x, a subprocess.Popen() with close_fds=True will
fork and then close filedescriptors in range(3,
os.sysconf("SC_OPEN_MAX")), which thanks to Kernel PTI (Kaiser
patches) is significantly slower in 2018 when the range is very
large. With a soft limit of 1048576, benchmark.py reports an
overhead of ~400ms without this patch and 2ms with the patch
applied. This patch adds a configuration option and sets a more
sensible default of a 1024 file descriptor limit.
Closes-Bug: 1796267
Change-Id: Idd98c183eca3e2df8648fc0f37d27fe9cc6d0563
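The approach the patch describes, lowering the process's own fd soft limit so that any subsequent close_fds loop over range(3, SC_OPEN_MAX) stays short, could be sketched like this. The function name and structure are illustrative, not the actual oslo.rootwrap code:

```python
import resource

def lower_fd_soft_limit(nofile=1024):
    """Reduce this process's RLIMIT_NOFILE soft limit (never raise it).

    sysconf("SC_OPEN_MAX") reflects the soft limit, so after this a
    Python 2 style close_fds loop only has ~nofile fds to walk.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft > nofile:
        # The soft limit may not exceed the hard limit.
        resource.setrlimit(resource.RLIMIT_NOFILE, (min(nofile, hard), hard))
```

Lowering only the soft limit keeps the change reversible within the process (an unprivileged process may raise the soft limit back up to the hard limit if it ever needs more fds).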