We have seen this for many months now. The only workaround we have found is, as mentioned, to reboot before memory exhaustion crashes the machine.
The fix in the release quoted below did not work for us.
"Processes that open and close multiple files may end up setting this
oo_last_closed_stid without freeing what was previously pointed to.
This can result in a major leak, visible for example by watching the
nfsd4_stateids line of /proc/slabinfo"
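The commit message above says the leak is visible on the nfsd4_stateids line of /proc/slabinfo. A minimal sketch of how one might watch that counter over time (the function name is my own, and it assumes the standard slabinfo layout where the second field of a line is the active-object count):

```shell
#!/bin/sh
# Print the active-object count from the nfsd4_stateids slab line.
# Takes an optional file argument so it can be tested against sample
# data; defaults to /proc/slabinfo (usually root-readable only).
nfsd4_stateid_count() {
    # slabinfo columns: <name> <active_objs> <num_objs> ...
    awk '$1 == "nfsd4_stateids" { print $2 }' "${1:-/proc/slabinfo}"
}
```

One could then run something like `watch -n 60 'nfsd4_stateid_count'` on the server and log the value; a count that only ever grows while NFS clients open and close files would match the leak described above.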
This micro machine on EC2 will soon crash. We don't have that many files on NFS and mostly read from them. We also load a few large files (700 MB or so, read once) when processes start.
uname -a
Linux ip-10-48-5-128 3.2.0-36-virtual #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
OBJS    ACTIVE  USE   OBJ SIZE  SLABS  OBJ/SLAB  CACHE SIZE  NAME
855315  855315  100%  0.53K     57021  15        456168K     idr_layer_cache
55040   55040   100%  0.02K     215    256       860K        kmalloc-16
Is there any way to solve this on the client side by changing how read/write operations are done?